In reinforcement learning (RL), rewards are scalar signals provided to an agent after it takes an action in a given state. They serve as feedback that directs the agent towards desirable behaviors within its environment. The primary objective of an RL agent is to maximize the expected cumulative reward (the return), often applying a discount factor so that rewards received sooner count for more than rewards received later. By effectively leveraging these rewards, the agent learns which actions lead to beneficial outcomes, refining its policy and improving its overall performance.
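
To make the idea of discounted cumulative reward concrete, here is a minimal Python sketch that computes the discounted return of a reward sequence. The reward values and the discount factor value are illustrative assumptions, not taken from the text.

```python
def discounted_return(rewards, gamma=0.9):
    """Compute the discounted return: r_0 + gamma*r_1 + gamma^2*r_2 + ...

    rewards: sequence of scalar rewards received over time.
    gamma: discount factor in [0, 1); smaller values weight immediate
           rewards more heavily relative to later ones.
    """
    total = 0.0
    for k, r in enumerate(rewards):
        total += (gamma ** k) * r
    return total


# Example: three equal rewards; later rewards contribute less to the total.
print(discounted_return([1.0, 1.0, 1.0], gamma=0.9))  # 1 + 0.9 + 0.81 = 2.71
```

With a discount factor close to 1 the agent values long-term rewards almost as much as immediate ones, while a smaller discount factor makes it effectively short-sighted.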