5.3.2 - Objective of MDPs
Practice Questions
Test your understanding with targeted questions
What does a policy (π) refer to in the context of MDPs?
💡 Hint: Think about how decisions are made from different situations.
Why is the discount factor (γ) important?
💡 Hint: Consider how immediate choices affect long-term outcomes.
Interactive Quizzes
Quick quizzes to reinforce your learning
What is the primary objective of MDPs?
💡 Hint: Think about what you'd want to achieve in an uncertain environment.
True or False: The discount factor γ must always be between 0 and 1.
💡 Hint: Recall the mathematical definition of γ.
Challenge Problems
Push your limits with advanced challenges
Consider an MDP with two states, A and B. The action taken in state A leads to state B with a 70% chance and remains in A with a 30% chance. If the rewards are 5 for reaching B and 1 for remaining in A, calculate the expected utility for taking the action in state A.
💡 Hint: Consider how probabilities of transitions impact rewards.
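One way to check your answer: the expected utility of an action is the probability-weighted sum of the rewards over its possible outcomes. A minimal sketch in Python, using the transition probabilities and rewards exactly as stated in the problem:

```python
# Expected one-step utility of the action taken in state A:
# sum over successor outcomes of P(outcome) * reward(outcome).
transitions = [
    (0.7, 5),  # move to B with probability 0.7, reward 5
    (0.3, 1),  # remain in A with probability 0.3, reward 1
]

expected_utility = sum(p * r for p, r in transitions)
print(expected_utility)  # 0.7 * 5 + 0.3 * 1
```

The probabilities must sum to 1 for this to be a valid expectation, which they do here (0.7 + 0.3 = 1.0).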
If an agent's discount factor is 0.9 and the immediate rewards for two available actions are 4 and 6, what is the expected utility of each action when a single future reward of 10 follows from the current state?
💡 Hint: Remember, the discount factor reduces the value of future rewards.
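A minimal sketch of the discounting computation, assuming the utility of an action is its immediate reward plus γ times the single future reward (one step ahead, as the problem describes):

```python
# Discounted utility: immediate reward + gamma * future reward.
gamma = 0.9
future_reward = 10

def discounted_utility(immediate_reward):
    # The future reward is worth only gamma times its face value
    # when viewed from the current state.
    return immediate_reward + gamma * future_reward

u_first = discounted_utility(4)   # 4 + 0.9 * 10
u_second = discounted_utility(6)  # 6 + 0.9 * 10
print(u_first, u_second)
```

Note how the discount factor shrinks the future reward from 10 to 9 in both cases, so the two utilities differ only by the gap in immediate rewards.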