This course is your entry point into the exciting field of Reinforcement Learning, where artificial intelligence agents learn to make sequential decisions through trial and error. Specifically, this course focuses on Multi-Armed Bandit problems and the hands-on implementation of algorithmic strategies for balancing exploration and exploitation. Whenever you need to consistently make the best choice from a limited set of options over time, such as deciding which of several ad variants to show each visitor, you are dealing with a Multi-Armed Bandit problem, and this course teaches you what you need to know to build practical agents for such situations.
Through concise explanations, this course teaches you how to confidently translate seemingly intimidating mathematical formulas into Python code. We understand that not everyone is comfortable with mathematics, so this course intentionally avoids the maths unless it is necessary. And when mathematics is unavoidable, it is presented so that anyone with basic algebra skills can follow it, translate it into code, and build useful intuitions along the way.
Some of the algorithmic strategies taught in this course are Epsilon Greedy, Softmax Exploration, Optimistic Initialization, Upper Confidence Bounds, and Thompson Sampling. With these tools under your belt, you will be well equipped to build and deploy AI agents that can handle critical business operations under uncertainty.
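To give a feel for how these strategies translate into code, here is a minimal sketch of Epsilon Greedy on a simulated three-armed Bernoulli bandit. The arm probabilities, epsilon value, and step count are hypothetical illustration values, not material from the course.

```python
import random

# Hypothetical reward probabilities for a 3-armed Bernoulli bandit
# (illustration values only, not from the course).
TRUE_PROBS = [0.30, 0.55, 0.45]
EPSILON = 0.1          # fraction of steps spent exploring
N_STEPS = 10_000

counts = [0] * len(TRUE_PROBS)    # pulls per arm
values = [0.0] * len(TRUE_PROBS)  # running mean reward per arm

for _ in range(N_STEPS):
    if random.random() < EPSILON:
        # Explore: pick a random arm.
        arm = random.randrange(len(TRUE_PROBS))
    else:
        # Exploit: pick the arm with the highest estimated value.
        arm = max(range(len(TRUE_PROBS)), key=lambda a: values[a])

    # Pull the arm: Bernoulli reward (1 with probability TRUE_PROBS[arm]).
    reward = 1.0 if random.random() < TRUE_PROBS[arm] else 0.0

    # Incremental update of the arm's running mean reward estimate.
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]

print("Estimated arm values:", [round(v, 3) for v in values])
print("Best arm found:", values.index(max(values)))
```

With probability epsilon the agent explores a random arm; otherwise it exploits the arm with the best estimate so far. The other strategies in the course replace this simple rule with softer or more informed exploration.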
To bridge the gap between theory and application, I've updated this course to include a section showing how to apply the MAB algorithms in robotics using the LEGO Mindstorms EV3. I'll soon upload a section showing how to apply the algorithms taught in this course to advertisement optimization.