In ancient games such as chess and Go, even the most brilliant players can improve by studying strategies produced by a machine. Robotic systems practice their own movements. In arcade games, learning agents reach superhuman levels within a few hours. How do these spectacular reinforcement learning algorithms work?
With easy-to-understand explanations and clear examples in Java and Greenfoot, you can acquire the principles of reinforcement learning and apply them in your own intelligent agents. Greenfoot (M. Kölling, King's College London) and the hamster model (D. Bohles, University of Oldenburg) are simple but powerful didactic tools developed to convey basic programming concepts.
The result is an accessible introduction to machine learning that concentrates on reinforcement learning. It takes the reader through the steps of developing intelligent agents, from the very basics to advanced aspects, touching on a variety of machine learning algorithms along the way, and invites the reader to play along, experiment, and add ideas of their own.
Author(s): Uwe Lorenz
Publisher: Springer
Year: 2022
Language: English
Pages: 194
City: Cham
Preface
Introduction
Contents
1: Reinforcement Learning as a Subfield of Machine Learning
1.1 Machine Learning as Automated Processing of Feedback from the Environment
1.2 Machine Learning
1.3 Reinforcement Learning with Java
Bibliography
2: Basic Concepts of Reinforcement Learning
2.1 Agents
2.2 The Policy of the Agent
2.3 Evaluation of States and Actions (Q-Function, Bellman Equation)
Bibliography
3: Optimal Decision-Making in a Known Environment
3.1 Value Iteration
3.1.1 Goal-Oriented State Valuation (“Backward Induction”)
3.1.2 Policy-Based State Valuation (Reward Prediction)
3.2 Iterative Policy Search
3.2.1 Direct Policy Improvement
3.2.2 Mutual Improvement of Policy and Value Function
3.3 Optimal Policy in a Board Game Scenario
3.4 Summary
Bibliography
4: Decision-Making and Learning in an Unknown Environment
4.1 Exploration vs. Exploitation
4.2 Retroactive Processing of Experience (“Model-Free Reinforcement Learning”)
4.2.1 Goal-Oriented Learning (“Value-Based”)
Subsequent Evaluation of Complete Episodes (“Monte Carlo” Method)
Immediate Valuation Using the Temporal Difference (Q-Learning and SARSA)
Consideration of the Action History (Eligibility Traces)
4.2.2 Policy Search
Monte Carlo Policy Search
Evolutionary Strategies
Monte Carlo Policy Gradient (REINFORCE)
4.2.3 Combined Methods (Actor-Critic)
“Actor-Critic” Policy Gradients
Technical Improvements to the Actor-Critic Architecture
Feature Vectors and Partially Observable Environments
4.3 Exploration with Predictive Simulations (“Model-Based Reinforcement Learning”)
4.3.1 Dyna-Q
4.3.2 Monte Carlo Rollout
4.3.3 Artificial Curiosity
4.3.4 Monte Carlo Tree Search (MCTS)
4.3.5 Remarks on the Concept of Intelligence
4.4 Systematics of the Learning Methods
Bibliography
5: Artificial Neural Networks as Estimators for State Values and Action Selection
5.1 Artificial Neural Networks
5.1.1 Pattern Recognition with the Perceptron
5.1.2 The Adaptability of Artificial Neural Networks
5.1.3 Backpropagation Learning
5.1.4 Regression with Multilayer Perceptrons
5.2 State Evaluation with Generalizing Approximations
5.3 Neural Estimators for Action Selection
5.3.1 Policy Gradient with Neural Networks
5.3.2 Proximal Policy Optimization
5.3.3 Evolutionary Strategy with a Neural Policy
Bibliography
6: Guiding Ideas in Artificial Intelligence over Time
6.1 Changing Guiding Ideas
6.2 On the Relationship Between Humans and Artificial Intelligence
Bibliography