This book surveys current and future approaches to generating video game content with machine learning, a practice known as Procedural Content Generation via Machine Learning (PCGML). Machine learning is having a major impact on many industries, including the video game industry. PCGML addresses the use of computers to generate new types of content for video games (game levels, quests, characters, etc.) by learning from existing content. The authors illustrate how PCGML is poised to transform the video game industry and provide the first-ever beginner-focused guide to PCGML. The book features an accessible introduction to machine learning topics, and readers will gain a broad understanding of PCGML approaches currently employed in academia and industry. The authors provide guidance on how best to set up a PCGML project and identify open problems appropriate for a research project or thesis. The book is written with machine learning and games novices in mind and includes discussions of practical and ethical considerations, along with resources and guidance for starting a new PCGML project.
Author(s): Matthew Guzdial, Sam Snodgrass, Adam J. Summerville
Series: Synthesis Lectures on Games and Computational Intelligence
Publisher: Springer
Year: 2022
Language: English
Pages: 245
City: Cham
Preface
Acknowledgments
Contents
About the Authors
1 Introduction
1.1 Procedural Content Generation
1.2 Machine Learning
1.3 History of PCGML
1.4 Who is this Book For?
1.5 Who is this Book Not For?
1.6 Book Outline
2 Classical PCG
2.1 What is Content?
2.2 Constructive Approaches
2.2.1 Noise
2.2.2 Rules
2.2.3 Grammars
2.3 Constraint-Based Approaches
2.4 Search-Based Approaches
2.4.1 Evolutionary PCG
2.4.2 Quality-Diversity PCG
2.5 Takeaways
3 An Introduction to ML Through PCG
3.1 Data and Hypothesis Space
3.2 Loss Criterion
3.3 Underfitting and Overfitting/Variance and Bias
3.4 Takeaways
4 PCGML Process Overview
4.1 Produce or Acquire Training Data
4.1.1 Existing Training Data
4.1.2 Producing Training Data
4.2 Train the Model
4.2.1 Output Size
4.2.2 Representation Complexity
4.2.3 Train, Validation, and Test Splits
4.3 Generate Content
4.3.1 Exploration vs. Exploitation in Generation
4.3.2 Postprocessing
4.4 Evaluate the Output
4.5 Takeaways
5 Constraint-Based PCGML Approaches
5.1 Learning Platformer Level Constraints
5.2 Learning Quest Constraints
5.3 WaveFunctionCollapse
5.3.1 Extract
5.3.2 Observe
5.3.3 Propagate
5.3.4 Extending WaveFunctionCollapse
5.4 Takeaways
6 Probabilistic PCGML Approaches
6.1 What are Probabilities?
6.1.1 Learning Platformer Level Probabilities
6.2 What are Conditional Probabilities?
6.2.1 Learning Platformer Level Conditional Probabilities
6.3 Markov Models
6.3.1 Markov Chains
6.3.2 Multi-dimensional Markov Chains
6.3.3 Markov Random Fields
6.3.4 Other Markov Models
6.4 Bayesian Networks
6.5 Latent Variables
6.5.1 Clustering
6.6 Takeaways
7 Neural Networks—Introduction
7.1 Stochastic Gradient Descent
7.2 Activation Functions
7.3 Artificial Neural Networks
7.4 Case Study: NN 2D Markov Chain
7.5 Case Study: NN 1D Regression Markov Chain
7.6 Case Study: NN 2D AutoEncoder
7.7 Takeaways
8 Sequence-Based DNN PCGML
8.1 Recurrent Neural Networks
8.2 Gated Recurrent Unit and Long Short-Term Memory RNNs
8.2.1 Long Short-Term Memory RNNs
8.3 Sequence-Based Case Study—Card Generation
8.4 Sequence-to-Sequence Recurrent Neural Networks
8.5 Transformer Models
8.5.1 Case Study—Sequence to Sequence Transformer for Card Generation
8.6 Practical Considerations
8.7 Takeaways
9 Grid-Based DNN PCGML
9.1 Convolutions
9.2 Padding and Stride Behavior
9.3 Generative Adversarial Networks
9.4 Practical Considerations
9.5 Case Study—CNN Variational Autoencoder for Level Generation
9.6 Case Study—GANs for Sprite Generation
9.7 Takeaways
10 Reinforcement Learning PCG
10.1 One-Armed Bandits
10.2 Pixel Art Example
10.3 Markov Decision Process (MDP)
10.4 MDP Example
10.5 Tabular Q-Learning
10.5.1 Rollout Example
10.5.2 Q-Update
10.5.3 Q-Update Example
10.5.4 Rollout Example 2
10.5.5 Tabular Q-learning Wrap-up
10.6 Deep Q-Learning
10.7 Application Examples
10.8 Takeaways
11 Mixed-Initiative PCGML
11.1 Existing PCG Tools in the Wild
11.1.1 Classical PCG Tools
11.1.2 Microsoft FlightSim
11.1.3 Puzzle-Maker
11.2 Structuring the Interaction
11.2.1 Integrating with the PCGML Pipeline
11.2.2 Understanding the Model
11.2.3 Understanding the User
11.3 Design Axes
11.3.1 AI vs. User Autonomy
11.3.2 Static vs. Dynamic Model Systems
11.4 Takeaways
12 Open Problems
12.1 Identifying Open Problems
12.2 Problem Formulation
12.2.1 Underexplored Content Types
12.2.2 Novel Content Generation
12.2.3 Controllability
12.3 Input
12.3.1 Data Sources
12.3.2 Representations
12.3.3 Data Augmentation
12.4 Models and Training
12.5 Output
12.5.1 Applications
12.5.2 Evaluation
12.6 Discussion
13 Resources and Conclusions
13.1 PCGML Resources
13.1.1 Other Textbooks
13.1.2 Code Repositories
13.1.3 Libraries
13.1.4 Datasets
13.1.5 Competitions and Jams
13.1.6 Venues
13.1.7 Social Media
13.2 Conclusions
References