In Build a Large Language Model (From Scratch), bestselling author Sebastian Raschka guides you step by step through creating your own LLM. Each stage is explained with clear text, diagrams, and examples. You’ll go from the initial design and creation, to pretraining on a general corpus, and on to fine-tuning for specific tasks.
Build a Large Language Model (From Scratch) teaches you how to:
• Plan and code all the parts of an LLM
• Prepare a dataset suitable for LLM training
• Fine-tune LLMs for text classification and on your own data
• Use human feedback to ensure your LLM follows instructions
• Load pretrained weights into an LLM
Build a Large Language Model (From Scratch) takes you inside the AI black box to tinker with the internal systems that power generative AI. As you work through each key stage of LLM creation, you’ll develop an in-depth understanding of how LLMs work, their limitations, and their customization methods. Your LLM can be developed on an ordinary laptop and used as your own personal assistant.
About the technology
Physicist Richard P. Feynman reportedly said, “I don’t understand anything I can’t build.” Based on this same powerful principle, bestselling author Sebastian Raschka guides you step by step as you build a GPT-style LLM that you can run on your laptop. This is an engaging book that covers each stage of the process, from planning and coding to training and fine-tuning.
About the book
Build a Large Language Model (From Scratch) is a practical and eminently satisfying hands-on journey into the foundations of generative AI. Without relying on any existing LLM libraries, you’ll code a base model, evolve it into a text classifier, and ultimately create a chatbot that can follow your conversational instructions. And you’ll really understand it because you built it yourself!
What’s inside
• Plan and code an LLM comparable to GPT-2
• Load pretrained weights
• Construct a complete training pipeline
• Fine-tune your LLM for text classification
• Develop LLMs that follow human instructions
About the reader
Readers need intermediate Python skills and some knowledge of machine learning. The LLM you create will run on any modern laptop and can optionally utilize GPUs.
About the author
Sebastian Raschka is a Staff Research Engineer at Lightning AI, where he works on LLM research and develops open-source software.
Author(s): Sebastian Raschka
Edition: 1
Publisher: Manning
Year: 2024
Language: English
Commentary: Publisher's PDF | Published: October 29, 2024
Pages: 368
City: Shelter Island, NY
Tags: Machine Learning; Deep Learning; Natural Language Processing; Python; PyTorch; Text Processing; Attention Mechanisms; GPT; Large Language Models; LoRA
Build a Large Language Model (From Scratch)
brief contents
contents
preface
acknowledgments
about this book
Who should read this book
How this book is organized: A roadmap
About the code
liveBook discussion forum
Other online resources
about the author
about the cover illustration
1 Understanding large language models
1.1 What is an LLM?
1.2 Applications of LLMs
1.3 Stages of building and using LLMs
1.4 Introducing the transformer architecture
1.5 Utilizing large datasets
1.6 A closer look at the GPT architecture
1.7 Building a large language model
Summary
2 Working with text data
2.1 Understanding word embeddings
2.2 Tokenizing text
2.3 Converting tokens into token IDs
2.4 Adding special context tokens
2.5 Byte pair encoding
2.6 Data sampling with a sliding window
2.7 Creating token embeddings
2.8 Encoding word positions
Summary
3 Coding attention mechanisms
3.1 The problem with modeling long sequences
3.2 Capturing data dependencies with attention mechanisms
3.3 Attending to different parts of the input with self-attention
3.3.1 A simple self-attention mechanism without trainable weights
3.3.2 Computing attention weights for all input tokens
3.4 Implementing self-attention with trainable weights
3.4.1 Computing the attention weights step by step
3.4.2 Implementing a compact self-attention Python class
3.5 Hiding future words with causal attention
3.5.1 Applying a causal attention mask
3.5.2 Masking additional attention weights with dropout
3.5.3 Implementing a compact causal attention class
3.6 Extending single-head attention to multi-head attention
3.6.1 Stacking multiple single-head attention layers
3.6.2 Implementing multi-head attention with weight splits
Summary
4 Implementing a GPT model from scratch to generate text
4.1 Coding an LLM architecture
4.2 Normalizing activations with layer normalization
4.3 Implementing a feed forward network with GELU activations
4.4 Adding shortcut connections
4.5 Connecting attention and linear layers in a transformer block
4.6 Coding the GPT model
4.7 Generating text
Summary
5 Pretraining on unlabeled data
5.1 Evaluating generative text models
5.1.1 Using GPT to generate text
5.1.2 Calculating the text generation loss
5.1.3 Calculating the training and validation set losses
5.2 Training an LLM
5.3 Decoding strategies to control randomness
5.3.1 Temperature scaling
5.3.2 Top-k sampling
5.3.3 Modifying the text generation function
5.4 Loading and saving model weights in PyTorch
5.5 Loading pretrained weights from OpenAI
Summary
6 Fine-tuning for classification
6.1 Different categories of fine-tuning
6.2 Preparing the dataset
6.3 Creating data loaders
6.4 Initializing a model with pretrained weights
6.5 Adding a classification head
6.6 Calculating the classification loss and accuracy
6.7 Fine-tuning the model on supervised data
6.8 Using the LLM as a spam classifier
Summary
7 Fine-tuning to follow instructions
7.1 Introduction to instruction fine-tuning
7.2 Preparing a dataset for supervised instruction fine-tuning
7.3 Organizing data into training batches
7.4 Creating data loaders for an instruction dataset
7.5 Loading a pretrained LLM
7.6 Fine-tuning the LLM on instruction data
7.7 Extracting and saving responses
7.8 Evaluating the fine-tuned LLM
7.9 Conclusions
7.9.1 What’s next?
7.9.2 Staying up to date in a fast-moving field
7.9.3 Final words
Summary
appendix A—Introduction to PyTorch
A.1 What is PyTorch?
A.1.1 The three core components of PyTorch
A.1.2 Defining deep learning
A.1.3 Installing PyTorch
A.2 Understanding tensors
A.2.1 Scalars, vectors, matrices, and tensors
A.2.2 Tensor data types
A.2.3 Common PyTorch tensor operations
A.3 Seeing models as computation graphs
A.4 Automatic differentiation made easy
A.5 Implementing multilayer neural networks
A.6 Setting up efficient data loaders
A.7 A typical training loop
A.8 Saving and loading models
A.9 Optimizing training performance with GPUs
A.9.1 PyTorch computations on GPU devices
A.9.2 Single-GPU training
A.9.3 Training with multiple GPUs
Summary
appendix B—References and further reading
Chapter 1
Chapter 2
Chapter 3
Chapter 4
Chapter 5
Chapter 6
Chapter 7
Appendix A
appendix C—Exercise solutions
Chapter 2
Exercise 2.1
Exercise 2.2
Chapter 3
Exercise 3.1
Exercise 3.2
Exercise 3.3
Chapter 4
Exercise 4.1
Exercise 4.2
Exercise 4.3
Chapter 5
Exercise 5.1
Exercise 5.2
Exercise 5.3
Exercise 5.4
Exercise 5.5
Exercise 5.6
Chapter 6
Exercise 6.1
Exercise 6.2
Exercise 6.3
Chapter 7
Exercise 7.1
Exercise 7.2
Exercise 7.3
Exercise 7.4
Appendix A
Exercise A.1
Exercise A.2
Exercise A.3
Exercise A.4
appendix D—Adding bells and whistles to the training loop
D.1 Learning rate warmup
D.2 Cosine decay
D.3 Gradient clipping
D.4 The modified training function
appendix E—Parameter-efficient fine-tuning with LoRA
E.1 Introduction to LoRA
E.2 Preparing the dataset
E.3 Initializing the model
E.4 Parameter-efficient fine-tuning with LoRA
index
Symbols
Numerics
A
B
C
D
E
F
G
I
K
L
M
N
O
P
Q
R
S
T
U
V
W
X
Z