Book description
Learn how to create, train, and tweak large language models (LLMs) by building one from the ground up! In Build a Large Language Model (from Scratch), bestselling author Sebastian Raschka guides you step by step through creating your own LLM. Each stage is explained with clear text, diagrams, and examples. You’ll go from initial design and creation to pretraining on a general corpus, and on to fine-tuning for specific tasks.
Build a Large Language Model (from Scratch) teaches you how to:
- Plan and code all the parts of an LLM
- Prepare a dataset suitable for LLM training
- Fine-tune LLMs for text classification and on your own data
- Use human feedback to ensure your LLM follows instructions
- Load pretrained weights into an LLM
Build a Large Language Model (from Scratch) takes you inside the AI black box to tinker with the internal systems that power generative AI. As you work through each key stage of LLM creation, you’ll develop an in-depth understanding of how LLMs work, their limitations, and their customization methods. Your LLM can be developed on an ordinary laptop and used as your own personal assistant.
About the Technology
Physicist Richard P. Feynman reportedly said, “I don’t understand anything I can’t build.” Based on this same powerful principle, bestselling author Sebastian Raschka guides you step by step as you build a GPT-style LLM that you can run on your laptop. This is an engaging book that covers each stage of the process, from planning and coding to training and fine-tuning.
About the Book
Build a Large Language Model (From Scratch) is a practical and eminently satisfying hands-on journey into the foundations of generative AI. Without relying on any existing LLM libraries, you’ll code a base model, evolve it into a text classifier, and ultimately create a chatbot that can follow your conversational instructions. And you’ll really understand it because you built it yourself!
What's Inside
- Plan and code an LLM comparable to GPT-2
- Load pretrained weights
- Construct a complete training pipeline
- Fine-tune your LLM for text classification
- Develop LLMs that follow human instructions
About the Reader
Readers need intermediate Python skills and some knowledge of machine learning. The LLM you create will run on any modern laptop and can optionally utilize GPUs.
About the Author
Sebastian Raschka is a Staff Research Engineer at Lightning AI, where he works on LLM research and develops open-source software.
The technical editor on this book was David Caswell.
Quotes
Truly inspirational! It motivates you to put your new skills into action.
- Benjamin Muskalla, Senior Engineer, GitHub
The most understandable and comprehensive explanation of language models yet! Its unique and practical teaching style achieves a level of understanding you can’t get any other way.
- Cameron Wolfe, Senior Scientist, Netflix
Sebastian combines deep knowledge with practical engineering skills and a knack for making complex ideas simple. This is the guide you need!
- Chip Huyen, author of Designing Machine Learning Systems and AI Engineering
Definitive, up-to-date coverage. Highly recommended!
- Dr. Vahid Mirjalili, Senior Data Scientist, FM Global
Table of contents
- copyright
- contents
- Build a Large Language Model (From Scratch)
- preface
- acknowledgments
- about this book
- about the author
- about the cover illustration
- 1 Understanding large language models
- 2 Working with text data
- 3 Coding attention mechanisms
- 3.1 The problem with modeling long sequences
- 3.2 Capturing data dependencies with attention mechanisms
- 3.3 Attending to different parts of the input with self-attention
- 3.4 Implementing self-attention with trainable weights
- 3.5 Hiding future words with causal attention
- 3.6 Extending single-head attention to multi-head attention
- 4 Implementing a GPT model from scratch to generate text
- 5 Pretraining on unlabeled data
- 6 Fine-tuning for classification
- 6.1 Different categories of fine-tuning
- 6.2 Preparing the dataset
- 6.3 Creating data loaders
- 6.4 Initializing a model with pretrained weights
- 6.5 Adding a classification head
- 6.6 Calculating the classification loss and accuracy
- 6.7 Fine-tuning the model on supervised data
- 6.8 Using the LLM as a spam classifier
- 7 Fine-tuning to follow instructions
- 7.1 Introduction to instruction fine-tuning
- 7.2 Preparing a dataset for supervised instruction fine-tuning
- 7.3 Organizing data into training batches
- 7.4 Creating data loaders for an instruction dataset
- 7.5 Loading a pretrained LLM
- 7.6 Fine-tuning the LLM on instruction data
- 7.7 Extracting and saving responses
- 7.8 Evaluating the fine-tuned LLM
- 7.9 Conclusions
- appendix A Introduction to PyTorch
- A.1 What is PyTorch?
- A.2 Understanding tensors
- A.3 Seeing models as computation graphs
- A.4 Automatic differentiation made easy
- A.5 Implementing multilayer neural networks
- A.6 Setting up efficient data loaders
- A.7 A typical training loop
- A.8 Saving and loading models
- A.9 Optimizing training performance with GPUs
- appendix B References and further reading
- appendix C Exercise solutions
- appendix D Adding bells and whistles to the training loop
- appendix E Parameter-efficient fine-tuning with LoRA
Product information
- Title: Build a Large Language Model (From Scratch)
- Author(s): Sebastian Raschka
- Release date: September 2024
- Publisher(s): Manning Publications
- ISBN: 9781633437166