Book description
The advancement of Large Language Models (LLMs) has revolutionized the field of Natural Language Processing in recent years. Models such as BERT, T5, and ChatGPT have demonstrated unprecedented performance on a wide range of NLP tasks, from text classification to machine translation. Despite these impressive results, using LLMs effectively remains challenging for many practitioners: the sheer size of these models, combined with a limited understanding of their inner workings, makes it difficult to adapt and optimize them for specific needs.
Quick Start Guide to Large Language Models: Strategies and Best Practices for Using ChatGPT and Other LLMs is a practical guide to using LLMs in NLP. It provides an overview of the key concepts and techniques behind LLMs, explains how these models work, and shows how they can be applied to various NLP tasks. The book also covers advanced topics such as fine-tuning, alignment, and information retrieval, while offering practical tips and tricks for training and optimizing LLMs for specific NLP tasks.
This work addresses a wide range of topics in the field of Large Language Models, including the basics of LLMs, launching an application with proprietary models, fine-tuning GPT-3 with custom examples, prompt engineering, building a recommendation engine, combining Transformers, and deploying custom LLMs to the cloud. It offers an in-depth look at the various concepts, techniques, and tools used in the field of Large Language Models.
Related Learning:
- Book: Quick Start Guide to Large Language Models
- Video: Quick Guide to ChatGPT, Embeddings and Other Large Language Models
- Video: Introduction to Transformer Models for NLP
- Live Training: https://learning.oreilly.com/search/?q=sinan%20ozdemir&type=live-event-series
- Audio: AI Unveiled
Topics covered:
Coding with Large Language Models (LLMs)
Overview of using proprietary models
OpenAI, Embeddings, GPT-3, and ChatGPT
Vector databases and building a neural/semantic information retrieval system
Fine-tuning GPT-3 with custom examples
Prompt engineering with GPT-3 and ChatGPT
Advanced prompt engineering techniques
Building a recommendation engine
Combining Transformers
Deploying custom LLMs to the cloud
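As a taste of the semantic retrieval topic listed above: a real system would embed documents with a neural model (e.g., an embeddings API) and store the vectors in a vector database, then rank by cosine similarity. The sketch below is a deliberately toy stand-in — bag-of-words "embeddings" and an in-memory list instead of a real model and database — just to illustrate the ranking step; nothing here is the book's own code.

```python
# Toy sketch of embedding-based retrieval. A real pipeline would swap
# embed() for a neural embedding model and the list for a vector database;
# the cosine-similarity ranking step stays the same.
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    """Stand-in for a real text-embedding model: bag-of-words counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def search(query: str, docs: list[str]) -> str:
    """Return the document whose embedding is closest to the query's."""
    q = embed(query)
    return max(docs, key=lambda d: cosine(q, embed(d)))

docs = [
    "fine-tuning GPT-3 with custom examples",
    "deploying custom LLMs to the cloud",
    "building a recommendation engine",
]
print(search("deploying my model to the cloud", docs))
```

With a neural embedding model in place of the word counts, the same `search` function would also match paraphrases that share no surface vocabulary with the query — which is the whole point of semantic (rather than keyword) retrieval.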
Table of contents
- Cover Page
- About This eBook
- Halftitle Page
- Title Page
- Copyright Page
- Pearson’s Commitment to Diversity, Equity, and Inclusion
- Contents
- Foreword
- Preface
- Acknowledgments
- About the Author
- I: Introduction to Large Language Models
- II: Getting the Most Out of LLMs
- III: Advanced LLM Usage
- IV: Appendices
- A. LLM FAQs
- The LLM already knows about the domain I’m working in. Why should I add any grounding?
- I just want to deploy a closed-source API. What are the main things I need to look out for?
- I really want to deploy an open-source model. What are the main things I need to look out for?
- Creating and fine-tuning my own model architecture seems hard. What can I do to make it easier?
- I think my model is susceptible to prompt injections or going off task. How do I correct it?
- Why didn’t we talk about third-party LLM tools like LangChain?
- How do I deal with overfitting or underfitting in LLMs?
- How can I use LLMs for non-English languages? Are there any unique challenges?
- How can I implement real-time monitoring or logging to understand the performance of my deployed LLM better?
- What are some things we didn’t talk about in this book?
- B. LLM Glossary
- Transformer Architecture
- Attention Mechanism
- Large Language Model (LLM)
- Autoregressive Language Models
- Autoencoding Language Models
- Transfer Learning
- Prompt Engineering
- Alignment
- Reinforcement Learning from Human Feedback (RLHF)
- Reinforcement Learning from AI Feedback (RLAIF)
- Corpora
- Fine-Tuning
- Labeled Data
- Hyperparameters
- Learning Rate
- Batch Size
- Training Epochs
- Evaluation Metrics
- Incremental/Online Learning
- Overfitting
- Underfitting
- C. LLM Application Archetypes
- Chatbots/Virtual Assistants
- Fine-Tuning a Closed-Source LLM
- Fine-Tuning an Open-Source LLM
- Fine-Tuning a Bi-encoder to Learn New Embeddings
- Fine-Tuning an LLM for Following Instructions Using Both LM Training and Reinforcement Learning from Human / AI Feedback (RLHF & RLAIF)
- Open-Book Question-Answering
- Index
- Permissions and Image Credits
- Code Snippets
Product information
- Title: Quick Start Guide to Large Language Models: Strategies and Best Practices for Using ChatGPT and Other LLMs
- Author(s): Sinan Ozdemir
- Release date: October 2023
- Publisher(s): Addison-Wesley Professional
- ISBN: 9780138199425