Book description
Recent advances in machine learning have lowered the barriers to creating and using ML models, but understanding what these models are actually doing has only become more difficult. We discuss these technological advances with little grasp of how they work, and we struggle to develop a comfortable intuition for their new capabilities.
In this report, authors Austin Eovito and Marina Danilevsky from IBM focus on how to think about neural network-based language model architectures. They guide you through various models (neural networks, RNN/LSTM, encoder-decoder, attention/transformers) to convey a sense of their abilities without getting entangled in the complex details. The report uses simple examples of how humans approach language in specific applications to explore and compare how different neural network-based language models work.
This report will empower you to better understand how machines understand language.
- Dive deep into a language model's basic task of predicting the next word, and use it as a lens for understanding neural network language models (see the sketch after this list)
- Explore encoder-decoder architecture through abstractive text summarization
- Use machine translation to understand the attention mechanism and transformer architecture
- Examine the current state of machine language understanding to discern what these language models do well, as well as their risks and weaknesses
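To make the next-word-prediction framing concrete, here is a minimal, purely illustrative sketch in Python. It uses a toy count-based bigram model rather than the neural architectures the report covers, and the corpus, the `predict_next` helper, and the example sentences are all hypothetical, but the task it performs is the one the report uses as its lens: given the words so far, guess what comes next.

```python
from collections import Counter, defaultdict

# Toy count-based bigram "language model": for each word in a tiny corpus,
# count which words follow it, then predict the next word as the most
# frequent follower. The neural models in the report (RNN/LSTM,
# encoder-decoder, transformers) learn far richer representations of
# context, but the underlying task is the same.

corpus = (
    "the cat sat on the mat . "
    "the cat ate . "
    "the dog ran ."
).split()

followers = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent word seen after `word`, or None if unseen."""
    counts = followers.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

print(predict_next("the"))  # -> 'cat' (seen twice after 'the' in this corpus)
print(predict_next("sat"))  # -> 'on'
```

Where this toy model can only look up a single preceding word, the neural architectures discussed in the report learn to condition their predictions on much longer stretches of context, which is what the encoder-decoder and attention chapters build toward.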
Table of contents
- 1. Introduction: What Is It Like to Be a Language Model?
- 2. Meet the Neural Model Family
- 3. Two Heads Are Better Than One: Encoder-Decoder Architecture
- 4. Choosing What to Care About: Attention and Transformers
- 5. Machine Language Understanding
- About the Authors
Product information
- Title: Language Models in Plain English
- Author(s): Austin Eovito, Marina Danilevsky
- Release date: October 2021
- Publisher(s): O'Reilly Media, Inc.
- ISBN: 9781098109066