Deep Learning with R, Second Edition

Book description

Deep learning from the ground up using R and the powerful Keras library!

In Deep Learning with R, Second Edition you will learn:

  • Deep learning from first principles
  • Image classification and image segmentation
  • Time series forecasting
  • Text classification and machine translation
  • Text generation, neural style transfer, and image generation

Deep Learning with R, Second Edition shows you how to put deep learning into action. It’s based on the revised second edition of François Chollet’s bestselling Deep Learning with Python. All code and examples have been expertly translated to the R language by Tomasz Kalinowski, who maintains the Keras and TensorFlow R packages at RStudio. Novices and experienced ML practitioners alike will love the expert insights, practical techniques, and important theory for building neural networks.

About the Technology
Deep learning has become essential knowledge for data scientists, researchers, and software developers. The R language APIs for Keras and TensorFlow put deep learning within reach for all R users, even if they have no experience with advanced machine learning or neural networks. This book shows you how to get started on core DL tasks like computer vision, natural language processing, and more using R.
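
Getting a working setup typically takes only a few lines of R. Here is a minimal sketch (assuming a recent R installation; install_keras() provisions the underlying TensorFlow libraries behind the scenes):

    # Install the R package, then the underlying libraries
    install.packages("keras")
    library(keras)
    install_keras()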

About the Book
Deep Learning with R, Second Edition is a hands-on guide to deep learning using the R language. As you move through this book, you’ll quickly lock in the foundational ideas of deep learning. The intuitive explanations, crisp illustrations, and clear examples guide you through core DL skills like image processing and text manipulation, and even advanced features like transformers. This revised and expanded new edition is adapted from Deep Learning with Python, Second Edition by François Chollet, the creator of the Keras library.
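
As a taste of the style of code the book builds toward, here is a minimal, illustrative sketch (not an excerpt from the book) of defining and compiling a small Keras model in R:

    library(keras)

    # A small feed-forward classifier built with the sequential API
    model <- keras_model_sequential() %>%
      layer_dense(units = 32, activation = "relu", input_shape = c(784)) %>%
      layer_dense(units = 10, activation = "softmax")

    # Configure the learning process: optimizer, loss, and metric
    model %>% compile(
      optimizer = "rmsprop",
      loss = "categorical_crossentropy",
      metrics = "accuracy"
    )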

What's Inside
  • Image classification and image segmentation
  • Time series forecasting
  • Text classification and machine translation
  • Text generation, neural style transfer, and image generation


About the Reader
For readers with intermediate R skills. No previous experience with Keras, TensorFlow, or deep learning is required.

About the Authors
François Chollet is a software engineer at Google and the creator of Keras. Tomasz Kalinowski is a software engineer at RStudio and the maintainer of the Keras and TensorFlow R packages. J.J. Allaire is the founder of RStudio and the author of the first edition of this book.

Quotes
A must-have for scientists and technicians who want to expand their knowledge.
- Fernando García Sedano, Grupo Epelsa

Whether you are new to deep learning or want to expand your applications in R, there is no better guide.
- Michael Petrey, Boxplot Analytics

The clear illustrations and insightful examples are helpful to anybody, from beginners to experienced deep learning practitioners.
- Edward Lee, Yale University

Outstandingly well written.
- Shahnawaz Ali, King’s College London

Table of contents

  1. Cover
  2. Title Page
  3. Copyright
  4. Preface
  5. Acknowledgments
  6. About This Book
  7. About the Authors
  8. Chapter 1: What Is Deep Learning?
    1. 1.1 Artificial Intelligence, Machine Learning, and Deep Learning
      1. Artificial Intelligence
      2. Machine Learning
      3. Learning Rules and Representations from Data
      4. The “Deep” in “Deep Learning”
      5. Understanding How Deep Learning Works, in Three Figures
      6. What Deep Learning Has Achieved So Far
      7. Don’t Believe the Short-Term Hype
      8. The Promise of AI
    2. 1.2 Before Deep Learning: A Brief History of Machine Learning
      1. Probabilistic Modeling
      2. Early Neural Networks
      3. Kernel Methods
      4. Decision Trees, Random Forests, and Gradient-Boosting Machines
      5. Back to Neural Networks
      6. What Makes Deep Learning Different?
      7. The Modern Machine Learning Landscape
    3. 1.3 Why Deep Learning? Why Now?
      1. Hardware
      2. Data
      3. Algorithms
      4. A New Wave of Investment
      5. The Democratization of Deep Learning
      6. Will It Last?
  9. Chapter 2: The Mathematical Building Blocks of Neural Networks
    1. 2.1 A First Look at a Neural Network
    2. 2.2 Data Representations for Neural Networks
      1. Scalars (Rank 0 Tensors)
      2. Vectors (Rank 1 Tensors)
      3. Matrices (Rank 2 Tensors)
      4. Rank 3 and Higher-Rank Tensors
      5. Key Attributes
      6. Manipulating Tensors in R
      7. The Notion of Data Batches
      8. Real-World Examples of Data Tensors
      9. Vector Data
      10. Time-Series Data or Sequence Data
      11. Image Data
      12. Video Data
    3. 2.3 The Gears of Neural Networks: Tensor Operations
      1. Element-Wise Operations
      2. Broadcasting
      3. Tensor Product
      4. Tensor Reshaping
      5. Geometric Interpretation of Tensor Operations
      6. A Geometric Interpretation of Deep Learning
    4. 2.4 The Engine of Neural Networks: Gradient-Based Optimization
      1. What’s a Derivative?
      2. Derivative of a Tensor Operation: The Gradient
      3. Stochastic Gradient Descent
      4. Chaining Derivatives: The Backpropagation Algorithm
    5. 2.5 Looking Back at Our First Example
      1. Reimplementing Our First Example from Scratch in TensorFlow
      2. Running One Training Step
      3. The Full Training Loop
      4. Evaluating the Model
  10. Chapter 3: Introduction to Keras and TensorFlow
    1. 3.1 What’s TensorFlow?
    2. 3.2 What’s Keras?
    3. 3.3 Keras and TensorFlow: A Brief History
    4. 3.4 Python and R Interfaces: A Brief History
    5. 3.5 Setting Up a Deep Learning Workspace
      1. Installing Keras and TensorFlow
    6. 3.6 First Steps with TensorFlow
      1. TensorFlow Tensors
    7. 3.7 Tensor Attributes
      1. Tensor Shape and Reshaping
      2. Tensor Slicing
      3. Tensor Broadcasting
      4. The tf Module
      5. Constant Tensors and Variables
      6. Tensor Operations: Doing Math in TensorFlow
      7. A Second Look at the GradientTape API
      8. An End-to-End Example: A Linear Classifier in Pure TensorFlow
    8. 3.8 Anatomy of a Neural Network: Understanding Core Keras APIs
      1. Layers: The Building Blocks of Deep Learning
      2. From Layers to Models
      3. The “Compile” Step: Configuring the Learning Process
      4. Picking a Loss Function
      5. Understanding the fit() Method
      6. Monitoring Loss and Metrics on Validation Data
      7. Inference: Using a Model After Training
  11. Chapter 4: Getting Started with Neural Networks: Classification and Regression
    1. 4.1 Classifying Movie Reviews: A Binary Classification Example
      1. The IMDB Dataset
      2. Preparing the Data
      3. Building Your Model
      4. Validating Your Approach
      5. Using a Trained Model to Generate Predictions on New Data
      6. Further Experiments
      7. Wrapping Up
    2. 4.2 Classifying Newswires: A Multiclass Classification Example
      1. The Reuters Dataset
      2. Preparing the Data
      3. Building Your Model
      4. Validating Your Approach
      5. Generating Predictions on New Data
      6. A Different Way to Handle the Labels and the Loss
      7. The Importance of Having Sufficiently Large Intermediate Layers
      8. Further Experiments
      9. Wrapping Up
    3. 4.3 Predicting House Prices: A Regression Example
      1. The Boston Housing Price Dataset
      2. Preparing the Data
      3. Building Your Model
      4. Validating Your Approach Using K-fold Validation
      5. Generating Predictions on New Data
      6. Wrapping Up
  12. Chapter 5: Fundamentals of Machine Learning
    1. 5.1 Generalization: The Goal of Machine Learning
      1. Underfitting and Overfitting
      2. The Nature of Generalization in Deep Learning
    2. 5.2 Evaluating Machine Learning Models
      1. Training, Validation, and Test Sets
      2. Beating a Common-Sense Baseline
      3. Things to Keep in Mind About Model Evaluation
    3. 5.3 Improving Model Fit
      1. Tuning Key Gradient Descent Parameters
      2. Leveraging Better Architecture Priors
      3. Increasing Model Capacity
    4. 5.4 Improving Generalization
      1. Dataset Curation
      2. Feature Engineering
      3. Using Early Stopping
      4. Regularizing Your Model
  13. Chapter 6: The Universal Workflow of Machine Learning
    1. 6.1 Define the Task
      1. Frame the Problem
      2. Collect a Dataset
      3. Understand Your Data
      4. Choose a Measure of Success
    2. 6.2 Develop a Model
      1. Prepare the Data
      2. Choose an Evaluation Protocol
      3. Beat a Baseline
      4. Scale Up: Develop a Model that Overfits
      5. Regularize and Tune Your Model
    3. 6.3 Deploy the Model
      1. Explain Your Work to Stakeholders and Set Expectations
      2. Ship an Inference Model
      3. Monitor Your Model in the Wild
      4. Maintain Your Model
  14. Chapter 7: Working with Keras: A Deep Dive
    1. 7.1 A Spectrum of Workflows
    2. 7.2 Different Ways to Build Keras Models
      1. The Sequential Model
      2. The Functional API
      3. Subclassing the Model Class
      4. Mixing and Matching Different Components
      5. Remember: Use the Right Tool for the Job
    3. 7.3 Using Built-In Training and Evaluation Loops
      1. Writing Your Own Metrics
      2. Using Callbacks
      3. Writing Your Own Callbacks
      4. Monitoring and Visualization with TensorBoard
    4. 7.4 Writing Your Own Training and Evaluation Loops
      1. Training vs. Inference
      2. Low-Level Usage of Metrics
      3. A Complete Training and Evaluation Loop
      4. Make It Fast with tf_function()
      5. Leveraging fit() with a Custom Training Loop
  15. Chapter 8: Introduction to Deep Learning for Computer Vision
    1. 8.1 Introduction to Convnets
      1. The Convolution Operation
      2. The Max-Pooling Operation
    2. 8.2 Training a Convnet from Scratch on a Small Dataset
      1. The Relevance of Deep Learning for Small Data Problems
      2. Downloading the Data
      3. Building the Model
      4. Data Preprocessing
      5. Using Data Augmentation
    3. 8.3 Leveraging a Pretrained Model
      1. Feature Extraction with a Pretrained Model
      2. Fine-Tuning a Pretrained Model
  16. Chapter 9: Advanced Deep Learning for Computer Vision
    1. 9.1 Three Essential Computer Vision Tasks
    2. 9.2 An Image Segmentation Example
    3. 9.3 Modern Convnet Architecture Patterns
      1. Modularity, Hierarchy, and Reuse
      2. Residual Connections
      3. Batch Normalization
      4. Depthwise Separable Convolutions
      5. Putting It Together: A Mini Xception-Like Model
    4. 9.4 Interpreting What Convnets Learn
      1. Visualizing Intermediate Activations
      2. Visualizing Convnet Filters
      3. Visualizing Heatmaps of Class Activation
  17. Chapter 10: Deep Learning for Time Series
    1. 10.1 Different Kinds of Time-Series Tasks
    2. 10.2 A Temperature-Forecasting Example
      1. Preparing the Data
      2. A Common-Sense, Non–Machine Learning Baseline
      3. Let’s Try a Basic Machine Learning Model
      4. Let’s Try a 1D Convolutional Model
      5. A First Recurrent Baseline
    3. 10.3 Understanding Recurrent Neural Networks
      1. A Recurrent Layer in Keras
    4. 10.4 Advanced Use of Recurrent Neural Networks
      1. Using Recurrent Dropout to Fight Overfitting
      2. Stacking Recurrent Layers
      3. Using Bidirectional RNNs
      4. Going Even Further
  18. Chapter 11: Deep Learning for Text
    1. 11.1 Natural Language Processing: The Bird’s-Eye View
    2. 11.2 Preparing Text Data
      1. Text Standardization
      2. Text Splitting (Tokenization)
      3. Vocabulary Indexing
      4. Using layer_text_vectorization
    3. 11.3 Two Approaches for Representing Groups of Words: Sets and Sequences
      1. Preparing the IMDB Movie Reviews Data
      2. Processing Words as a Set: The Bag-of-Words Approach
      3. Processing Words as a Sequence: The Sequence Model Approach
    4. 11.4 The Transformer Architecture
      1. Understanding Self-Attention
      2. Multi-Head Attention
      3. The Transformer Encoder
      4. When to Use Sequence Models Over Bag-of-Words Models
    5. 11.5 Beyond Text Classification: Sequence-to-Sequence Learning
      1. A Machine Translation Example
      2. Sequence-to-Sequence Learning with RNNs
      3. Sequence-to-Sequence Learning with Transformer
  19. Chapter 12: Generative Deep Learning
    1. 12.1 Text Generation
      1. A Brief History of Generative Deep Learning for Sequence Generation
      2. How Do You Generate Sequence Data?
      3. The Importance of the Sampling Strategy
      4. Implementing Text Generation with Keras
      5. A Text-Generation Callback with Variable-Temperature Sampling
      6. Wrapping Up
    2. 12.2 DeepDream
      1. Implementing DeepDream in Keras
      2. Wrapping Up
    3. 12.3 Neural Style Transfer
      1. The Content Loss
      2. The Style Loss
      3. Neural Style Transfer in Keras
      4. Wrapping Up
    4. 12.4 Generating Images with Variational Autoencoders
      1. Sampling from Latent Spaces of Images
      2. Concept Vectors for Image Editing
      3. Variational Autoencoders
      4. Implementing a VAE with Keras
      5. Wrapping Up
    5. 12.5 Introduction to Generative Adversarial Networks
      1. A Schematic GAN Implementation
      2. A Bag of Tricks
      3. Getting Our Hands on the CelebA Dataset
      4. The Discriminator
      5. The Generator
      6. The Adversarial Network
      7. Wrapping Up
  20. Chapter 13: Best Practices for the Real World
    1. 13.1 Getting the Most Out of Your Models
      1. Hyperparameter Optimization
      2. Model Ensembling
    2. 13.2 Scaling Up Model Training
      1. Speeding Up Training on GPU with Mixed Precision
      2. Multi-GPU Training
      3. TPU Training
  21. Chapter 14: Conclusions
    1. 14.1 Key Concepts in Review
      1. Various Approaches to AI
      2. What Makes Deep Learning Special within the Field of Machine Learning
      3. How to Think About Deep Learning
      4. Key Enabling Technologies
      5. The Universal Machine Learning Workflow
      6. Key Network Architectures
      7. The Space of Possibilities
    2. 14.2 The Limitations of Deep Learning
      1. The Risk of Anthropomorphizing Machine Learning Models
      2. Automatons vs. Intelligent Agents
      3. Local Generalization vs. Extreme Generalization
      4. The Purpose of Intelligence
      5. Climbing the Spectrum of Generalization
    3. 14.3 Setting the Course Toward Greater Generality in AI
      1. On the Importance of Setting the Right Objective: The Shortcut Rule
      2. A New Target
    4. 14.4 Implementing Intelligence: The Missing Ingredients
      1. Intelligence as Sensitivity to Abstract Analogies
      2. The Two Poles of Abstraction
      3. The Missing Half of the Picture
    5. 14.5 The Future of Deep Learning
      1. Models as Programs
      2. Machine Learning vs. Program Synthesis
      3. Blending Together Deep Learning and Program Synthesis
      4. Lifelong Learning and Modular Subroutine Reuse
      5. The Long-Term Vision
    6. 14.6 Staying Up-to-Date in a Fast-Moving Field
      1. Practice on Real-World Problems Using Kaggle
      2. Read About the Latest Developments on arXiv
      3. Explore the Keras Ecosystem
    7. 14.7 Final Words
  22. Appendix: Python Primer for R Users
  23. Index

Product information

  • Title: Deep Learning with R, Second Edition
  • Author(s): Sigrid Keydana, François Chollet, Tomasz Kalinowski, J.J. Allaire
  • Release date: October 2022
  • Publisher(s): Manning Publications
  • ISBN: 9781633439849