Inside Deep Learning, Video Edition

Video description

In Video Editions, the narrator reads the book while the content, figures, code listings, diagrams, and text appear on the screen. It's like an audiobook that you can also watch as a video.

Journey through the theory and practice of modern deep learning, and apply innovative techniques to solve everyday data problems.

In Inside Deep Learning, you will learn how to:

  • Implement deep learning with PyTorch
  • Select the right deep learning components
  • Train and evaluate a deep learning model
  • Fine-tune deep learning models to maximize performance
  • Understand deep learning terminology
  • Adapt existing PyTorch code to solve new problems
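To give a flavor of the style of PyTorch code covered in the book, here is a minimal, illustrative sketch of a single training step: define a model, compute a loss, and update the parameters via automatic differentiation. The layer sizes and random data below are made up purely for demonstration and are not taken from the book.

```python
import torch
from torch import nn

torch.manual_seed(0)

# A tiny made-up model: 4 input features -> 1 regression output.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

X = torch.randn(16, 4)  # 16 fake samples with 4 features each
y = torch.randn(16, 1)  # fake regression targets

loss_before = loss_fn(model(X), y).item()

optimizer.zero_grad()             # clear any old gradients
loss = loss_fn(model(X), y)       # forward pass + loss
loss.backward()                   # automatic differentiation
optimizer.step()                  # one gradient-descent update

loss_after = loss_fn(model(X), y).item()
print(loss_before, loss_after)
```

This zero-grad / forward / backward / step cycle is the basic pattern that the book's early chapters build on and later chapters adapt to new problems.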

Inside Deep Learning is an accessible guide to implementing deep learning with the PyTorch framework. It demystifies complex deep learning concepts and teaches you to understand the vocabulary of deep learning so you can keep pace in a rapidly evolving field. No detail is skipped—you’ll dive into math, theory, and practical applications. Everything is clearly explained in plain English.

About the Technology
Deep learning doesn’t have to be a black box! Knowing how your models and algorithms actually work gives you greater control over your results. And you don’t have to be a mathematics expert or a senior data scientist to grasp what’s going on inside a deep learning system. This book gives you the practical insight you need to understand and explain your work with confidence.

About the Book
Inside Deep Learning illuminates the inner workings of deep learning algorithms in a way that even machine learning novices can understand. You’ll explore deep learning concepts and tools through plain language explanations, annotated code, and dozens of instantly useful PyTorch examples. Each type of neural network is clearly presented without complex math, and every solution in this book can run using readily available GPU hardware!
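The claim that every solution can run on readily available hardware reflects a common PyTorch idiom: target a GPU when one is present and fall back to the CPU otherwise. A minimal sketch (the tensor here is a throwaway example, not from the book):

```python
import torch

# Use a CUDA GPU if available, otherwise run the same code on the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Any tensor or model can be placed on the chosen device.
x = torch.ones(3, 3, device=device)
print(device, x.sum().item())
```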

What's Inside
  • Select the right deep learning components
  • Train and evaluate a deep learning model
  • Fine-tune deep learning models to maximize performance
  • Understand deep learning terminology


About the Reader
For Python programmers with basic machine learning skills.

About the Author
Edward Raff is a Chief Scientist at Booz Allen Hamilton, and the author of the JSAT machine learning library.

Quotes
Pick up this book, and you won’t be able to put it down. A rich, engaging knowledge base of deep learning math, algorithms, and models—just like the title says!
- From the Foreword by Kirk Borne Ph.D., Chief Science Officer, DataPrime.ai

The clearest and easiest book for learning deep learning principles and techniques I have ever read. The graphical representations for the algorithms are an eye-opening revelation.
- Richard Vaughan, Purple Monkey Collective

A great read for anyone interested in understanding the details of deep learning.
- Vishwesh Ravi Shrimali, MBRDI

Table of contents

  1. Part 1. Foundational methods
  2. Chapter 1. The mechanics of learning
  3. Chapter 1. The world as tensors
  4. Chapter 1. Automatic differentiation
  5. Chapter 1. Optimizing parameters
  6. Chapter 1. Loading dataset objects
  7. Chapter 1. Summary
  8. Chapter 2. Fully connected networks
  9. Chapter 2. Building our first neural network
  10. Chapter 2. Classification problems
  11. Chapter 2. Better training code
  12. Chapter 2. Training in batches
  13. Chapter 2. Summary
  14. Chapter 3. Convolutional neural networks
  15. Chapter 3. What are convolutions?
  16. Chapter 3. How convolutions benefit image processing
  17. Chapter 3. Putting it into practice: Our first CNN
  18. Chapter 3. Adding pooling to mitigate object movement
  19. Chapter 3. Data augmentation
  20. Chapter 3. Summary
  21. Chapter 4. Recurrent neural networks
  22. Chapter 4. RNNs in PyTorch
  23. Chapter 4. Improving training time with packing
  24. Chapter 4. More complex RNNs
  25. Chapter 4. Summary
  26. Chapter 5. Modern training techniques
  27. Chapter 5. Learning rate schedules
  28. Chapter 5. Making better use of gradients
  29. Chapter 5. Hyperparameter optimization with Optuna
  30. Chapter 5. Summary
  31. Chapter 6. Common design building blocks
  32. Chapter 6. Normalization layers: Magically better convergence
  33. Chapter 6. Skip connections: A network design pattern
  34. Chapter 6. 1 × 1 Convolutions: Sharing and reshaping information in channels
  35. Chapter 6. Residual connections
  36. Chapter 6. Long short-term memory RNNs
  37. Chapter 6. Summary
  38. Part 2. Building advanced networks
  39. Chapter 7. Autoencoding and self-supervision
  40. Chapter 7. Designing autoencoding neural networks
  41. Chapter 7. Bigger autoencoders
  42. Chapter 7. Denoising autoencoders
  43. Chapter 7. Autoregressive models for time series and sequences
  44. Chapter 7. Summary
  45. Chapter 8. Object detection
  46. Chapter 8. Transposed convolutions for expanding image size
  47. Chapter 8. U-Net: Looking at fine and coarse details
  48. Chapter 8. Object detection with bounding boxes
  49. Chapter 8. Using the pretrained Faster R-CNN
  50. Chapter 8. Summary
  51. Chapter 9. Generative adversarial networks
  52. Chapter 9. Mode collapse
  53. Chapter 9. Wasserstein GAN: Mitigating mode collapse
  54. Chapter 9. Convolutional GAN
  55. Chapter 9. Conditional GAN
  56. Chapter 9. Walking the latent space of GANs
  57. Chapter 9. Ethics in deep learning
  58. Chapter 9. Summary
  59. Chapter 10. Attention mechanisms
  60. Chapter 10. Adding some context
  61. Chapter 10. Putting it all together: A complete attention mechanism with context
  62. Chapter 10. Summary
  63. Chapter 11. Sequence-to-sequence
  64. Chapter 11. Machine translation and the data loader
  65. Chapter 11. Inputs to Seq2Seq
  66. Chapter 11. Seq2Seq with attention
  67. Chapter 11. Summary
  68. Chapter 12. Network design alternatives to RNNs
  69. Chapter 12. Averaging embeddings over time
  70. Chapter 12. Pooling over time and 1D CNNs
  71. Chapter 12. Positional embeddings add sequence information to any model
  72. Chapter 12. Transformers: Big models for big data
  73. Chapter 12. Summary
  74. Chapter 13. Transfer learning
  75. Chapter 13. Transfer learning and training with CNNs
  76. Chapter 13. Learning with fewer labels
  77. Chapter 13. Pretraining with text
  78. Chapter 13. Summary
  79. Chapter 14. Advanced building blocks
  80. Chapter 14. Improved residual blocks
  81. Chapter 14. MixUp training reduces overfitting
  82. Chapter 14. Summary
  83. Appendix. Setting up Colab

Product information

  • Title: Inside Deep Learning, Video Edition
  • Author(s): Edward Raff
  • Release date: June 2022
  • Publisher(s): Manning Publications