Machine Learning with TensorFlow, Second Edition

Book description

Updated with new code, new projects, and new chapters, Machine Learning with TensorFlow, Second Edition gives readers a solid foundation in machine-learning concepts and the TensorFlow library. Written by NASA JPL Deputy CTO and Principal Data Scientist Chris Mattmann, the book pairs every example with a downloadable Jupyter Notebook for hands-on experience coding TensorFlow with Python. New and revised content expands coverage of core machine-learning algorithms as well as advances in neural networks, such as VGG-Face facial-identification classifiers and deep-speech classifiers.

About the Technology
Supercharge your data analysis with machine learning! ML algorithms automatically improve as they process data, so results get better over time. You don’t have to be a mathematician to use ML: Tools like Google’s TensorFlow library help with complex calculations so you can focus on getting the answers you need.
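
For a taste of what that looks like in practice, here is a minimal sketch (not from the book, assuming the TensorFlow 1.x-style API the examples use) in which TensorFlow does the matrix math for you:

  import tensorflow as tf

  # Build a small computation graph; TensorFlow handles the math.
  a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
  b = tf.constant([[1.0], [0.5]])
  product = tf.matmul(a, b)

  # In TensorFlow 1.x, graphs are executed inside a session.
  with tf.Session() as sess:
      print(sess.run(product))  # [[2.], [5.]]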

About the Book
Machine Learning with TensorFlow, Second Edition is a fully revised guide to building machine learning models using Python and TensorFlow. You’ll apply core ML concepts to real-world challenges, such as sentiment analysis, text classification, and image recognition. Hands-on examples illustrate neural network techniques for deep speech processing, facial identification, and autoencoding with CIFAR-10.

What's Inside
  • Machine Learning with TensorFlow
  • Choosing the best ML approaches
  • Visualizing algorithms with TensorBoard
  • Sharing results with collaborators
  • Running models in Docker

About the Reader
Requires intermediate Python skills and knowledge of general algebraic concepts like vectors and matrices. Examples use the super-stable 1.15.x branch of TensorFlow and TensorFlow 2.x.
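
To confirm which branch you are running before following along, a quick version check (a minimal sketch, not taken from the book) does the job:

  import tensorflow as tf

  # Report the installed TensorFlow version; most chapters target
  # the 1.15.x branch, and the appendix covers TensorFlow 2.x.
  print(tf.__version__)  # e.g. '1.15.5'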

About the Author
Chris Mattmann is the Division Manager of the Artificial Intelligence, Analytics, and Innovation Organization at the NASA Jet Propulsion Laboratory. The first edition of this book was written by Nishant Shukla with Kenneth Fricklas.

Quotes
A practical, no-nonsense, original approach to machine learning.
- Alain Couniot, Sopra Steria Benelux

An excellent book for readers who want to learn TensorFlow and machine learning.
- Bhagvan Kommadi, ValueMomentum

A great way to learn the ins and outs of TensorFlow, from the fundamentals to autoencoders, CNNs, and sequence-to-sequence models.
- Ariel Gamiño, GLG

Full of practical examples illustrating the concepts in a clear, progressive approach. This book is worth your while!
- Alain Lompo, ISO-GRUPPE

Table of contents

  1. Machine Learning with TensorFlow, 2e
  2. Copyright
  3. dedication
  4. Praise for the First Edition
  5. front matter
    1. foreword
    2. preface
    3. acknowledgments
    4. about this book
    5. How this book is organized: A roadmap
    6. About the code
    7. liveBook discussion forum
    8. about the author
    9. about the cover illustration
  6. contents
  7. Part 1 Your machine-learning rig
  8. 1 A machine-learning odyssey
    1. 1.1 Machine-learning fundamentals
      1. 1.1.1 Parameters
      2. 1.1.2 Learning and inference
    2. 1.2 Data representation and features
    3. 1.3 Distance metrics
    4. 1.4 Types of learning
      1. 1.4.1 Supervised learning
      2. 1.4.2 Unsupervised learning
      3. 1.4.3 Reinforcement learning
      4. 1.4.4 Meta-learning
    5. 1.5 TensorFlow
    6. 1.6 Overview of future chapters
    7. Summary
  9. 2 TensorFlow essentials
    1. 2.1 Ensuring that TensorFlow works
    2. 2.2 Representing tensors
    3. 2.3 Creating operators
    4. 2.4 Executing operators within sessions
    5. 2.5 Understanding code as a graph
      1. 2.5.1 Setting session configurations
    6. 2.6 Writing code in Jupyter
    7. 2.7 Using variables
    8. 2.8 Saving and loading variables
    9. 2.9 Visualizing data using TensorBoard
      1. 2.9.1 Implementing a moving average
      2. 2.9.2 Visualizing the moving average
    10. 2.10 Putting it all together: The TensorFlow system architecture and API
    11. Summary
  10. Part 2 Core learning algorithms
  11. 3 Linear regression and beyond
    1. 3.1 Formal notation
      1. 3.1.1 How do you know the regression algorithm is working?
    2. 3.2 Linear regression
    3. 3.3 Polynomial model
    4. 3.4 Regularization
    5. 3.5 Application of linear regression
    6. Summary
  12. 4 Using regression for call-center volume prediction
    1. 4.1 What is 311?
    2. 4.2 Cleaning the data for regression
    3. 4.3 What’s in a bell curve? Predicting Gaussian distributions
    4. 4.4 Training your call prediction regressor
    5. 4.5 Visualizing the results and plotting the error
    6. 4.6 Regularization and training test splits
    7. Summary
  13. 5 A gentle introduction to classification
    1. 5.1 Formal notation
    2. 5.2 Measuring performance
      1. 5.2.1 Accuracy
      2. 5.2.2 Precision and recall
      3. 5.2.3 Receiver operating characteristic curve
    3. 5.3 Using linear regression for classification
    4. 5.4 Using logistic regression
      1. 5.4.1 Solving 1D logistic regression
      2. 5.4.2 Solving 2D regression
    5. 5.5 Multiclass classifier
      1. 5.5.1 One-versus-all
      2. 5.5.2 One-versus-one
      3. 5.5.3 Softmax regression
    6. 5.6 Application of classification
    7. Summary
  14. 6 Sentiment classification: Large movie-review dataset
    1. 6.1 Using the Bag of Words model
      1. 6.1.1 Applying the Bag of Words model to movie reviews
      2. 6.1.2 Cleaning all the movie reviews
      3. 6.1.3 Exploratory data analysis on your Bag of Words
    2. 6.2 Building a sentiment classifier using logistic regression
      1. 6.2.1 Setting up the training for your model
      2. 6.2.2 Performing the training for your model
    3. 6.3 Making predictions using your sentiment classifier
    4. 6.4 Measuring the effectiveness of your classifier
    5. 6.5 Creating the softmax-regression sentiment classifier
    6. 6.6 Submitting your results to Kaggle
    7. Summary
  15. 7 Automatically clustering data
    1. 7.1 Traversing files in TensorFlow
    2. 7.2 Extracting features from audio
    3. 7.3 Using k-means clustering
    4. 7.4 Segmenting audio
    5. 7.5 Clustering with a self-organizing map
    6. 7.6 Applying clustering
    7. Summary
  16. 8 Inferring user activity from Android accelerometer data
    1. 8.1 The User Activity from Walking dataset
      1. 8.1.1 Creating the dataset
      2. 8.1.2 Computing jerk and extracting the feature vector
    2. 8.2 Clustering similar participants based on jerk magnitudes
    3. 8.3 Different classes of user activity for a single participant
    4. Summary
  17. 9 Hidden Markov models
    1. 9.1 Example of a not-so-interpretable model
    2. 9.2 Markov model
    3. 9.3 Hidden Markov model
    4. 9.4 Forward algorithm
    5. 9.5 Viterbi decoding
    6. 9.6 Uses of HMMs
      1. 9.6.1 Modeling a video
      2. 9.6.2 Modeling DNA
      3. 9.6.3 Modeling an image
    7. 9.7 Application of HMMs
    8. Summary
  18. 10 Part-of-speech tagging and word-sense disambiguation
    1. 10.1 Review of HMM example: Rainy or Sunny
    2. 10.2 PoS tagging
      1. 10.2.1 The big picture: Training and predicting PoS with HMMs
      2. 10.2.2 Generating the ambiguity PoS tagged dataset
    3. 10.3 Algorithms for building the HMM for PoS disambiguation
      1. 10.3.1 Generating the emission probabilities
    4. 10.4 Running the HMM and evaluating its output
    5. 10.5 Getting more training data from the Brown Corpus
    6. 10.6 Defining error bars and metrics for PoS tagging
    7. Summary
  19. Part 3 The neural network paradigm
  20. 11 A peek into autoencoders
    1. 11.1 Neural networks
    2. 11.2 Autoencoders
    3. 11.3 Batch training
    4. 11.4 Working with images
    5. 11.5 Application of autoencoders
    6. Summary
  21. 12 Applying autoencoders: The CIFAR-10 image dataset
    1. 12.1 What is CIFAR-10?
      1. 12.1.1 Evaluating your CIFAR-10 autoencoder
    2. 12.2 Autoencoders as classifiers
      1. 12.2.1 Using the autoencoder as a classifier via loss
    3. 12.3 Denoising autoencoders
    4. 12.4 Stacked deep autoencoders
    5. Summary
  22. 13 Reinforcement learning
    1. 13.1 Formal notions
      1. 13.1.1 Policy
      2. 13.1.2 Utility
    2. 13.2 Applying reinforcement learning
    3. 13.3 Implementing reinforcement learning
    4. 13.4 Exploring other applications of reinforcement learning
    5. Summary
  23. 14 Convolutional neural networks
    1. 14.1 Drawback of neural networks
    2. 14.2 Convolutional neural networks
    3. 14.3 Preparing the image
      1. 14.3.1 Generating filters
      2. 14.3.2 Convolving using filters
      3. 14.3.3 Max pooling
    4. 14.4 Implementing a CNN in TensorFlow
      1. 14.4.1 Measuring performance
      2. 14.4.2 Training the classifier
    5. 14.5 Tips and tricks to improve performance
    6. 14.6 Application of CNNs
    7. Summary
  24. 15 Building a real-world CNN: VGG-Face and VGG-Face Lite
    1. 15.1 Making a real-world CNN architecture for CIFAR-10
      1. 15.1.1 Loading and preparing the CIFAR-10 image data
      2. 15.1.2 Performing data augmentation
    2. 15.2 Building a deeper CNN architecture for CIFAR-10
      1. 15.2.1 CNN optimizations for increasing learned parameter resilience
    3. 15.3 Training and applying a better CIFAR-10 CNN
    4. 15.4 Testing and evaluating your CNN for CIFAR-10
      1. 15.4.1 CIFAR-10 accuracy results and ROC curves
      2. 15.4.2 Evaluating the softmax predictions per class
    5. 15.5 Building VGG-Face for facial recognition
      1. 15.5.1 Picking a subset of VGG-Face for training VGG-Face Lite
      2. 15.5.2 TensorFlow’s Dataset API and data augmentation
      3. 15.5.3 Creating a TensorFlow dataset
      4. 15.5.4 Training using TensorFlow datasets
      5. 15.5.5 VGG-Face Lite model and training
      6. 15.5.6 Training and evaluating VGG-Face Lite
      7. 15.5.7 Evaluating and predicting with VGG-Face Lite
    6. Summary
  25. 16 Recurrent neural networks
    1. 16.1 Introduction to RNNs
    2. 16.2 Implementing a recurrent neural network
    3. 16.3 Using a predictive model for time-series data
    4. 16.4 Applying RNNs
    5. Summary
  26. 17 LSTMs and automatic speech recognition
    1. 17.1 Preparing the LibriSpeech corpus
      1. 17.1.1 Downloading, cleaning, and preparing LibriSpeech OpenSLR data
      2. 17.1.2 Converting the audio
      3. 17.1.3 Generating per-audio transcripts
      4. 17.1.4 Aggregating audio and transcripts
    2. 17.2 Using the deep-speech model
      1. 17.2.1 Preparing the input audio data for deep speech
      2. 17.2.2 Preparing the text transcripts as character-level numerical data
      3. 17.2.3 The deep-speech model in TensorFlow
      4. 17.2.4 Connectionist temporal classification in TensorFlow
    3. 17.3 Training and evaluating deep speech
    4. Summary
  27. 18 Sequence-to-sequence models for chatbots
    1. 18.1 Building on classification and RNNs
    2. 18.2 Understanding seq2seq architecture
    3. 18.3 Vector representation of symbols
    4. 18.4 Putting it all together
    5. 18.5 Gathering dialogue data
    6. Summary
  28. 19 Utility landscape
    1. 19.1 Preference model
    2. 19.2 Image embedding
    3. 19.3 Ranking images
    4. Summary
    5. What’s next
  29. appendix Installation instructions
    1. A.1 Installing the book’s code with Docker
      1. A.1.1 Installing Docker in Windows
      2. A.1.2 Installing Docker in Linux
      3. A.1.3 Installing Docker in macOS
      4. A.1.4 Using Docker
    2. A.2 Getting the data and storing models
    3. A.3 Necessary libraries
    4. A.4 Converting the call-center example to TensorFlow 2
      1. A.4.1 The call-center example with TF2
  30. index

Product information

  • Title: Machine Learning with TensorFlow, Second Edition
  • Author(s): Chris Mattmann
  • Release date: January 2021
  • Publisher(s): Manning Publications
  • ISBN: 9781617297717