Hands-On One-shot Learning with Python

Book description

Get to grips with building powerful deep learning models using PyTorch and scikit-learn

Key Features

  • Learn how you can speed up the deep learning process with one-shot learning
  • Use Python and PyTorch to build state-of-the-art one-shot learning models
  • Explore architectures such as Siamese networks, memory-augmented neural networks, model-agnostic meta-learning, and discriminative k-shot learning

Book Description

One-shot learning has been an active field of research for scientists trying to develop a cognitive machine that mimics human learning. With this book, you'll explore key approaches to one-shot learning, such as metrics-based, model-based, and optimization-based techniques, all with the help of practical examples.

Hands-On One-shot Learning with Python will guide you through the exploration and design of deep learning models that can obtain information about an object from one or just a few training samples. The book begins with an overview of deep learning and one-shot learning and then introduces you to the different methods you can use to achieve it, such as deep learning architectures and probabilistic models. Once you've got to grips with the core principles, you'll explore real-world examples and implementations of one-shot learning using PyTorch 1.x on datasets such as Omniglot and MiniImageNet. Finally, you'll explore generative modeling-based methods and discover the key considerations for building systems that exhibit human-level intelligence.

By the end of this book, you'll be well-versed with the different one- and few-shot learning methods and be able to use them to build your own deep learning models.

What you will learn

  • Get to grips with the fundamental concepts of one- and few-shot learning
  • Work with different deep learning architectures for one-shot learning
  • Understand when to use one-shot learning and when to use transfer learning
  • Study the Bayesian network approach for one-shot learning
  • Implement one-shot learning approaches based on metrics, models, and optimization in PyTorch (see the illustrative sketch after this list)
  • Discover different optimization algorithms that help to improve accuracy even with smaller volumes of data
  • Explore various one-shot learning architectures based on classification and regression
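
To give a flavor of the metrics-based approach covered in the book, here is a minimal, hedged sketch (not code from the book): a small CNN embeds images, and a query is classified by the label of its nearest support embedding. The `EmbeddingNet` class, input shape, and episode setup are assumptions chosen only for illustration.

```python
import torch
import torch.nn as nn


class EmbeddingNet(nn.Module):
    """Tiny CNN mapping a 1x28x28 image to a 64-dimensional embedding."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.fc = nn.Linear(64 * 7 * 7, 64)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))


def one_shot_predict(net, support_images, support_labels, query_images):
    """Assign each query the label of its nearest support embedding (Euclidean distance)."""
    with torch.no_grad():
        support_emb = net(support_images)            # (N_support, 64)
        query_emb = net(query_images)                # (N_query, 64)
        dists = torch.cdist(query_emb, support_emb)  # (N_query, N_support)
        return support_labels[dists.argmin(dim=1)]


if __name__ == "__main__":
    net = EmbeddingNet()
    # A 5-way, 1-shot episode with random tensors standing in for real images.
    support = torch.randn(5, 1, 28, 28)
    labels = torch.arange(5)
    queries = torch.randn(8, 1, 28, 28)
    print(one_shot_predict(net, labels=labels, support_images=support, query_images=queries))
```

In practice, the embedding network would first be trained (for example, with a contrastive or triplet loss, as the book's Siamese network chapters describe) so that same-class images land close together in the embedding space; the nearest-neighbor step above then performs the actual one-shot classification.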

Who this book is for

If you're an AI researcher or a machine learning or deep learning expert looking to explore one-shot learning, this book is for you. It will help you get started with implementing various one-shot techniques to train models faster. Some Python programming experience is necessary to understand the concepts covered in this book.

Table of contents

  1. Title Page
  2. Copyright and Credits
    1. Hands-On One-shot Learning with Python
  3. About Packt
    1. Why subscribe?
  4. Contributors
    1. About the authors
    2. About the reviewer
    3. Packt is searching for authors like you
  5. Preface
    1. Who this book is for
    2. What this book covers
    3. To get the most out of this book
      1. Download the example code files
      2. Download the color images
      3. Conventions used
    4. Get in touch
      1. Reviews
  6. Section 1: One-shot Learning Introduction
  7. Introduction to One-shot Learning
    1. Technical requirements
    2. The human brain – overview
      1. How the human brain learns
      2. Comparing human neurons and artificial neurons
    3. Machine learning – historical overview
      1. Challenges in machine learning and deep learning
    4. One-shot learning – overview
      1. Prerequisites of one-shot learning
      2. Types of one-shot learning
    5. Setting up your environment
    6. Coding exercise
      1. kNN – basic one-shot learning
    7. Summary
    8. Questions
  8. Section 2: Deep Learning Architectures
  9. Metrics-Based Methods
    1. Technical requirements
    2. Parametric methods – an overview
      1. Neural networks – learning procedure
      2. Visualizing parameters
    3. Understanding Siamese networks
      1. Architecture
      2. Preprocessing
      3. Contrastive loss function
      4. Triplet loss function
        1. Applications
    4. Understanding matching networks
      1. Model architecture
        1. Training procedure
        2. Modeling level – the matching networks architecture
    5. Coding exercise
      1. Siamese networks – the MNIST dataset
      2. Matching networks – the Omniglot dataset
    6. Summary
    7. Questions
    8. Further reading
  10. Model-Based Methods
    1. Technical requirements
    2. Understanding Neural Turing Machines
      1. Architecture of an NTM
      2. Modeling
        1. Reading
        2. Writing
        3. Addressing
    3. Memory-augmented neural networks
      1. Reading
      2. Writing
    4. Understanding meta networks
      1. Algorithm of meta networks
        1. Algorithm
    5. Coding exercises
      1. Implementation of NTM
      2. Implementation of MANN
    6. Summary
    7. Questions
    8. Further reading
  11. Optimization-Based Methods
    1. Technical requirements
    2. Overview of gradient descent
    3. Understanding model-agnostic meta-learning
      1. Understanding the logic behind MAML
        1. Algorithm
      2. MAML application – domain-adaptive meta-learning
    4. Understanding LSTM meta-learner
      1. Architecture of the LSTM meta-learner
        1. Data preprocessing
        2. Algorithm – pseudocode implementation
    5. Exercises
      1. A simple implementation of model-agnostic meta-learning
      2. A simple implementation of domain-adaptive meta-learning
    6. Summary
    7. Questions
    8. Further reading
  12. Section 3: Other Methods and Conclusion
  13. Generative Modeling-Based Methods
    1. Technical requirements
    2. Overview of Bayesian learning
    3. Understanding directed graphical models
    4. Overview of probabilistic methods
    5. Bayesian program learning
      1. Model
        1. Type generation
        2. Token generation
        3. Image generation
    6. Discriminative k-shot learning
      1. Representational learning
      2. Probabilistic model of the weights
        1. Choosing a model for the weights
      3. Computation and approximation for each phase
        1. Phase 1 – representational learning
        2. Phase 2 – concept learning
        3. Phase 3 – k-shot learning
        4. Phase 4 – k-shot testing
    7. Summary
    8. Further reading
  14. Conclusions and Other Approaches
    1. Recent advancements
      1. Object detection in few-shot domains
      2. Image segmentation in few-shot domains
    2. Related fields
      1. Semi-supervised learning
      2. Imbalanced learning
      3. Meta-learning
      4. Transfer learning
    3. Applications
    4. Further reading
  15. Other Books You May Enjoy
    1. Leave a review - let other readers know what you think

Product information

  • Title: Hands-On One-shot Learning with Python
  • Author(s): Shruti Jadon, Ankush Garg
  • Release date: April 2020
  • Publisher(s): Packt Publishing
  • ISBN: 9781838825461