
Hands-on NLP with Transformers

Published by Pearson

Content level: Intermediate

Using Transformer-derived architectures to solve modern NLP problems

This training will provide an introduction to the transformer architecture, currently considered the state of the art for modern NLP tasks. We will take a deep dive into what makes the transformer unique in its ability to process natural language, including attention mechanisms and encoder-decoder architectures. We will see several examples of how people and companies are using transformers to solve a wide variety of NLP tasks, including holding conversations, image captioning, reading comprehension, and more.

This training will feature several code-driven examples of transformer-derived architectures, including BERT, GPT, T5, and the Vision Transformer. Each of our case studies will be inspired by real use cases and will lean on transfer learning to expedite our process while using actionable metrics to drive results.

What you’ll learn and how you can apply it

  • What makes the transformer architecture state-of-the-art and unique for NLP tasks
  • How transformers are applied to solve NLP tasks
  • How to use transfer learning to boost transformer performance

This live event is for you because...

  • You’re an advanced machine learning engineer with experience in ML, neural networks, and NLP
  • You’re interested in state-of-the-art NLP Architecture
  • You are comfortable using libraries like TensorFlow or PyTorch

Prerequisites

  • Proficiency in Python 3, with some familiarity working in interactive Python environments such as notebooks (Jupyter / Google Colab / Kaggle Kernels)
  • Comfort using libraries like TensorFlow or PyTorch

Course Set-up

  • A GitHub repository with the slides, code, and links will be provided upon completion
  • Attendees will need access to the notebooks in the GitHub repository

Schedule

The time frames are only estimates and may vary according to how the class is progressing.

Segment 1: History of NLP and Introduction to Transformers (30 min)

  • History of using AI to process text
  • Introduction to attention and self-attention
  • How transformers use attention to process text (see the sketch below)
  • Introduction to transfer learning
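
To make the attention bullet concrete, here is a minimal sketch of scaled dot-product self-attention in plain PyTorch; this is an illustration under simplified assumptions (no projections, masking, or multiple heads), not the course's notebook code:

    import torch
    import torch.nn.functional as F

    def scaled_dot_product_attention(q, k, v):
        # q, k, v: (batch, seq_len, d_model); in self-attention all three
        # come from the same sequence of token embeddings
        d_k = q.size(-1)
        scores = q @ k.transpose(-2, -1) / d_k ** 0.5  # token-to-token similarity
        weights = F.softmax(scores, dim=-1)            # attention distribution per token
        return weights @ v                             # weighted mix of value vectors

    x = torch.randn(1, 5, 64)                          # 5 tokens, 64-dim embeddings
    print(scaled_dot_product_attention(x, x, x).shape) # torch.Size([1, 5, 64])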

Segment 2: Use-case 1: Sequence Classification with BERT and XLNet (40 min)

  • Introduction to BERT & XLNet
  • Fine-tuning BERT & XLNet for multi-label classification (see the sketch below)
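
For a flavor of this use case, here is a hedged sketch of a multi-label fine-tuning setup with the Hugging Face transformers library; the checkpoint name and label count are placeholder assumptions, not the course's exact configuration:

    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    # "bert-base-uncased" and num_labels=4 are assumed examples
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased",
        num_labels=4,
        problem_type="multi_label_classification",  # sigmoid + BCE loss, one per label
    )

    inputs = tokenizer("transformers are eating NLP", return_tensors="pt")
    labels = torch.tensor([[1.0, 0.0, 1.0, 0.0]])    # multi-hot target vector
    loss = model(**inputs, labels=labels).loss
    loss.backward()                                   # gradients for one fine-tuning step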

Break / Q&A (15 min)

Segment 3: Use-case 2: Generating LaTeX with GPT-2 (40 min)

  • Introduction to the GPT family of architectures
  • Fine-tuning GPT-2 to convert text to equations (see the sketch below)
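
A minimal sketch of what a causal-LM fine-tuning step and a generation call for GPT-2 can look like; the "text: ... | latex: ..." prompt format is a hypothetical illustration, not the course's exact recipe:

    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    # Hypothetical training example pairing text with its LaTeX equation
    example = "text: the area of a circle | latex: A = \\pi r^2"
    inputs = tokenizer(example, return_tensors="pt")

    # For causal-LM fine-tuning, the labels are the input ids themselves
    loss = model(**inputs, labels=inputs["input_ids"]).loss
    loss.backward()

    # After fine-tuning, generation follows the same prompt format
    prompt = tokenizer("text: the quadratic formula | latex:", return_tensors="pt")
    out = model.generate(**prompt, max_new_tokens=30,
                         pad_token_id=tokenizer.eos_token_id)
    print(tokenizer.decode(out[0]))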

Segment 4: Use-case 3: Abstractive Text Summarization with T5 (30 min)

  • Introduction to T5
  • Using T5 to generate meaningful abstractive summaries (see the sketch below)
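
A brief sketch of abstractive summarization with T5, which is a text-to-text model that selects its task via a plain-text prefix; the checkpoint and generation settings here are illustrative assumptions:

    from transformers import T5ForConditionalGeneration, T5Tokenizer

    # "t5-small" is an assumed checkpoint; larger T5 variants work the same way
    tokenizer = T5Tokenizer.from_pretrained("t5-small")
    model = T5ForConditionalGeneration.from_pretrained("t5-small")

    article = ("The transformer architecture replaced recurrence with attention "
               "and now powers most modern NLP systems.")
    # The "summarize:" prefix tells T5 which task to perform
    inputs = tokenizer("summarize: " + article, return_tensors="pt", truncation=True)
    ids = model.generate(**inputs, max_new_tokens=60, num_beams=4)
    print(tokenizer.decode(ids[0], skip_special_tokens=True))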

Break / Q&A (15 min)

Segment 5: Use-case 4: Image Captioning (40 min)

  • Introduction to the Vision Transformer (ViT)
  • Fine-tuning an image captioning architecture using the Vision Transformer (see the sketch below)
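
A hedged sketch of caption generation with a ViT encoder paired with a GPT-2 decoder via the transformers library's VisionEncoderDecoderModel; the community checkpoint named below is an assumed example, not necessarily the one used in class:

    from PIL import Image
    from transformers import (VisionEncoderDecoderModel, ViTImageProcessor,
                              AutoTokenizer)

    # An assumed community checkpoint: ViT encoder + GPT-2 decoder
    name = "nlpconnect/vit-gpt2-image-captioning"
    model = VisionEncoderDecoderModel.from_pretrained(name)
    processor = ViTImageProcessor.from_pretrained(name)
    tokenizer = AutoTokenizer.from_pretrained(name)

    image = Image.open("photo.jpg").convert("RGB")   # any local image file
    pixels = processor(images=image, return_tensors="pt").pixel_values
    ids = model.generate(pixels, max_new_tokens=20)
    print(tokenizer.decode(ids[0], skip_special_tokens=True))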

Segment 6: Course wrap-up and next steps (15 min)

  • Other Transformer use-cases
  • Pulling from Hugging Face’s library of fine-tuned transformers (see the sketch below)
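
For instance, a single pipeline call can pull a ready-made fine-tuned model from the Hugging Face Hub (a minimal sketch; the task is an example, and the default checkpoint is chosen by the library):

    from transformers import pipeline

    # One line downloads and wires up a fine-tuned model for the task
    classifier = pipeline("sentiment-analysis")
    # Returns a list of {'label', 'score'} dicts
    print(classifier("Transformers make NLP fun again!"))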

Q&A (15 min)

Your Instructor

  • Sinan Ozdemir

    Sinan Ozdemir is the founder and CTO of LoopGenius, where he uses state-of-the-art AI to help people create and run their businesses. He has lectured in data science at Johns Hopkins University and authored multiple books, videos, and numerous online courses on data science, machine learning, and generative AI. He also founded the recently acquired Kylie.ai, an enterprise-grade conversational AI platform with RPA capabilities. Sinan most recently published Quick Guide to Large Language Models and launched a podcast audio series, AI Unveiled. Ozdemir holds a master’s degree in pure mathematics from Johns Hopkins University.
