Video description
27+ Hours of Video Instruction

An outstanding data scientist or machine learning engineer must master more than the basics of using ML algorithms with the most popular libraries, such as scikit-learn and Keras. To train innovative models or deploy them to run performantly in production, an in-depth appreciation of machine learning theory is essential, and that in turn requires a working understanding of the foundational subjects: linear algebra, calculus, probability, statistics, data structures, and algorithms.
When the foundations of machine learning are firm, it becomes easier to make the jump from general ML principles to specialized ML domains, such as deep learning, natural language processing, machine vision, and reinforcement learning. The more specialized the application, the more likely it is that its implementation details are available only in academic papers or graduate-level textbooks, both of which assume an understanding of the foundational subjects.
This master class includes the following courses:
- Linear Algebra for Machine Learning
- Calculus for Machine Learning LiveLessons
- Probability and Statistics for Machine Learning
- Data Structures, Algorithms, and Machine Learning Optimization
Calculus for Machine Learning LiveLessons introduces the mathematical field of calculus—the study of rates of change—from the ground up. It is essential because computing derivatives via differentiation is the basis of optimizing most machine learning algorithms, including those used in deep learning, such as backpropagation and stochastic gradient descent.
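As a quick illustration of that idea (not taken from the course materials), the short sketch below uses PyTorch's automatic differentiation to compute a derivative, the same operation that backpropagation and stochastic gradient descent build on; the function and the evaluation point are hypothetical examples.

```python
# Minimal illustration: automatic differentiation with PyTorch autograd.
import torch

x = torch.tensor(2.0, requires_grad=True)  # point at which to differentiate
y = x ** 2 + 3 * x                         # simple function y = x^2 + 3x

y.backward()       # computes dy/dx automatically

print(x.grad)      # tensor(7.), since dy/dx = 2x + 3 = 7 at x = 2
```

In a training loop, the same mechanism supplies the gradients of a model's cost function, which an optimizer such as stochastic gradient descent then follows to update the model's parameters.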
Probability and Statistics for Machine Learning (Machine Learning Foundations) LiveLessons provides you with a functional, hands-on understanding of probability theory and statistical modeling, with a focus on machine learning applications.
Data Structures, Algorithms, and Machine Learning Optimization LiveLessons provides you with a functional, hands-on understanding of the essential computer science for machine learning applications.
About the Instructor

Jon Krohn is Chief Data Scientist at the machine learning company Nebula. He authored the book Deep Learning Illustrated, an instant #1 bestseller that was translated into seven languages. He is also the host of SuperDataScience, the industry’s most listened-to podcast. Jon is renowned for his compelling lectures, which he offers at Columbia University, New York University, leading industry conferences, via O'Reilly, and via his award-winning YouTube channel. He holds a PhD from Oxford and has been publishing on machine learning in prominent academic journals since 2010; his papers have been cited more than a thousand times.
Course Requirements
- Mathematics: Familiarity with secondary school-level mathematics will make the course easier to follow. If you are comfortable dealing with quantitative information—such as understanding charts and rearranging simple equations—then you should be well-prepared to follow along with all of the mathematics.
- Programming: All code demos are in Python, so experience with it or another object-oriented programming language would be helpful for following along with the hands-on examples.
Pearson publishes expert-led video tutorials covering a wide selection of technology topics designed to teach you the skills you need to succeed. These professional and personal technology videos feature world-leading author instructors published by your trusted technology brands: Addison-Wesley, Cisco Press, Pearson IT Certification, Sams, and Que. Topics include: IT Certification, Network Security, Cisco Technology, Programming, Web Development, Mobile Development, and more. Learn more about Pearson Video training at http://www.informit.com/video.
Table of contents
- Linear Algebra for Machine Learning (Machine Learning Foundations): Introduction
- Lesson 1: Orientation to Linear Algebra
- Lesson 2: Data Structures for Algebra
- Lesson 3: Common Tensor Operations
- Lesson 4: Solving Linear Systems
- Lesson 5: Matrix Multiplication
- Lesson 6: Special Matrices and Matrix Operations
- Lesson 7: Eigenvectors and Eigenvalues
- Lesson 8: Matrix Determinants and Decomposition
- Lesson 9: Machine Learning with Linear Algebra
- Summary
- Calculus for Machine Learning: Introduction
- Lesson 1: Orientation to Calculus
- Lesson 2: Limits
- Lesson 3: Differentiation
- Lesson 4: Advanced Differentiation Rules
- Lesson 5: Automatic Differentiation
- Lesson 6: Partial Derivatives
- Lesson 7: Gradients
- Lesson 8: Integrals
- Summary
- Probability and Statistics for Machine Learning: Introduction
- Lesson 1: Introduction to Probability
- Topics
- 1.1 Orientation to the Machine Learning Foundations Series
- 1.2 What Probability Theory Is
- 1.3 Events and Sample Spaces
- 1.4 Multiple Observations
- 1.5 Factorials and Combinatorics
- 1.6 Exercises
- 1.7 The Law of Large Numbers and the Gambler's Fallacy
- 1.8 Probability Distributions in Statistics
- 1.9 Bayesian versus Frequentist Statistics
- 1.10 Applications of Probability to Machine Learning
- Lesson 2: Random Variables
- Lesson 3: Describing Distributions
- Lesson 4: Relationships Between Probabilities
- Lesson 5: Distributions in Machine Learning
- Lesson 6: Information Theory
- Lesson 7: Introduction to Statistics
- Lesson 8: Comparing Means
- Lesson 9: Correlation
- Lesson 10: Regression
- Topics
- 10.1 Independent versus Dependent Variables
- 10.2 Linear Regression to Predict Continuous Values
- 10.3 Fitting a Line to Points on a Cartesian Plane
- 10.4 Linear Least Squares Exercise
- 10.5 Ordinary Least Squares
- 10.6 Categorical "Dummy" Features
- 10.7 Logistic Regression to Predict Categories
- 10.8 Open-Ended Exercises
- Lesson 11: Bayesian Statistics
- Summary
- Data Structures, Algorithms, and Machine Learning Optimization: Introduction
- Lesson 1: Orientation to Data Structures and Algorithms
- Lesson 2: "Big O" Notation
- Lesson 3: List-Based Data Structures
- Lesson 4: Searching and Sorting
- Lesson 5: Sets and Hashing
- Lesson 6: Trees
- Lesson 7: Graphs
- Lesson 8: Machine Learning Optimization
- Topics
- 8.1 Statistics versus Machine Learning
- 8.2 Objective Functions
- 8.3 Mean Absolute Error
- 8.4 Mean Squared Error
- 8.5 Minimizing Cost with Gradient Descent
- 8.6 Gradient Descent from Scratch with PyTorch
- 8.7 Critical Points
- 8.8 Stochastic Gradient Descent
- 8.9 Learning Rate Scheduling
- 8.10 Maximizing Reward with Gradient Ascent
- Lesson 9: Fancy Deep Learning Optimizers
- Summary
Product information
- Title: The Essential Machine Learning Foundations: Math, Probability, Statistics, and Computer Science (Video Collection)
- Author(s): Jon Krohn
- Release date: March 2022
- Publisher(s): Pearson
- ISBN: 0137903243