The Essential Machine Learning Foundations: Math, Probability, Statistics, and Computer Science (Video Collection)

Video description

27+ Hours of Video Instruction

An outstanding data scientist or machine learning engineer must master more than the basics of using ML algorithms with the most popular libraries, such as scikit-learn and Keras. To train innovative models, or to deploy them to run performantly in production, an in-depth appreciation of machine learning theory is essential. That appreciation rests on a working understanding of the foundational subjects: linear algebra, calculus, probability, statistics, data structures, and algorithms.

When the foundations of machine learning are firm, it becomes easier to make the jump from general ML principles to specialized ML domains, such as deep learning, natural language processing, machine vision, and reinforcement learning. The more specialized the application, the more likely it is that its implementation details are available only in academic papers or graduate-level textbooks, both of which assume an understanding of the foundational subjects.

This master class includes the following courses:
  • Linear Algebra for Machine Learning
  • Calculus for Machine Learning LiveLessons
  • Probability and Statistics for Machine Learning
  • Data Structures, Algorithms, and Machine Learning Optimization

Linear Algebra for Machine Learning LiveLessons provides you with an understanding of the theory and practice of linear algebra, with a focus on machine learning applications.
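To give a flavor of the material, here is a minimal pure-Python sketch (not taken from the course) of two linear algebra operations that recur throughout machine learning, the dot product and matrix-by-vector multiplication:

```python
def dot(u, v):
    """Dot product of two equal-length vectors."""
    return sum(ui * vi for ui, vi in zip(u, v))

def matvec(A, x):
    """Multiply matrix A (a list of rows) by vector x."""
    return [dot(row, x) for row in A]

A = [[1, 2],
     [3, 4]]
x = [5, 6]

print(dot(x, x))     # 5*5 + 6*6 = 61
print(matvec(A, x))  # [1*5 + 2*6, 3*5 + 4*6] = [17, 39]
```

In practice these operations are handled by a library such as NumPy, but writing them out once makes the later tensor notation concrete.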

Calculus for Machine Learning LiveLessons introduces the mathematical field of calculus—the study of rates of change—from the ground up. It is essential because computing derivatives via differentiation is the basis of optimizing most machine learning algorithms, including those used in deep learning, such as backpropagation and stochastic gradient descent.
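The link between differentiation and optimization can be sketched in a few lines of pure Python (an illustration of the idea, not course code): estimate a derivative from the limit definition (the "delta method" covered in Lesson 3), then repeatedly step downhill along it, which is gradient descent in one dimension.

```python
def derivative(f, x, delta=1e-6):
    """Approximate f'(x) via the limit definition (the delta method)."""
    return (f(x + delta) - f(x)) / delta

def cost(x):
    return (x - 3) ** 2  # a simple cost curve, minimized at x = 3

x = 0.0
for _ in range(1000):
    x -= 0.01 * derivative(cost, x)  # step opposite the derivative

print(round(x, 2))  # converges toward 3.0, the minimum of the cost
```

Stochastic gradient descent and backpropagation elaborate on exactly this loop: compute derivatives of a cost, then adjust parameters in the direction that reduces it.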

Probability and Statistics for Machine Learning (Machine Learning Foundations) LiveLessons provides you with a functional, hands-on understanding of probability theory and statistical modeling, with a focus on machine learning applications.
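One of the first results covered, the law of large numbers, can be demonstrated with nothing but the standard library (again, an illustrative sketch rather than course code): the sample mean of many fair-coin flips converges on the true probability of heads, 0.5.

```python
import random

random.seed(42)  # fixed seed for a reproducible run

# Simulate 100,000 fair-coin flips (True = heads).
flips = [random.random() < 0.5 for _ in range(100_000)]
sample_mean = sum(flips) / len(flips)

print(sample_mean)  # close to 0.5, per the law of large numbers
```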

Data Structures, Algorithms, and Machine Learning Optimization LiveLessons provides you with a functional, hands-on understanding of the essential computer science for machine learning applications.
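As a small taste of the algorithmic material (a standalone sketch, not course code), here is binary search, covered in Lesson 4: on a sorted list it finds a target in O(log n) comparisons by halving the search range each step.

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2  # halve the remaining range each iteration
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

data = list(range(0, 100, 2))   # [0, 2, 4, ..., 98]
print(binary_search(data, 42))  # index 21
print(binary_search(data, 43))  # -1 (odd numbers are absent)
```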

About the Instructor

Jon Krohn is Chief Data Scientist at the machine learning company Nebula. He authored the book Deep Learning Illustrated, an instant #1 bestseller that was translated into seven languages. He is also the host of SuperDataScience, the industry’s most listened-to podcast. Jon is renowned for his compelling lectures, which he offers at Columbia University, New York University, leading industry conferences, via O'Reilly, and via his award-winning YouTube channel. He holds a PhD from Oxford and has been publishing on machine learning in prominent academic journals since 2010; his papers have been cited more than a thousand times.

Course Requirements
  • Mathematics: Familiarity with secondary school-level mathematics will make the course easier to follow. If you are comfortable dealing with quantitative information, such as understanding charts and rearranging simple equations, then you should be well-prepared to follow along with all of the mathematics.
  • Programming: All code demos are in Python, so experience with it or another object-oriented programming language will be helpful for following along with the hands-on examples.
About Pearson Video Training

Pearson publishes expert-led video tutorials covering a wide selection of technology topics designed to teach you the skills you need to succeed. These professional and personal technology videos feature world-leading author instructors published by your trusted technology brands: Addison-Wesley, Cisco Press, Pearson IT Certification, Sams, and Que. Topics include: IT Certification, Network Security, Cisco Technology, Programming, Web Development, Mobile Development, and more. Learn more about Pearson Video training at http://www.informit.com/video.

Table of contents

  1. Linear Algebra for Machine Learning (Machine Learning Foundations): Introduction
    1. Introduction
  2. Lesson 1: Orientation to Linear Algebra
    1. Topics
    2. 1.1 Defining Linear Algebra
    3. 1.2 Solving a System of Equations Algebraically
    4. 1.3 Linear Algebra in Machine Learning and Deep Learning
    5. 1.4 Historical and Contemporary Applications
    6. 1.5 Exercise
  3. Lesson 2: Data Structures for Algebra
    1. Topics
    2. 2.1 Tensors
    3. 2.2 Scalars
    4. 2.3 Vectors and Vector Transposition
    5. 2.4 Norms and Unit Vectors
    6. 2.5 Basis, Orthogonal, and Orthonormal Vectors
    7. 2.6 Matrices
    8. 2.7 Generic Tensor Notation
    9. 2.8 Exercises
  4. Lesson 3: Common Tensor Operations
    1. Topics
    2. 3.1 Tensor Transposition
    3. 3.2 Basic Tensor Arithmetic
    4. 3.3 Reduction
    5. 3.4 The Dot Product
    6. 3.5 Exercises
  5. Lesson 4: Solving Linear Systems
    1. Topics
    2. 4.1 The Substitution Strategy
    3. 4.2 Substitution Exercises
    4. 4.3 The Elimination Strategy
    5. 4.4 Elimination Exercises
  6. Lesson 5: Matrix Multiplication
    1. Topics
    2. 5.1 Matrix-by-Vector Multiplication
    3. 5.2 Matrix-by-Matrix Multiplication
    4. 5.3 Symmetric and Identity Matrices
    5. 5.4 Exercises
    6. 5.5 Machine Learning and Deep Learning Applications
  7. Lesson 6: Special Matrices and Matrix Operations
    1. Topics
    2. 6.1 The Frobenius Norm
    3. 6.2 Matrix Inversion
    4. 6.3 Diagonal Matrices
    5. 6.4 Orthogonal Matrices
    6. 6.5 The Trace Operator
  8. Lesson 7: Eigenvectors and Eigenvalues
    1. Topics
    2. 7.1 The Eigenconcept
    3. 7.2 Exercises
    4. 7.3 Eigenvectors in Python
    5. 7.4 High-Dimensional Eigenvectors
  9. Lesson 8: Matrix Determinants and Decomposition
    1. Topics
    2. 8.1 The Determinant of a 2 x 2 Matrix
    3. 8.2 The Determinants of Larger Matrices
    4. 8.3 Exercises
    5. 8.4 Determinants and Eigenvalues
    6. 8.5 Eigendecomposition
  10. Lesson 9: Machine Learning with Linear Algebra
    1. Topics
    2. 9.1 Singular Value Decomposition
    3. 9.2 Media File Compression
    4. 9.3 The Moore-Penrose Pseudoinverse
    5. 9.4 Regression via Pseudoinversion
    6. 9.5 Principal Component Analysis
    7. 9.6 Resources for Further Study of Linear Algebra
  11. Summary
    1. Linear Algebra for Machine Learning (Machine Learning Foundations): Summary
  12. Calculus for Machine Learning: Introduction
    1. Introduction
  13. Lesson 1: Orientation to Calculus
    1. Topics
    2. 1.1 Differential versus Integral Calculus
    3. 1.2 A Brief History
    4. 1.3 Calculus of the Infinitesimals
    5. 1.4 Modern Applications
  14. Lesson 2: Limits
    1. Topics
    2. 2.1 Continuous versus Discontinuous Functions
    3. 2.2 Solving via Factoring
    4. 2.3 Solving via Approaching
    5. 2.4 Approaching Infinity
    6. 2.5 Exercises
  15. Lesson 3: Differentiation
    1. Topics
    2. 3.1 Delta Method
    3. 3.2 The Most Common Representation
    4. 3.3 Derivative Notation
    5. 3.4 Constants
    6. 3.5 Power Rule
    7. 3.6 Constant Product Rule
    8. 3.7 Sum Rule
    9. 3.8 Exercises
  16. Lesson 4: Advanced Differentiation Rules
    1. Topics
    2. 4.1 Product Rule
    3. 4.2 Quotient Rule
    4. 4.3 Chain Rule
    5. 4.4 Exercises
    6. 4.5 Power Rule on a Function Chain
  17. Lesson 5: Automatic Differentiation
    1. Topics
    2. 5.1 Introduction
    3. 5.2 Autodiff with PyTorch
    4. 5.3 Autodiff with TensorFlow
    5. 5.4 Directed Acyclic Graph of a Line Equation
    6. 5.5 Fitting a Line with Machine Learning
  18. Lesson 6: Partial Derivatives
    1. Topics
    2. 6.1 Derivatives of Multivariate Functions
    3. 6.2 Partial Derivative Exercises
    4. 6.3 Geometrical Examples
    5. 6.4 Geometrical Exercises
    6. 6.5 Notation
    7. 6.6 Chain Rule
    8. 6.7 Chain Rule Exercises
  19. Lesson 7: Gradients
    1. Topics
    2. 7.1 Single-Point Regression
    3. 7.2 Partial Derivatives of Quadratic Cost
    4. 7.3 Descending the Gradient of Cost
    5. 7.4 Gradient of Mean Squared Error
    6. 7.5 Backpropagation
    7. 7.6 Higher-Order Partial Derivatives
    8. 7.7 Exercise
  20. Lesson 8: Integrals
    1. Topics
    2. 8.1 Binary Classification
    3. 8.2 The Confusion Matrix and ROC Curve
    4. 8.3 Indefinite Integrals
    5. 8.4 Definite Integrals
    6. 8.5 Numeric Integration with Python
    7. 8.6 Exercises
    8. 8.7 Finding the Area Under the ROC Curve
    9. 8.8 Resources for Further Study of Calculus
  21. Summary
    1. Calculus for Machine Learning: Summary
  22. Probability and Statistics for Machine Learning: Introduction
    1. Introduction
  23. Lesson 1: Introduction to Probability
    1. Topics
    2. 1.1 Orientation to the Machine Learning Foundations Series
    3. 1.2 What Probability Theory Is
    4. 1.3 Events and Sample Spaces
    5. 1.4 Multiple Observations
    6. 1.5 Factorials and Combinatorics
    7. 1.6 Exercises
    8. 1.7 The Law of Large Numbers and the Gambler's Fallacy
    9. 1.8 Probability Distributions in Statistics
    10. 1.9 Bayesian versus Frequentist Statistics
    11. 1.10 Applications of Probability to Machine Learning
  24. Lesson 2: Random Variables
    1. Topics
    2. 2.1 Discrete and Continuous Variables
    3. 2.2 Probability Mass Functions
    4. 2.3 Probability Density Functions
    5. 2.4 Exercises on Probability Functions
    6. 2.5 Expected Value
    7. 2.6 Exercises on Expected Value
  25. Lesson 3: Describing Distributions
    1. Topics
    2. 3.1 The Mean, a Measure of Central Tendency
    3. 3.2 Medians
    4. 3.3 Modes
    5. 3.4 Quantiles: Percentiles, Quartiles, and Deciles
    6. 3.5 Box-and-Whisker Plots
    7. 3.6 Variance, a Measure of Dispersion
    8. 3.7 Standard Deviation
    9. 3.8 Standard Error
    10. 3.9 Covariance, a Measure of Relatedness
    11. 3.10 Correlation
  26. Lesson 4: Relationships Between Probabilities
    1. Topics
    2. 4.1 Joint Probability Distribution
    3. 4.2 Marginal Probability
    4. 4.3 Conditional Probability
    5. 4.4 Exercises
    6. 4.5 Chain Rule of Probabilities
    7. 4.6 Independent Random Variables
    8. 4.7 Conditional Independence
  27. Lesson 5: Distributions in Machine Learning
    1. Topics
    2. 5.1 Uniform
    3. 5.2 Gaussian: Normal and Standard Normal
    4. 5.3 The Central Limit Theorem
    5. 5.4 Log-Normal
    6. 5.5 Exponential and Laplace
    7. 5.6 Binomial and Multinomial
    8. 5.7 Poisson
    9. 5.8 Mixture Distributions
    10. 5.9 Preprocessing Data for Model Input
    11. 5.10 Exercises
  28. Lesson 6: Information Theory
    1. Topics
    2. 6.1 What Information Theory Is
    3. 6.2 Self-Information, Nats, and Bits
    4. 6.3 Shannon and Differential Entropy
    5. 6.4 Kullback-Leibler Divergence and Cross-Entropy
  29. Lesson 7: Introduction to Statistics
    1. Topics
    2. 7.1 Applications of Statistics to Machine Learning
    3. 7.2 Review of Essential Probability Theory
    4. 7.3 z-scores and Outliers
    5. 7.4 Exercises on z-scores
    6. 7.5 p-values
    7. 7.6 Exercises on p-values
  30. Lesson 8: Comparing Means
    1. Topics
    2. 8.1 Single-Sample t-tests and Degrees of Freedom
    3. 8.2 Independent t-tests
    4. 8.3 Paired t-tests
    5. 8.4 Applications to Machine Learning
    6. 8.5 Exercises
    7. 8.6 Confidence Intervals
    8. 8.7 ANOVA: Analysis of Variance
  31. Lesson 9: Correlation
    1. Topics
    2. 9.1 The Pearson Correlation Coefficient
    3. 9.2 R-squared Coefficient of Determination
    4. 9.3 Correlation versus Causation
    5. 9.4 Correcting for Multiple Comparisons
  32. Lesson 10: Regression
    1. Topics
    2. 10.1 Independent versus Dependent Variables
    3. 10.2 Linear Regression to Predict Continuous Values
    4. 10.3 Fitting a Line to Points on a Cartesian Plane
    5. 10.4 Linear Least Squares Exercise
    6. 10.5 Ordinary Least Squares
    7. 10.6 Categorical "Dummy" Features
    8. 10.7 Logistic Regression to Predict Categories
    9. 10.8 Open-Ended Exercises
  33. Lesson 11: Bayesian Statistics
    1. Topics
    2. 11.1 Machine Learning versus Frequentist Statistics
    3. 11.2 When to Use Bayesian Statistics
    4. 11.3 Prior Probabilities
    5. 11.4 Bayes' Theorem
    6. 11.5 Resources for Further Study of Probability and Statistics
  34. Summary
    1. Probability and Statistics for Machine Learning: Summary
  35. Data Structures, Algorithms, and Machine Learning Optimization: Introduction
    1. Introduction
  36. Lesson 1: Orientation to Data Structures and Algorithms
    1. Topics
    2. 1.1 Orientation to the Machine Learning Foundations Series
    3. 1.2 A Brief History of Data
    4. 1.3 A Brief History of Algorithms
    5. 1.4 Applications to Machine Learning
  37. Lesson 2: "Big O" Notation
    1. Topics
    2. 2.1 Introduction
    3. 2.2 Constant Time
    4. 2.3 Linear Time
    5. 2.4 Polynomial Time
    6. 2.5 Common Runtimes
    7. 2.6 Best versus Worst Case
  38. Lesson 3: List-Based Data Structures
    1. Topics
    2. 3.1 Lists
    3. 3.2 Arrays
    4. 3.3 Linked Lists
    5. 3.4 Doubly-Linked Lists
    6. 3.5 Stacks
    7. 3.6 Queues
    8. 3.7 Deques
  39. Lesson 4: Searching and Sorting
    1. Topics
    2. 4.1 Binary Search
    3. 4.2 Bubble Sort
    4. 4.3 Merge Sort
    5. 4.4 Quick Sort
  40. Lesson 5: Sets and Hashing
    1. Topics
    2. 5.1 Maps and Dictionaries
    3. 5.2 Sets
    4. 5.3 Hash Functions
    5. 5.4 Collisions
    6. 5.5 Load Factor
    7. 5.6 Hash Maps
    8. 5.7 String Keys
    9. 5.8 Hashing in ML
  41. Lesson 6: Trees
    1. Topics
    2. 6.1 Introduction
    3. 6.2 Decision Trees
    4. 6.3 Random Forests
    5. 6.4 XGBoost: Gradient-Boosted Trees
    6. 6.5 Additional Concepts
  42. Lesson 7: Graphs
    1. Topics
    2. 7.1 Introduction
    3. 7.2 Directed versus Undirected Graphs
    4. 7.3 DAGs: Directed Acyclic Graphs
    5. 7.4 Additional Concepts
    6. 7.5 Bonus: Pandas DataFrames
    7. 7.6 Resources for Further Study of DSA
  43. Lesson 8: Machine Learning Optimization
    1. Topics
    2. 8.1 Statistics versus Machine Learning
    3. 8.2 Objective Functions
    4. 8.3 Mean Absolute Error
    5. 8.4 Mean Squared Error
    6. 8.5 Minimizing Cost with Gradient Descent
    7. 8.6 Gradient Descent from Scratch with PyTorch
    8. 8.7 Critical Points
    9. 8.8 Stochastic Gradient Descent
    10. 8.9 Learning Rate Scheduling
    11. 8.10 Maximizing Reward with Gradient Ascent
  44. Lesson 9: Fancy Deep Learning Optimizers
    1. Topics
    2. 9.1 Jacobian Matrices
    3. 9.2 Second-Order Optimization and Hessians
    4. 9.3 Momentum
    5. 9.4 Adaptive Optimizers
    6. 9.5 Congratulations and Next Steps
  45. Summary
    1. Data Structures, Algorithms, and Machine Learning Optimization: Summary

Product information

  • Title: The Essential Machine Learning Foundations: Math, Probability, Statistics, and Computer Science (Video Collection)
  • Author(s): Jon Krohn
  • Release date: March 2022
  • Publisher(s): Pearson
  • ISBN: 0137903243