Book description
Extract accurate information from data to train and improve machine learning models using NumPy, SciPy, pandas, and scikit-learn libraries
Key Features
- Discover solutions for feature generation, feature extraction, and feature selection
- Uncover the end-to-end feature engineering process across continuous, discrete, and unstructured datasets
- Implement modern feature extraction techniques using Python's pandas, scikit-learn, SciPy, and NumPy libraries
Book Description
Feature engineering is invaluable for developing and enriching your machine learning models. In this cookbook, you will work with the best tools to streamline your feature engineering pipelines and techniques, and to simplify and improve the quality of your code.
Using Python libraries such as pandas, scikit-learn, Featuretools, and Feature-engine, you'll learn how to work with both continuous and discrete datasets, and how to transform features from unstructured datasets. You will develop the skills necessary to select the best features as well as the most suitable extraction techniques. This book covers Python recipes that help you automate feature engineering to simplify complex processes. You'll also get to grips with different feature engineering strategies, such as the Box-Cox, power, and log transforms, across machine learning, reinforcement learning, and natural language processing (NLP) domains.
By the end of this book, you'll have discovered tips and practical solutions to all of your feature engineering problems.
What you will learn
- Simplify your feature engineering pipelines with powerful Python packages
- Get to grips with imputing missing values
- Encode categorical variables with a wide set of techniques
- Extract insights from text quickly and effortlessly
- Develop features from transactional data and time series data
- Derive new features by combining existing variables
- Understand how to transform, discretize, and scale your variables
- Create informative variables from date and time
Who this book is for
This book is for machine learning professionals, AI engineers, data scientists, and NLP and reinforcement learning engineers who want to optimize and enrich their machine learning models with the best features. Knowledge of machine learning and Python coding will assist you with understanding the concepts covered in this book.
Table of contents
- Title Page
- Copyright and Credits
- About Packt
- Contributors
- Preface
- Foreseeing Variable Problems When Building ML Models
- Technical requirements
- Identifying numerical and categorical variables
- Quantifying missing data
- Determining cardinality in categorical variables
- Pinpointing rare categories in categorical variables
- Identifying a linear relationship
- Identifying a normal distribution
- Distinguishing variable distribution
- Highlighting outliers
- Comparing feature magnitude
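As a flavour of the recipes above, here is a minimal sketch of identifying numerical and categorical variables and quantifying missing data with pandas. The toy DataFrame and column names are illustrative assumptions, not data from the book:

```python
import numpy as np
import pandas as pd

# Hypothetical toy dataset (not from the book)
df = pd.DataFrame({
    "age": [25, 32, np.nan, 47],
    "income": [50000.0, np.nan, 62000.0, 58000.0],
    "city": ["Lisbon", "Madrid", "Lisbon", None],
})

# Identify numerical and categorical variables by dtype
numerical = df.select_dtypes(include="number").columns.tolist()
categorical = df.select_dtypes(include="object").columns.tolist()

# Quantify missing data as the fraction of missing observations per column
missing_fraction = df.isnull().mean()

print(numerical)    # ['age', 'income']
print(categorical)  # ['city']
```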
- Imputing Missing Data
- Technical requirements
- Removing observations with missing data
- Performing mean or median imputation
- Implementing mode or frequent category imputation
- Replacing missing values with an arbitrary number
- Capturing missing values in a bespoke category
- Replacing missing values with a value at the end of the distribution
- Implementing random sample imputation
- Adding a missing value indicator variable
- Performing multivariate imputation by chained equations
- Assembling an imputation pipeline with scikit-learn
- Assembling an imputation pipeline with Feature-engine
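To illustrate one of these recipes, a minimal sketch of median imputation with scikit-learn's `SimpleImputer` (the toy data and column names are assumptions, not the book's examples):

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

# Hypothetical data with missing values
X = pd.DataFrame({
    "age": [25.0, np.nan, 47.0, 36.0],
    "income": [50.0, 62.0, np.nan, 58.0],
})

# Median imputation: replace each NaN with that column's median
imputer = SimpleImputer(strategy="median")
X_imputed = pd.DataFrame(imputer.fit_transform(X), columns=X.columns)

print(imputer.statistics_)  # medians learned during fit: [36. 58.]
```

Fitting learns the medians from the training data, so the same values can later be applied to unseen data via `imputer.transform`.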
- Encoding Categorical Variables
- Technical requirements
- Creating binary variables through one-hot encoding
- Performing one-hot encoding of frequent categories
- Replacing categories with ordinal numbers
- Replacing categories with counts or frequency of observations
- Encoding with integers in an ordered manner
- Encoding with the mean of the target
- Encoding with the Weight of Evidence
- Grouping rare or infrequent categories
- Performing binary encoding
- Performing feature hashing
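As a taste of the encoding recipes, a minimal sketch of one-hot encoding with pandas (the `colour` variable is an illustrative assumption):

```python
import pandas as pd

df = pd.DataFrame({"colour": ["red", "blue", "red", "green"]})

# One-hot encoding: one binary column per category.
# drop_first=True yields k-1 dummies, avoiding redundancy
# for linear models (the dropped category is implied).
dummies = pd.get_dummies(df["colour"], prefix="colour", drop_first=True)

print(dummies.columns.tolist())  # ['colour_green', 'colour_red']
```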
- Transforming Numerical Variables
- Technical requirements
- Transforming variables with the logarithm
- Transforming variables with the reciprocal function
- Using square and cube root to transform variables
- Using power transformations on numerical variables
- Performing Box-Cox transformation on numerical variables
- Performing Yeo-Johnson transformation on numerical variables
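A minimal sketch of the Box-Cox and Yeo-Johnson transformations using SciPy (the sample values are an assumption; with no lambda supplied, SciPy fits it by maximum likelihood):

```python
import numpy as np
from scipy import stats

# Box-Cox requires strictly positive values; Yeo-Johnson does not.
x = np.array([1.0, 2.0, 4.0, 8.0, 16.0])

x_boxcox, lmbda = stats.boxcox(x)
x_yeojohnson, lmbda_yj = stats.yeojohnson(x)

print(lmbda)  # fitted lambda; the transform is monotone for any lambda
```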
- Performing Variable Discretization
- Technical requirements
- Dividing the variable into intervals of equal width
- Sorting the variable values in intervals of equal frequency
- Performing discretization followed by categorical encoding
- Allocating the variable values in arbitrary intervals
- Performing discretization with k-means clustering
- Using decision trees for discretization
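A minimal sketch contrasting the first two discretization recipes with pandas (the sample series is an illustrative assumption):

```python
import pandas as pd

x = pd.Series([1, 7, 15, 22, 30, 41, 55, 60])

# Equal-width: 4 intervals spanning the same range of values
equal_width = pd.cut(x, bins=4)

# Equal-frequency: 4 intervals each holding roughly the same
# number of observations (quantile-based)
equal_freq = pd.qcut(x, q=4)

print(equal_width.value_counts().sort_index())
```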
- Working with Outliers
- Deriving Features from Dates and Time Variables
- Technical requirements
- Extracting date and time parts from a datetime variable
- Deriving representations of the year and month
- Creating representations of day and week
- Extracting time parts from a time variable
- Capturing the elapsed time between datetime variables
- Working with time in different time zones
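As a flavour of these recipes, a minimal sketch of extracting date and time parts, and elapsed time between datetimes, with the pandas `.dt` accessor (the timestamps are illustrative assumptions):

```python
import pandas as pd

# Hypothetical datetime variable
s = pd.to_datetime(pd.Series(["2020-01-15 08:30:00", "2020-06-01 22:05:00"]))

# Extract date and time parts as new features
features = pd.DataFrame({
    "year": s.dt.year,
    "month": s.dt.month,
    "dayofweek": s.dt.dayofweek,  # Monday=0
    "hour": s.dt.hour,
})

# Capture elapsed time between two datetime variables
other = pd.to_datetime(pd.Series(["2020-01-20", "2020-06-03"]))
elapsed_days = (other - s).dt.days

print(features)
```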
- Performing Feature Scaling
- Applying Mathematical Computations to Features
- Creating Features with Transactional and Time Series Data
- Extracting Features from Text Variables
- Other Books You May Enjoy
Product information
- Title: Python Feature Engineering Cookbook
- Author(s):
- Release date: January 2020
- Publisher(s): Packt Publishing
- ISBN: 9781789806311