When working with machine learning models, it’s easy to try them all out without understanding what each one does or when to use it. This cheat sheet is a handy guide to the most widely used machine learning models, their advantages, disadvantages, and key use cases.
Supervised Learning
Supervised learning models map inputs to outputs and attempt to apply the patterns learned from past data to unseen data. Supervised learning models can be either regression models, where we try to predict a continuous variable, like stock prices, or classification models, where we try to predict a binary or multi-class variable, like whether a customer will churn. In the sections below, we’ll explain two popular types of supervised learning models: linear models and tree-based models.
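To make that workflow concrete, here is a minimal sketch, assuming scikit-learn and synthetic placeholder data: a classifier is fit on past (training) data and then scored on held-out rows that stand in for unseen data.

```python
# Minimal supervised learning workflow: fit on past data, predict on unseen data.
# Assumes scikit-learn is installed; the data here is synthetic placeholder data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic "past data": 1,000 rows, 10 features, binary label (e.g., churn yes/no).
X, y = make_classification(n_samples=1000, n_features=10, random_state=42)

# Hold out 20% of the rows to stand in for unseen data.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = LogisticRegression()
model.fit(X_train, y_train)          # learn patterns from past data
print(model.score(X_test, y_test))   # accuracy on data the model has never seen
```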
Linear Models
In a nutshell, linear models fit a best-fit line (or hyperplane) to predict unseen data, assuming that the output is a linear combination of the input features. In this section, we’ll cover commonly used linear models in machine learning, their advantages, and their disadvantages.
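Concretely, for input features x₁ through xₙ, a linear model’s prediction takes the form ŷ = w₀ + w₁x₁ + … + wₙxₙ, where the weights w are learned from the training data; the models below differ mainly in how (or whether) those weights are penalized during training.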
Algorithm | Description | Applications | Advantages | Disadvantages |
--- | --- | --- | --- | --- |
Linear Regression | A simple algorithm that models a linear relationship between inputs and a continuous numerical output variable | Stock price prediction; predicting housing prices; predicting customer lifetime value | Explainable method; interpretable results through its output coefficients; faster to train than most other machine learning models | Assumes linearity between inputs and output; sensitive to outliers; can overfit with small, high-dimensional data |
Logistic Regression | A simple algorithm that models a linear relationship between inputs and a categorical output (1 or 0) | Predicting credit risk scores; customer churn prediction | Interpretable and explainable; less prone to overfitting when using regularization; applicable to multi-class predictions | Assumes linearity between inputs and outputs; can overfit with small, high-dimensional data |
Ridge Regression | A regression-family model that adds an L2 penalty, shrinking the coefficients of low-value features toward (but not exactly to) zero. Can be used for classification or regression | Predictive maintenance for automobiles; sales revenue prediction | Less prone to overfitting; well suited to data that suffers from multicollinearity; explainable and interpretable | All predictors are kept in the final model; doesn’t perform feature selection |
Lasso Regression | A regression-family model that adds an L1 penalty, shrinking the coefficients of low-value features all the way to zero. Can be used for classification or regression | Predicting housing prices; predicting clinical outcomes based on health data | Less prone to overfitting; can handle high-dimensional data; performs feature selection automatically | Among highly correlated features, it arbitrarily keeps one and drops the rest, which can hurt interpretability |
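As a rough illustration of how the penalties differ, here is a minimal sketch, assuming scikit-learn and synthetic data in which only two of five features actually drive the target; note how lasso’s L1 penalty zeroes out the uninformative coefficients while ridge only shrinks them.

```python
# Sketch comparing the penalty behavior of the linear models above.
# Assumes scikit-learn; the data is synthetic. LogisticRegression follows the
# same fit/predict interface for categorical targets.
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, LinearRegression, Ridge

# Only 2 of the 5 features are informative; the rest are noise.
X, y = make_regression(n_samples=200, n_features=5, n_informative=2,
                       noise=10.0, random_state=0)

for model in (
    LinearRegression(),   # no penalty on the coefficients
    Ridge(alpha=1.0),     # L2 penalty: shrinks coefficients toward zero
    Lasso(alpha=1.0),     # L1 penalty: can shrink coefficients exactly to zero
):
    model.fit(X, y)
    print(type(model).__name__, model.coef_.round(2))
```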
Tree-based models
In a nutshell, tree-based models make predictions by following a series of “if-then” rules learned from the data. In this section, we’ll cover commonly used tree-based models in machine learning, their advantages, and their disadvantages.
Algorithm | Description | Applications | Advantages | Disadvantages |
--- | --- | --- | --- | --- |
Decision Tree | Builds decision rules on the features to produce predictions. Can be used for classification or regression | Customer churn prediction; credit score modeling; disease prediction | Explainable and interpretable; can handle missing values | Prone to overfitting; sensitive to outliers |
Random Forests | An ensemble learning method that combines the output of multiple decision trees | Credit score modeling; predicting housing prices | Reduces overfitting; higher accuracy than a single decision tree | Training complexity can be high; not very interpretable |
Gradient Boosting Regression | Employs boosting to build a predictive model from an ensemble of weak learners | Predicting car emissions; predicting ride-hailing fare amounts | Better accuracy compared to simpler regression models; can handle multicollinearity; can handle non-linear relationships | Sensitive to outliers, which can lead to overfitting; computationally expensive and complex |
XGBoost | An efficient and flexible gradient boosting algorithm. Can be used for both classification and regression tasks | Churn prediction; claims processing in insurance | Provides accurate results; captures non-linear relationships | Hyperparameter tuning can be complex; does not perform well on sparse datasets |
LightGBM Regressor | A gradient boosting framework designed to be more efficient than other implementations | Predicting flight times for airlines; predicting cholesterol levels based on health data | Can handle large amounts of data; computationally efficient with fast training speed; low memory usage | Can overfit on small datasets due to leaf-wise splitting; hyperparameter tuning can be complex |
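The sketch below, assuming scikit-learn plus the separately installed xgboost and lightgbm packages, fits each of the tree-based models above on the same synthetic data through the shared scikit-learn interface.

```python
# Sketch of the tree-based models above on synthetic regression data.
# Assumes scikit-learn; XGBoost and LightGBM require `pip install xgboost lightgbm`.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor
from xgboost import XGBRegressor
from lightgbm import LGBMRegressor

X, y = make_regression(n_samples=500, n_features=8, noise=5.0, random_state=0)

models = [
    DecisionTreeRegressor(max_depth=4),           # single tree: interpretable, overfits easily
    RandomForestRegressor(n_estimators=100),      # averages many trees to reduce overfitting
    GradientBoostingRegressor(n_estimators=100),  # sequentially boosts weak learners
    XGBRegressor(n_estimators=100),               # efficient, regularized gradient boosting
    LGBMRegressor(n_estimators=100),              # leaf-wise growth, fast on large data
]
for model in models:
    model.fit(X, y)
    print(type(model).__name__, round(model.score(X, y), 3))  # R^2 on the training set
```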
Unsupervised Learning
Unsupervised learning is about discovering general patterns in data without labeled outputs. The most popular example is clustering, such as segmenting customers and users. This type of segmentation is generalizable and can be applied broadly, for example to documents, companies, and genes. Unsupervised learning consists of clustering models, which learn how to group similar data points together, and association algorithms, which find rules describing items that frequently occur together.
Clustering models
Algorithm | Description | Applications | Advantages | Disadvantages |
--- | --- | --- | --- | --- |
K-Means | The most widely used clustering approach; it determines K clusters based on Euclidean distances | Customer segmentation; recommendation systems | Scales to large datasets; simple to implement and interpret; results in tight clusters | Requires the expected number of clusters up front; struggles with varying cluster sizes and densities |
Hierarchical Clustering | A “bottom-up” approach in which each data point starts as its own cluster, and the two closest clusters are merged iteratively | Fraud detection; document clustering based on similarity | No need to specify the number of clusters; the resulting dendrogram is informative | Doesn’t always find the best clustering; not suitable for large datasets due to high complexity |
Gaussian Mixture Models | A probabilistic model for modeling normally distributed clusters within a dataset | Customer segmentation; recommendation systems | Computes the probability that an observation belongs to each cluster; can identify overlapping clusters; more accurate results compared to K-Means | Requires complex tuning; requires setting the expected number of mixture components or clusters |
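Here is a minimal sketch, assuming scikit-learn and synthetic blob data, of the three approaches above; note that K-Means and Gaussian mixtures need the number of clusters up front, and that the mixture model returns soft probabilities rather than hard assignments.

```python
# Sketch of the clustering models above on synthetic data (assumes scikit-learn).
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

# Synthetic data with 3 well-separated groups; no labels are used below.
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

kmeans_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Bottom-up merging; a dendrogram can be drawn with scipy.cluster.hierarchy.
hier_labels = AgglomerativeClustering(n_clusters=3).fit_predict(X)

gmm = GaussianMixture(n_components=3, random_state=0).fit(X)
print(gmm.predict_proba(X)[:3].round(3))  # per-cluster membership probabilities
```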
Association
Algorithm | Description | Applications | Advantages | Disadvantages |
--- | --- | --- | --- | --- |
Apriori Algorithm | A rule-based approach that identifies the most frequent itemsets in a dataset, using prior knowledge of frequent itemset properties to prune the search | Product placement; recommendation engines; promotion optimization | Results are intuitive and interpretable; exhaustive approach that finds all rules satisfying the support and confidence thresholds | Generates many uninteresting itemsets; computationally and memory intensive; results in many overlapping itemsets |
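As an illustration, here is a minimal sketch using the mlxtend library’s Apriori implementation (an assumed third-party dependency, not part of the algorithm itself) on a toy basket dataset.

```python
# Apriori sketch on toy transaction data.
# Assumes `pip install mlxtend pandas` for the encoder and apriori implementation.
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules
from mlxtend.preprocessing import TransactionEncoder

transactions = [
    ["milk", "bread", "butter"],
    ["bread", "butter"],
    ["milk", "bread"],
    ["milk", "butter"],
]

# One-hot encode the baskets into a boolean item matrix.
encoder = TransactionEncoder()
onehot = pd.DataFrame(encoder.fit(transactions).transform(transactions),
                      columns=encoder.columns_)

# Frequent itemsets with support >= 50%, then rules filtered by confidence.
itemsets = apriori(onehot, min_support=0.5, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.7)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```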