Overview
Empirical Risk Minimization (ERM) is a foundational concept in machine learning: a model is chosen by minimizing the average loss it incurs on a training dataset. By doing so, ERM aims to produce models that not only fit the training data well but also generalize effectively to new, unseen data. Understanding ERM involves grasping a handful of key terms, summarized below.
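To make this concrete, here is a minimal sketch of ERM: it minimizes the average squared error of a linear model with plain gradient descent. The dataset, model form, learning rate, and step count are all illustrative assumptions, not part of any particular library or result.

```python
import numpy as np

# Illustrative data (an assumption for this sketch): y is roughly 2x.
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([2.1, 3.9, 6.2, 7.8])

def empirical_risk(w, b):
    """Average squared-error loss over the training set -- the empirical risk."""
    predictions = X @ w + b
    return np.mean((predictions - y) ** 2)

# Minimize the empirical risk with plain gradient descent.
w, b = np.zeros(1), 0.0
lr = 0.01  # assumed learning rate
for _ in range(1000):
    residual = X @ w + b - y               # per-example prediction errors
    grad_w = 2 * X.T @ residual / len(y)   # gradient of the risk w.r.t. w
    grad_b = 2 * residual.mean()           # gradient of the risk w.r.t. b
    w -= lr * grad_w
    b -= lr * grad_b

print(f"empirical risk after training: {empirical_risk(w, b):.4f}")
```

The quantity being driven down in the loop, the mean of the squared errors, is exactly the empirical risk that gives ERM its name.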
Key Terms
Loss Function: A function that quantifies how far a model's predictions are from the true values; ERM minimizes its average over the training set. Example: Mean Squared Error (MSE) is a common loss function.
Overfitting: Fitting the training data so closely that performance on new data suffers. Example: A model that memorizes training examples instead of learning general patterns.
Generalization: A model's ability to perform well on data it was not trained on. Example: A model that predicts accurately on test data after training.
Validation: Evaluating a model on data held out from training in order to guide modeling decisions. Example: Using a validation set to tune hyperparameters, as in the sketch after this list.
Bias-Variance Tradeoff: The balance between a model that is too simple and one that is too sensitive to its training data. Example: A model with high bias may underfit, while one with high variance may overfit.
Training: Fitting a model by adjusting its parameters to reduce the loss on the training data. Example: Using training data to adjust model parameters.
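To see several of these terms interact, here is a minimal sketch with an assumed synthetic dataset (a noisy sine curve) and assumed polynomial degrees: it compares training loss against validation loss as model capacity grows, and a widening gap between the two signals overfitting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed synthetic dataset: a noisy sine curve over [0, 1].
x = rng.uniform(0.0, 1.0, size=40)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=40)

# Hold out a validation set: train on 30 points, validate on the remaining 10.
x_train, y_train = x[:30], y[:30]
x_val, y_val = x[30:], y[30:]

def mse(y_true, y_pred):
    """Mean Squared Error -- the loss function used throughout this sketch."""
    return np.mean((y_true - y_pred) ** 2)

# Fit polynomials of increasing degree (assumed values) and compare losses.
for degree in (1, 4, 10):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_loss = mse(y_train, np.polyval(coeffs, x_train))
    val_loss = mse(y_val, np.polyval(coeffs, x_val))
    print(f"degree {degree:2d}: train MSE = {train_loss:.3f}, val MSE = {val_loss:.3f}")
```

The low-degree fit underfits (high bias: both losses stay high), while the high-degree fit drives training MSE down but validation MSE up (high variance) — precisely the pattern a held-out validation set is meant to catch.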