Ridge regression modifies the ordinary least squares (OLS) estimator by adding a penalty proportional to the squared \( \ell_2 \) norm of the coefficients, scaled by a tuning parameter \( \lambda \geq 0 \). The objective function for ridge regression is given by:

\[
\hat{\beta}^{\text{ridge}} = \arg\min_{\beta} \left\{ \sum_{i=1}^{n} \left( y_i - \beta_0 - \sum_{j=1}^{p} x_{ij} \beta_j \right)^2 + \lambda \sum_{j=1}^{p} \beta_j^2 \right\}
\]
Here, \( \lambda \) is the tuning parameter, \( y_i \) are the observed outcomes, \( \beta_j \) are the coefficients, and \( x_{ij} \) are the predictor variables; the intercept \( \beta_0 \) is conventionally left unpenalized. The penalty term, \( \lambda \sum_{j=1}^{p} \beta_j^2 \), shrinks the coefficients toward zero, which reduces variance at the cost of some bias and typically improves generalization.
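As a concrete illustration, the ridge objective has a closed-form minimizer, \( \hat{\beta} = (X^\top X + \lambda I)^{-1} X^\top y \). Below is a minimal NumPy sketch of this solution; the function name `ridge_fit` is our own, and it assumes the predictors are centered/standardized so that no intercept term needs special handling:

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge estimate: (X^T X + lam * I)^{-1} X^T y.

    Assumes X (n x p) is centered/standardized so no intercept
    is included or penalized. lam = 0 recovers ordinary least squares.
    """
    n, p = X.shape
    # Solve the regularized normal equations rather than inverting explicitly
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
```

Increasing `lam` shrinks the fitted coefficient vector toward zero: `ridge_fit(X, y, 0.0)` reproduces the OLS fit, while larger values trade bias for lower variance.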