In essence, gradient boosting builds an ensemble of weak learners, typically shallow decision trees, in a sequential manner. Each new model attempts to correct the errors of the models before it. This is achieved by fitting each model to the residuals (the differences between the actual and predicted values) and adding its scaled predictions to the running total, so that each iteration reduces the remaining error. For squared-error loss, the residuals are exactly the negative gradient of the loss with respect to the current predictions, which is what puts the "gradient" in gradient boosting and lets the same procedure generalize to other differentiable losses.
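To make this concrete, here is a minimal sketch of the procedure for squared-error regression. It is not any particular library's implementation: the function names (`gradient_boost_fit`, `gradient_boost_predict`) and the hyperparameter values are illustrative choices, and scikit-learn's `DecisionTreeRegressor` stands in for the weak learner.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor


def gradient_boost_fit(X, y, n_estimators=100, learning_rate=0.1, max_depth=2):
    """Fit a gradient-boosted tree ensemble for squared-error regression (illustrative sketch)."""
    # Initial prediction: the target mean, which minimizes squared error.
    f0 = float(np.mean(y))
    predictions = np.full(len(y), f0)
    trees = []
    for _ in range(n_estimators):
        # Residuals = negative gradient of squared-error loss at the current predictions.
        residuals = y - predictions
        # Fit a shallow tree (the weak learner) to the residuals.
        tree = DecisionTreeRegressor(max_depth=max_depth)
        tree.fit(X, residuals)
        # Add the new tree's contribution, shrunk by the learning rate.
        predictions += learning_rate * tree.predict(X)
        trees.append(tree)
    return f0, trees


def gradient_boost_predict(X, f0, trees, learning_rate=0.1):
    """Sum the initial prediction and every tree's shrunken contribution."""
    pred = np.full(X.shape[0], f0)
    for tree in trees:
        pred += learning_rate * tree.predict(X)
    return pred


# Usage on synthetic data: learn a noisy sine curve.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=200)
f0, trees = gradient_boost_fit(X, y)
print(gradient_boost_predict(X[:5], f0, trees))
```

The learning rate shrinks each tree's contribution, trading more iterations for better generalization; real implementations add refinements such as subsampling and regularized tree construction on top of this basic loop.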