Evaluation of risk prediction models involves several key metrics and methodologies:
1. Discrimination: This measures the model's ability to differentiate between those who will experience the event and those who will not. The standard metric is the C-statistic, which for binary outcomes is equivalent to the Area Under the Receiver Operating Characteristic Curve (AUC); see the first sketch after this list.
2. Calibration: This assesses the agreement between predicted probabilities and observed outcomes. Tools such as the calibration plot and the Hosmer-Lemeshow test are often used (see the second sketch after this list).
3. Reclassification: This method evaluates the improvement in prediction offered by a new model compared to an existing one. The Net Reclassification Improvement (NRI) and Integrated Discrimination Improvement (IDI) are frequently employed (see the third sketch after this list).
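As a minimal sketch of discrimination, the C-statistic can be computed with scikit-learn's roc_auc_score; the outcome and risk arrays below are purely illustrative, not drawn from any particular study.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Illustrative data: observed outcomes (1 = event) and predicted risks.
y_true = np.array([0, 0, 1, 0, 1, 1, 0, 1])
y_risk = np.array([0.10, 0.30, 0.70, 0.20, 0.80, 0.55, 0.40, 0.90])

# C-statistic: the probability that a randomly chosen event case is
# assigned a higher predicted risk than a randomly chosen non-event case.
print(f"C-statistic (AUC): {roc_auc_score(y_true, y_risk):.3f}")
```

A value of 0.5 corresponds to chance-level discrimination, and 1.0 to perfect separation of events from non-events by predicted risk.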
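For calibration, a common approach is to sort predictions into roughly equal-sized risk groups (often deciles) and compare observed with expected event counts in each group; the Hosmer-Lemeshow statistic formalizes this comparison. The sketch below reuses the y_true and y_risk arrays above; the function name hosmer_lemeshow is our own, not a library API, and implementations differ in how they form the groups.

```python
import numpy as np
from scipy import stats

def hosmer_lemeshow(y_true, y_risk, n_groups=10):
    """Hosmer-Lemeshow goodness-of-fit statistic over risk-ordered groups.

    Compares observed and expected event counts within each group; under
    the null hypothesis of good calibration the statistic is approximately
    chi-square distributed with n_groups - 2 degrees of freedom.
    """
    order = np.argsort(y_risk)
    chi2 = 0.0
    for g in np.array_split(order, n_groups):    # roughly equal-sized groups
        n = len(g)
        observed = y_true[g].sum()               # observed events in group
        expected = y_risk[g].sum()               # expected events in group
        pi = expected / n                        # mean predicted risk
        chi2 += (observed - expected) ** 2 / (n * pi * (1 - pi))
    p_value = stats.chi2.sf(chi2, df=n_groups - 2)
    return chi2, p_value
```

scikit-learn's calibration_curve provides the plotting counterpart: it returns the per-bin mean predicted risk and observed event rate that make up a calibration plot.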
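Reclassification metrics compare two models head to head. Below is a minimal sketch of the categorical NRI and the IDI; the risk cutoffs (10% and 20%) and the function name nri_idi are illustrative choices for this example, not fixed conventions.

```python
import numpy as np

def nri_idi(y_true, p_old, p_new, cutoffs=(0.10, 0.20)):
    """Categorical NRI and IDI for a new model versus an existing one.

    NRI sums the net proportion of events moved to a higher risk category
    and the net proportion of non-events moved to a lower one. IDI is the
    change in the mean predicted risk difference between events and
    non-events (the discrimination slope).
    """
    cat_old = np.digitize(p_old, cutoffs)   # 0 = low, 1 = mid, 2 = high risk
    cat_new = np.digitize(p_new, cutoffs)
    events, nonevents = (y_true == 1), (y_true == 0)

    up, down = cat_new > cat_old, cat_new < cat_old
    nri = (up[events].mean() - down[events].mean()) \
        + (down[nonevents].mean() - up[nonevents].mean())

    idi = (p_new[events].mean() - p_old[events].mean()) \
        - (p_new[nonevents].mean() - p_old[nonevents].mean())
    return nri, idi
```

A positive NRI indicates that, on net, the new model moves events into higher risk categories and non-events into lower ones; the IDI captures the same improvement on the continuous probability scale without requiring category cutoffs.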