Model performance can be significantly affected by class imbalance. Common metrics such as accuracy can be misleading on imbalanced datasets: if 99% of the data belong to the majority class, a model that always predicts the majority class achieves 99% accuracy while failing to identify a single minority-class instance. Metrics that account for imbalance, such as precision, recall, the F1-score, and the area under the ROC curve (AUC-ROC), are usually more informative in these scenarios.
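The 99%-accuracy pitfall above can be demonstrated in a few lines of plain Python. The dataset and the always-majority baseline are invented for illustration; the metric formulas are the standard definitions of accuracy, precision, recall, and F1.

```python
def metrics(y_true, y_pred, positive=1):
    """Compute accuracy, and precision/recall/F1 for the positive (minority) class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    accuracy = correct / len(y_true)
    # Guard against division by zero when the model never predicts the positive class.
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return accuracy, precision, recall, f1

# Hypothetical dataset: 99% majority class (0), 1% minority class (1).
y_true = [1] * 10 + [0] * 990
y_pred = [0] * 1000  # a degenerate model that always predicts the majority class

acc, prec, rec, f1 = metrics(y_true, y_pred)
print(f"accuracy={acc:.2f} precision={prec:.2f} recall={rec:.2f} f1={f1:.2f}")
# → accuracy=0.99 precision=0.00 recall=0.00 f1=0.00
```

The baseline scores 99% accuracy yet zero recall and zero F1 on the minority class, which is exactly why the imbalance-aware metrics are preferred here.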