The kappa statistic, also known as Cohen's kappa, is a measure of agreement or concordance between two raters or methods. It is particularly useful in epidemiology for assessing the reliability of diagnostic tests, screening tools, or any other measurement involving categorical outcomes. Unlike simple percentage agreement, the kappa statistic adjusts for the agreement that could occur by chance: it is computed as the observed agreement minus the expected chance agreement, divided by one minus the expected chance agreement.
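As a minimal sketch of this calculation, the following Python snippet computes Cohen's kappa for two raters classifying the same subjects; the rater names and example ratings are hypothetical and used only for illustration.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Compute Cohen's kappa for two equal-length lists of categorical ratings."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)

    # Observed agreement: proportion of subjects rated identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    # Expected (chance) agreement: sum over categories of the product
    # of each rater's marginal proportions.
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    p_e = sum((counts_a[c] / n) * (counts_b[c] / n)
              for c in set(counts_a) | set(counts_b))

    # Kappa: chance-corrected agreement.
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two raters classifying 10 screening results
# as "pos" or "neg". Observed agreement is 0.8, chance agreement 0.5,
# so kappa = (0.8 - 0.5) / (1 - 0.5) = 0.6.
rater_1 = ["pos", "pos", "neg", "neg", "pos", "neg", "pos", "neg", "neg", "pos"]
rater_2 = ["pos", "neg", "neg", "neg", "pos", "neg", "pos", "pos", "neg", "pos"]
print(round(cohens_kappa(rater_1, rater_2), 2))  # 0.6
```

In this hypothetical example the two raters agree on 80% of subjects, but because half of that agreement would be expected by chance alone, kappa reports the more conservative value of 0.6.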