The kappa statistic is a measure of inter-rater agreement for categorical items. It is used to determine the level of agreement between two or more raters who each classify items into mutually exclusive categories. Unlike simple percent agreement, kappa accounts for the possibility of the agreement occurring by chance: for two raters it is defined as the observed agreement minus the expected chance agreement, divided by one minus the expected chance agreement, where chance agreement is estimated from each rater's marginal category frequencies.
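The following is a minimal Python sketch of this calculation for the two-rater case (Cohen's kappa); the function name and example labels are illustrative, not from the original text.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' labels over the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)

    # Observed agreement: fraction of items the raters label identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    # Expected chance agreement: for each category, the product of the
    # two raters' marginal probabilities of choosing that category.
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)

    # Kappa rescales observed agreement relative to chance:
    # 1 is perfect agreement, 0 is chance level, negative is worse than chance.
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two raters classify six items as "yes"/"no".
a = ["yes", "yes", "no", "no", "yes", "no"]
b = ["yes", "no", "no", "no", "yes", "yes"]
print(cohens_kappa(a, b))
# Percent agreement is 4/6 ~ 0.67, but chance agreement is 0.5 here,
# so kappa is only about 0.33.
```

This illustrates why kappa can be much lower than raw percent agreement: when the raters' marginal distributions make chance agreement likely, kappa discounts that portion of the observed agreement.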