Where:

- Po is the relative observed agreement among the raters.
- Pe is the hypothetical probability of chance agreement.
To calculate these, first construct a contingency table of the raters' classifications, then apply the formula. The value of Kappa ranges from -1 to 1, where:

- 1 indicates perfect agreement,
- 0 indicates agreement no better than chance,
- negative values indicate agreement worse than chance.
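As a sketch of the calculation described above, the following Python function derives Po from the fraction of items the two raters label identically and Pe from each rater's marginal category frequencies (the function name and example labels are illustrative, not from the original text):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical labels of the same items."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Po: relative observed agreement among the raters
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Pe: hypothetical probability of chance agreement, computed from
    # each rater's marginal category frequencies
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    pe = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (po - pe) / (1 - pe)

# Example: two raters classify ten items as "yes" or "no".
a = ["yes", "yes", "no", "yes", "no", "no", "yes", "yes", "no", "yes"]
b = ["yes", "no", "no", "yes", "no", "yes", "yes", "yes", "no", "no"]
print(round(cohens_kappa(a, b), 3))  # Po = 0.7, Pe = 0.5, kappa = 0.4
```

Here the raters agree on 7 of 10 items (Po = 0.7), while their marginal frequencies give a chance-agreement probability of 0.5 (Pe), so Kappa = (0.7 - 0.5) / (1 - 0.5) = 0.4, a value between the "no better than chance" and "perfect agreement" ends of the scale.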