Statistical tests for agreement
The kappa (κ) test is a test of agreement - e.g. between experts, or between sphygmomanometers. The resulting κ value is also known as Cohen's kappa coefficient, named after Jacob Cohen, the original author.
For example, radiologists’ assessment of CXRs:
|Radiologist A||Normal||Benign||Suspected cancer||Cancer|
The actual agreements were 21, 17, 15, and 1: a total of 54 out of 85 films = 0.64 (64%). As a simple percentage, there is 64% agreement.
However, the kappa calculation also takes into account agreements expected by chance. Some see this as a weakness of the test, since it produces a more conservative measure of agreement. The chance agreement is calculated for each category. By way of explanation, radiologist A classified 33 out of 85 films as normal and radiologist B classified 28 out of 85 as normal. By chance alone, the probability of a normal classification by radiologist A is 33/85 = 0.39 and by radiologist B 28/85 = 0.33. The probability of the two agreeing by chance is the product of these two probabilities, i.e. 0.39 × 0.33 = 0.128. This calculation is done for each category and the results are added together at the end.
For ease of calculation, it is simpler to compute this figure as [row total] × [column total] / [grand total], summing the values and dividing by the grand total only at the end. This figure represents the expected frequency of agreement by chance; for the normal category, for example, 10.87 cases are expected to agree by chance.
|Normal||= 33*28/85||= 10.87|
|Benign||= 22*38/85||= 9.84|
|Suspected cancer||= 29*16/85||= 5.46|
|Cancer||= 1*3/85||= 0.04|
26.20/85 = 0.31: agreement by chance would be expected in 31% of the films. The maximum possible agreement is 1, so the radiologists' scope for doing better than chance is 1.00 - 0.31 = 0.69.
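The row total × column total / grand total shortcut above can be sketched in a few lines, using the marginal totals from the worked example:

```python
# Expected chance agreement for the radiologist example above.
# Categories: Normal, Benign, Suspected cancer, Cancer.
row_totals = [33, 22, 29, 1]   # radiologist A's classifications
col_totals = [28, 38, 16, 3]   # radiologist B's classifications
n = 85                         # grand total of films

# Expected frequency of chance agreement per category:
# [row total] * [column total] / [grand total]
expected = [r * c / n for r, c in zip(row_totals, col_totals)]
print([round(e, 2) for e in expected])  # [10.87, 9.84, 5.46, 0.04]

# Expected proportion of agreement by chance
pe = sum(expected) / n
print(round(pe, 2))  # 0.31
```

This reproduces the per-category expected frequencies in the table above and their sum, 26.20/85 = 0.31.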
For more observers, the process is repeated for each observer and category.
In mathematical terms, if there are n observations in g categories, then the observed proportional agreement is given by
- po = (Σ fii) / n (where fii = the number of agreements for the ith category)
The expected proportion of agreements by chance is given by:
- pe = Σ (ri × ci) / n² (where ri and ci are the row and column totals for the ith category)
The index of agreement, kappa, is given by:
- κ = (po-pe)/(1-pe)
- where po = the observed proportion of agreement and pe = the proportion of agreement expected by chance.
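Putting the worked numbers into this formula, as a minimal sketch:

```python
# Cohen's kappa for the worked radiologist example:
# 54 observed agreements out of 85 films, and an expected
# chance agreement of 26.20 films.
n = 85
po = 54 / n      # observed proportion of agreement
pe = 26.20 / n   # proportion of agreement expected by chance

kappa = (po - pe) / (1 - pe)
print(round(kappa, 2))  # 0.47
```

So the radiologists achieve a kappa of about 0.47: roughly half of their available scope for doing better than chance.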
Interpretation of kappa:
Agreement is generally split into categories. Of note, the cut-offs are arbitrary with no particular evidence behind them.
|Value of κ||Strength of agreement|
|< 0.20||Poor|
|0.21 - 0.40||Fair|
|0.41 - 0.60||Moderate|
|0.61 - 0.80||Good|
|0.81 - 1.00||Very good/Almost perfect|
Confidence intervals and standard error for kappa
Confidence intervals and the standard error can be calculated for κ, but their use is limited.
The approximate standard error of κ is:
- SE(κ) = √[po(1 − po) / (n(1 − pe)²)]
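Assuming the common large-sample approximation SE(κ) ≈ √[po(1 − po) / (n(1 − pe)²)], a sketch of the standard error and an approximate 95% confidence interval for the worked example:

```python
import math

# Approximate standard error and 95% CI for kappa in the
# worked radiologist example (a large-sample approximation).
n = 85
po = 54 / n      # observed proportion of agreement
pe = 26.20 / n   # proportion of agreement expected by chance
kappa = (po - pe) / (1 - pe)

se = math.sqrt(po * (1 - po) / (n * (1 - pe) ** 2))
ci_low, ci_high = kappa - 1.96 * se, kappa + 1.96 * se
print(round(se, 3))                            # 0.075
print(round(ci_low, 2), round(ci_high, 2))     # 0.32 0.62
```

The wide interval (roughly 0.32 to 0.62) illustrates why these intervals are of limited use with modest sample sizes.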
Weighted kappa is obtained by giving weights to the frequencies in each cell of the table according to their distance from the diagonal that indicates agreement. This approach is helpful where the categories are in some order. In the example above, although 'suspected cancer' and 'cancer' are likely to result in similar management, a disagreement between them is counted in the same way as a disagreement between 'normal' and 'cancer'; with weighting, the smaller disagreement is penalised less.
There are two ways of weighting: linear and quadratic. Linear weighting is preferable where moving from any category to another counts as an equally significant step, e.g. a difference between categories 1 and 2 is as significant as a difference between categories 3 and 4. The quadratic approach is better where the distinctions between categories at either end are less significant (e.g. clear fail, borderline fail, borderline pass, pass, good pass, excellent pass, distinction).
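The weighting scheme can be sketched as follows. Since the off-diagonal counts of the radiologist table are not given above, the 3 × 3 table in the usage example is hypothetical:

```python
def weighted_kappa(table, weighting="linear"):
    """Weighted kappa for a square g x g agreement table (rows: observer A,
    columns: observer B). Weights are 1 on the diagonal and shrink with
    distance from it, either linearly or quadratically."""
    g = len(table)
    n = sum(sum(row) for row in table)
    row_totals = [sum(row) for row in table]
    col_totals = [sum(table[i][j] for i in range(g)) for j in range(g)]

    def weight(i, j):
        d = abs(i - j) / (g - 1)   # normalised distance from the diagonal
        return 1 - d if weighting == "linear" else 1 - d ** 2

    # Weighted observed and chance-expected proportions of agreement
    po = sum(weight(i, j) * table[i][j]
             for i in range(g) for j in range(g)) / n
    pe = sum(weight(i, j) * row_totals[i] * col_totals[j]
             for i in range(g) for j in range(g)) / n ** 2
    return (po - pe) / (1 - pe)

# Hypothetical 3-category example: all disagreements are one category
# apart, so quadratic weighting penalises them less than linear does.
table = [[10, 2, 0],
         [1, 8, 1],
         [0, 2, 6]]
print(round(weighted_kappa(table, "linear"), 2))     # 0.76
print(round(weighted_kappa(table, "quadratic"), 2))  # 0.84
```

As expected, the quadratic weights give a higher kappa here, because the disagreements all sit next to the diagonal.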
For details see a textbook such as Practical Statistics for Medical Research by Douglas G Altman (published by Chapman & Hall).
- Cohen J. A coefficient of agreement for nominal scales. Educational and Psychological Measurement 1960;20(1):37-46.
- Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics 1977;33(1):159-74.
- Cohen J. Weighted kappa: nominal scale agreement with provision for scaled disagreement or partial credit. Psychological Bulletin 1968;70(4):213-20.