Diagnostic and screening tests

(Editorial comment.) This page and Interpreting test results are very similar. Neither discusses prior and posterior probabilities in the detail it perhaps should, nor explains clearly why, for example, a rare condition needs a much more specific screening test. The two pages could be improved, and perhaps merged.

Two by two table

By convention, the real status (gold standard or reference test) is shown along the top, while the diagnostic test being assessed runs down the side.
Two by two table for diagnostic and screening tests

                       Disease present
                       Yes             No
Test +ve     Yes       a               b
             No        c               d
  • a represents the number of true positives - people who have a positive result and have disease.
  • d represents the number of true negatives - people who have a negative result and do not have disease.
  • b represents the number of false positives - people who have a positive result but do not have disease.
  • c represents the number of false negatives - people who have a negative result but do actually have disease.
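
As a minimal sketch (not part of the original table), the four cells can be tallied from paired gold-standard and test results; the function and example data below are hypothetical.

  # Hypothetical sketch: tally the four cells of the two-by-two table from
  # paired gold-standard and index-test results.
  def two_by_two(gold, test):
      """Return (a, b, c, d) = (true +ve, false +ve, false -ve, true -ve)."""
      a = sum(1 for g, t in zip(gold, test) if g and t)          # disease, test +ve
      b = sum(1 for g, t in zip(gold, test) if not g and t)      # no disease, test +ve
      c = sum(1 for g, t in zip(gold, test) if g and not t)      # disease, test -ve
      d = sum(1 for g, t in zip(gold, test) if not g and not t)  # no disease, test -ve
      return a, b, c, d

  gold = [True, True, True, False, False, False]   # made-up reference results
  test = [True, True, False, True, False, False]   # made-up index-test results
  print(two_by_two(gold, test))                    # (2, 1, 1, 2)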

Sensitivity

= True positive rate (the proportion of those with the disease who test positive)
= Number of true positives detected / Total number with the disease
= Chance of a +ve test, given disease is present
= a / (a + c)

Answers the question “how good is this test at picking up people who have the condition?”
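
As a quick sketch, using the counts from the worked example further down (a = 6, c = 21):

  # Sensitivity = a / (a + c); counts taken from the worked example below.
  def sensitivity(a, c):
      return a / (a + c)

  print(sensitivity(6, 21))  # 0.2222... - 6 of the 27 people with disease are detected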

Specificity

= True negative rate (the proportion of those without the disease who test negative)
= Number of true negatives detected / Total number without the disease
= Chance of a -ve test, given no disease
= d / (b + d)

Answers the question “how good is this test at correctly excluding people without the condition?”
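
A matching sketch, again borrowing the worked example's counts (b = 7, d = 966):

  # Specificity = d / (b + d); counts taken from the worked example below.
  def specificity(b, d):
      return d / (b + d)

  print(specificity(7, 966))  # 0.9928... - 966 of the 973 disease-free people test negative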

Positive predictive value

= chances of having the disease, given that the test is +ve (the post-test probability of disease after a positive test)
= No with +ve test and disease / all people with +ve test
= a / (a + b)

Answers the question “if a person tests positive, what is the probability that he or she has the condition?”

For a common condition, the positive predictive value may be high even with a relatively insensitive test. For a rare condition, however, where the prior probability of disease is very low, even a test with high sensitivity and specificity may have a relatively low positive predictive value and yield many false positive results.
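
The sketch below illustrates this point with assumed figures (90% sensitivity, 95% specificity, neither taken from this page): as prevalence falls, the positive predictive value collapses even though the test itself is unchanged.

  # Illustrative only: PPV as a function of prevalence for a fixed, fairly good test.
  def ppv(sens, spec, prevalence):
      true_pos = sens * prevalence                 # P(test +ve and disease)
      false_pos = (1 - spec) * (1 - prevalence)    # P(test +ve and no disease)
      return true_pos / (true_pos + false_pos)

  for prev in (0.5, 0.1, 0.01, 0.001):
      print(f"prevalence {prev:>6.1%}: PPV = {ppv(0.90, 0.95, prev):.1%}")
  # prevalence  50.0%: PPV = 94.7%
  # prevalence  10.0%: PPV = 66.7%
  # prevalence   1.0%: PPV = 15.4%
  # prevalence   0.1%: PPV = 1.8%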

Negative predictive value

= chances of not having the disease, given that the test is -ve (the post-test probability of no disease after a negative test)
= No with -ve test and no disease / all people with -ve test
= d / (c + d)

Answers the question “if a person tests negative, what is the probability that he or she does not have the condition?”
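
A sketch with the worked example's counts (c = 21, d = 966):

  # Negative predictive value = d / (c + d); counts from the worked example below.
  def npv(c, d):
      return d / (c + d)

  print(npv(21, 966))  # 0.9787... - 966 of the 987 negative tests are truly disease-free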

Accuracy

= True positives and true negatives of a test as a proportion of all results.
= (a + d) / (a + b + c + d)

Answers the question “what proportion of all tests have given the correct result?”
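
A sketch with the worked example's counts:

  # Accuracy = (a + d) / (a + b + c + d); counts from the worked example below.
  def accuracy(a, b, c, d):
      return (a + d) / (a + b + c + d)

  print(accuracy(6, 7, 21, 966))  # 0.972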

Likelihood ratio of a positive result

= Sensitivity / (1 - Specificity)

Answers the question “How much more likely is a positive test to be found in a person with the condition than in a person without it?” (Likelihood ratios are now widely regarded as the most useful single summary of a test's performance.)

If the prior (pre-test) probability of disease is known for an individual patient, then the posterior (post-test) probability of disease can be calculated using the likelihood ratio nomogram described by Sackett and colleagues.

A test may not prove the presence or absence of disease, but it can give a more accurate estimate of the probability that disease is present or absent.
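
The arithmetic behind the nomogram is simple enough to sketch directly; the 10% pre-test probability and likelihood ratio of 8 below are purely illustrative.

  # Odds form of Bayes' theorem: convert the pre-test probability to odds,
  # multiply by the likelihood ratio, then convert back to a probability.
  def post_test_probability(pre_test_prob, likelihood_ratio):
      pre_odds = pre_test_prob / (1 - pre_test_prob)
      post_odds = pre_odds * likelihood_ratio
      return post_odds / (1 + post_odds)

  print(post_test_probability(0.10, 8))  # ~0.47: a positive result raises 10% to about 47%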

Likelihood ratio of a negative result

= (1 - Sensitivity) / Specificity

Answers the question “How much more (or less) likely is a negative test to be found in a person with the condition than in a person without it?” A value well below 1 means that a negative result substantially reduces the probability of disease.
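
Both likelihood ratios can be computed directly from sensitivity and specificity; the figures below are the rounded ones from the worked example further down.

  # LR+ = sens / (1 - spec); LR- = (1 - sens) / spec.
  def lr_positive(sens, spec):
      return sens / (1 - spec)

  def lr_negative(sens, spec):
      return (1 - sens) / spec

  print(lr_positive(0.222, 0.993))  # ~31.7 (about 32)
  print(lr_negative(0.222, 0.993))  # ~0.78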

Questions to ask yourself

Greenhalgh suggests asking yourself the following questions when reading a paper about a diagnostic or screening test:[1]

  • Is this test potentially relevant to my practice?
  • Has the test been compared with a true gold standard?
  • Did the validation study include an appropriate spectrum of subjects? (This affects PPV, NPV and likelihood ratios, even if not sensitivity and specificity.)
  • Has workup bias been avoided? (In some studies only those positive on study test are also given gold standard test.)
  • Has expectation bias been avoided? (i.e. was the person interpreting the test blinded – e.g. if looking at ECG, does the person interpreting it know the patient’s history?)
  • Was the test shown to be reproducible?
  • What are the features of the test as derived from this validation study?
  • Were confidence intervals given?
  • Has a sensible “normal range” been derived?
  • Has this test been placed in the context of other potential tests in the diagnostic sequence?

Worked example

Result of glucose tolerance test (gold standard)
Result of urine test for glucose    Diabetes positive (n=27)    Diabetes negative (n=973)
Glucose present (n=13)              True positive (n=6)         False positive (n=7)
Glucose absent (n=987)              False negative (n=21)       True negative (n=966)

Features of worked example

Feature             Formula                            Data (from worked example)        Value
Sensitivity         a / (a + c)                        6/27                              22.2%
Specificity         d / (b + d)                        966/973                           99.3%
PPV                 a / (a + b)                        6/13                              46.2%
NPV                 d / (c + d)                        966/987                           97.9%
Accuracy            (a + d) / (a + b + c + d)          972/1000                          97.2%
Likelihood ratios
  Positive test     Sensitivity / (1 - Specificity)    22.2 / (100 - 99.3) = 22.2/0.7    32
  Negative test     (1 - Sensitivity) / Specificity    (100 - 22.2) / 99.3 = 77.8/99.3   0.78
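
As a cross-check, the whole table can be reproduced from the four counts (a sketch, not part of the original page):

  # Reproduce the summary statistics from the worked example's counts.
  a, b, c, d = 6, 7, 21, 966   # true +ve, false +ve, false -ve, true -ve

  sens = a / (a + c)
  spec = d / (b + d)
  print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")       # 22.2%, 99.3%
  print(f"PPV {a / (a + b):.1%}, NPV {d / (c + d):.1%}")         # 46.2%, 97.9%
  print(f"accuracy {(a + d) / (a + b + c + d):.1%}")             # 97.2%
  print(f"LR+ {sens / (1 - spec):.1f}, LR- {(1 - sens) / spec:.2f}")
  # LR+ comes out at 30.9 with unrounded figures; the 32 in the table reflects
  # the rounded percentages (22.2 / 0.7). LR- is 0.78 either way.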

If an individual patient has a fairly high chance of having diabetes on clinical grounds (history, age, obesity and so on), say 40%, then a positive test almost certainly confirms the diagnosis (post-test probability of about 96% from the nomogram); whereas a negative test makes the diagnosis only a little less likely (down to about 34%).
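
The same nomogram reading can be checked with the odds form of Bayes' theorem (a sketch using the likelihood ratios from the table above):

  # Prior of 40%, updated by LR+ = 32 and LR- = 0.78 respectively.
  def update(prior, lr):
      odds = prior / (1 - prior) * lr
      return odds / (1 + odds)

  print(update(0.40, 32))    # ~0.96: a positive result all but confirms the diagnosis
  print(update(0.40, 0.78))  # ~0.34: a negative result lowers the probability only modestly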

See also

Disease and exposure.

References