Diagnostic test limitations

In my day job, especially on Fridays, I get asked about the reliability and accuracy of diagnostic tests. Part of my work involves serology, or immunoassay: interpreting results and advising referring medical practitioners on what the results in front of them mean.

Diagnostic tests are never perfect. False positive and false negative results occur. How much of a problem these false results may cause depends on the clinical context in which a test is used. This underlines the importance of the clinical or medical context in laboratory medicine, especially for pathologists whose job is to bridge the gap between the patient and the test tube. One of the most frustrating things for pathologists and medical laboratory scientists is the absence of relevant clinical information on referral forms.

In pathology, truth can be defined as the presence or absence of a disease. The aim of the test is to determine whether the disease being investigated is present or absent. Unlike school tests, which are pass or fail, diagnostic tests are positive, negative or equivocal. In the context of serology, however, we use the terms reactive, nonreactive and equivocal. I mention passing and failing because I’ve read comments online from patients who misunderstand the pathology results they’ve been given and describe them in terms of passing and failing.

                              Disease present    Disease absent
Test positive (reactive)      True positive      False positive
Test negative (nonreactive)   False negative     True negative

A chemical or agglutination reaction is either observed or it is not. “Reactive” is intended to remind people that serology is indirect and depends on the host’s immune response, not just on the analytical uncertainty of the measurement. There are many known unknown variables.

A few serological results are reported as a “level” (e.g., anti-HBs or rubella IgG), and these merit an interpretive comment. They come from tightly validated enzyme-linked immunosorbent assays (ELISA) run with calibrators. Titrated tests, like complement fixation tests (CFT) and agglutination tests, give numbers which have limited meaning on their own; diagnosis relies on demonstrating seroconversion or a four-fold rise in titre over time.

The term “positive” is often used to denote a patient’s nominal status for a condition such as a chronic infection, for example, “HIV positive” or “CMV positive” (in transplant donors and recipients), and this is usually deduced from several test results, not just a single sample or technique. Likewise, “immune status” is sometimes reported, especially for immunisation-preventable diseases. This is a bit elastic in special cases but gives a fair guide to whether immunisation is required.

A test result can be reactive or nonreactive (and in some cases equivocal); a reactive result may be nothing more than a nonspecific reaction. To call it ‘positive’ for that marker implies it is a true, and therefore a meaningful, positive. IgM results in particular can be nonspecific or cross-reacting (sometimes due to polyclonal activation) and should not automatically be interpreted as indicating acute infection when they are reactive. When there is limited relevant clinical information in the patient referral, the pathologist or medical laboratory scientist can only report the result in a literal sense.

An interpretative comment is possible when there are relevant clinical details or other relevant pathology results.

Sensitivity and specificity

Sensitivity and specificity are characteristics of the test, while predictive values depend on the prevalence of the disease in the population being tested.

Often sensitivity and specificity of a test are inversely related.

Sensitivity = the ability of a test to detect those with the disease (true positives). Sensitivity = True positives/[True positives + False negatives]

Specificity = the ability of a test to correctly identify those without the disease (true negatives). Specificity = True negatives/[True negatives + False positives]

Predictive Values

Predictive values are of importance when a positive result does not automatically mean the presence of disease. Unlike sensitivity and specificity, the predictive value varies with the prevalence of the disease within the population. Even with a highly specific test, if the disease is uncommon among those tested, a large proportion of the positive results will be false positives and the positive predictive value will be low.

Positive predictive value = the proportion of positive tests that are true positives and represent the presence of disease. PPV = True positives/[True positives + False positives]

Negative predictive value = the proportion of negative tests that are true negatives and represent the absence of disease. NPV = True negatives/[True negatives + False negatives]
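
To make these definitions concrete, here is a minimal Python sketch (the function and variable names are my own, illustrative choices) that calculates sensitivity, specificity, PPV and NPV from the counts in a two-by-two table like the one above.

    # Minimal sketch: the four measures from the counts in a 2x2 table.
    # tp = true positives, fp = false positives,
    # fn = false negatives, tn = true negatives.

    def sensitivity(tp, fn):
        return tp / (tp + fn)

    def specificity(tn, fp):
        return tn / (tn + fp)

    def ppv(tp, fp):
        return tp / (tp + fp)

    def npv(tn, fn):
        return tn / (tn + fn)

    # Using the 2x2 counts from the worked example below:
    # sensitivity(90, 10) -> 0.9, specificity(810, 90) -> 0.9,
    # ppv(90, 90) -> 0.5, npv(810, 10) -> about 0.988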

A test with 90% sensitivity and specificity and a disease with 10% prevalence

                Patients with disease    Patients without disease    All patients
Positive test   90                       90                          180
Negative test   10                       810                         820
Totals          100                      900                         1000

PPV = 90/[90 + 90] = 90/180 = 50% and NPV = 810/[810 + 10] = 810/820 = 98.8%, which means 50% of the positive test results will be false positive results.

A test with 90% sensitivity and specificity and a disease with 1% prevalence

                Patients with disease    Patients without disease    All patients
Positive test   9                        99                          108
Negative test   1                        891                         892
Totals          10                       990                         1000

PPV = 9/[9 + 99] = 9/108 = 8.3% and NPV = 891/[891 + 1] = 891/892 = 99.9%, so 91.7% of positive results will be false positive results.

PPV and NPV for a test with 90% sensitivity and specificity.

Prevalence    PPV    NPV
1%            8%     >99%
10%           50%    99%
20%           69%    97%
50%           90%    90%
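
The table above can be reproduced with a short sketch like the one below, assuming a notional cohort of 1,000 patients; the function name and layout are my own, not from any particular library.

    # Sketch: PPV and NPV for a test with 90% sensitivity and specificity
    # across a range of disease prevalences, using a notional cohort of 1000.

    def predictive_values(prevalence, sens=0.9, spec=0.9, n=1000):
        diseased = n * prevalence
        healthy = n - diseased
        tp = diseased * sens           # true positives
        fn = diseased - tp             # false negatives
        tn = healthy * spec            # true negatives
        fp = healthy - tn              # false positives
        return tp / (tp + fp), tn / (tn + fn)

    for prevalence in (0.01, 0.10, 0.20, 0.50):
        ppv, npv = predictive_values(prevalence)
        print(f"prevalence {prevalence:.0%}: PPV {ppv:.1%}, NPV {npv:.1%}")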

If the test is applied when the proportion of people who truly have the disease is high, then the PPV improves.

Conversely, even a highly specific (and highly sensitive) test will produce a large number of false positive results if the prevalence of disease is low.


Sensitivity and specificity are intrinsic attributes of the test being evaluated (given similar patient and specimen characteristics), and are independent of the prevalence of disease in the population being tested.

Positive and negative predictive values are highly dependent on the population prevalence of the disease.

How can we use this to predict the presence or absence of disease in our patients?

We need to understand how the diagnostic test result converts the pretest probability of disease into the posttest probability of disease.

Likelihood ratios

The degree to which a test result modifies your pre-test probability of disease is expressed by the “likelihood ratio”.

The positive likelihood ratio is the chance of a positive test result in people with the disease, divided by the chance of a positive test result in people without the disease.

The negative likelihood ratio is the chance of a negative test result in people with the disease, divided by the chance of a negative test result in people without the disease.
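
In terms of the measures above, the positive likelihood ratio is sensitivity/(1 − specificity) and the negative likelihood ratio is (1 − sensitivity)/specificity. A minimal sketch of how a likelihood ratio carries a pretest probability through to a posttest probability (via odds) might look like this; the function names are my own, illustrative choices.

    # Sketch: moving from pretest to posttest probability with a likelihood ratio.
    # Posttest odds = pretest odds x likelihood ratio.

    def positive_lr(sensitivity, specificity):
        return sensitivity / (1 - specificity)

    def negative_lr(sensitivity, specificity):
        return (1 - sensitivity) / specificity

    def posttest_probability(pretest_probability, likelihood_ratio):
        pretest_odds = pretest_probability / (1 - pretest_probability)
        posttest_odds = pretest_odds * likelihood_ratio
        return posttest_odds / (1 + posttest_odds)

    # A test with 90% sensitivity and specificity has LR+ = 9 and LR- = 0.11.
    # A reactive result at a 10% pretest probability gives:
    print(posttest_probability(0.10, positive_lr(0.9, 0.9)))   # about 0.50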

Intuitive Assessments

Experienced clinicians may disagree on the interpretation of a diagnostic test result.

This reasoning “makes explicit” the reasons for such disagreement:

  • Differing estimates of pretest probability?
  • Differing estimates of test performance?
  • Differing willingness to tolerate uncertainty?

Selecting the optimal balance of sensitivity and specificity depends on the purpose for which the test is going to be used. A screening test should be highly sensitive and a confirmatory test should be highly specific. In practice, a given test is usually relied on for either its sensitivity or its specificity, not both.

What is the test for?

Tests with high sensitivity are used to RULE OUT disease: a negative result makes the disease unlikely. Tests with high specificity are used to RULE IN disease: a positive result makes the disease likely.

Series testing

You can use the post-test probability of one test as the pre-test probability of the next test – testing in series. Diagnostic tests performed in series, or in sequence, allow an orderly progression up or down the probability tree until you are happy with the diagnostic decision. Specificity is increased but sensitivity falls.
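
As a rough illustration of that chaining (reusing the pretest-to-posttest conversion sketched earlier, and assuming the two tests are independent), a positive result on a second test simply starts from the posttest probability left by the first.

    # Sketch: series testing as repeated application of positive likelihood ratios,
    # assuming the two tests are independent.

    def posttest_probability(pretest_probability, likelihood_ratio):
        pretest_odds = pretest_probability / (1 - pretest_probability)
        posttest_odds = pretest_odds * likelihood_ratio
        return posttest_odds / (1 + posttest_odds)

    pretest = 0.10        # 10% pretest probability of disease
    lr_test_a = 9.0       # LR+ for test A (90% sensitivity and specificity)
    lr_test_b = 9.0       # LR+ for test B (90% sensitivity and specificity)

    after_a = posttest_probability(pretest, lr_test_a)    # about 0.50
    after_b = posttest_probability(after_a, lr_test_b)    # about 0.90
    print(after_a, after_b)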

Parallel testing

Often a battery of tests is requested at the same time – testing in parallel. Sensitivity is increased because a diagnosis is made when either test is positive. The trade-off is a higher number of false positives because specificity is reduced.

Series or Parallel

                     Sensitivity    Specificity
A                    0.8            0.6
B                    0.9            0.9
A and B (series)     0.72           0.96
A or B (parallel)    0.98           0.54
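
The combined figures in the table follow from the usual assumption that the two tests are independent: in series both tests must be positive, in parallel either one may be. A small sketch with my own illustrative function names:

    # Sketch: combined sensitivity and specificity for two independent tests.

    def series(sens_a, spec_a, sens_b, spec_b):
        # Both tests must be positive to call the combined result positive.
        return sens_a * sens_b, 1 - (1 - spec_a) * (1 - spec_b)

    def parallel(sens_a, spec_a, sens_b, spec_b):
        # Either test being positive is enough to call the combined result positive.
        return 1 - (1 - sens_a) * (1 - sens_b), spec_a * spec_b

    print(series(0.8, 0.6, 0.9, 0.9))    # about (0.72, 0.96)
    print(parallel(0.8, 0.6, 0.9, 0.9))  # about (0.98, 0.54)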

Sensitivity Bias

Diagnostic tests are often studied in populations different from those to whom they are applied. If the study population is very “sick” the sensitivity may be higher than when the test is applied to a more “general” population, particularly when there is diagnostic uncertainty.

Specificity Bias

Specificity may be higher in a “healthy” population (one with a low pretest probability of disease). When the test is used in patients who are “sicker” (and for whom there is more diagnostic uncertainty), more false positive results are likely – specificity bias.

So what does this mean?

It means no test is perfect. It means the referring medical practitioner and the pathologist need to be aware of contextual factors like disease prevalence and other factors that influence pretest probability. Because no test is perfect, and because variables such as interpretation and disease prevalence influence every result, each result should be considered carefully for what it means for the individual patient.


What does the word speciate mean?

Check out Words that peeve me at http://drgarylum.com/words-that-peeve-me/

Speciate and identify

In the clinical microbiology laboratory I hear people say they’re going to speciate a bacterium when they mean they’re going to characterise it to determine the bacterium’s identity. Speciation is the formation of a new and distinct species in the course of evolution. The word they want is identify.

Reactive versus positive serology

I was trying to explain to a public health physician why I prefer reactive rather than positive for serology results.

The problem with assays used to detect antibodies to antigens from pathogens is that the antibodies detected can be cross-reacting, and merely detecting them doesn’t always mean the patient has the infection. This is especially true for assays looking for IgM antibodies. This is why serologists tend to prefer reporting results as reactive or non-reactive rather than positive or negative. While the result may be positive, it doesn’t mean the patient positively has the disease in question.

It’s an important distinction and one I should never forget.

Sticky tape as a diagnostic device

On Friday at the hospital I was asked to see a patient who had a referral to the practice for a sticky tape test. The specimen collection team weren’t familiar with the test.

Rather than go into details on the interaction I had on Friday I thought I’d let you know about how a humble piece of sticky (or scotch) tape can help make a diagnosis.


Enterobius vermicularis is better known as the pinworm. It causes enterobiasis, or pinworm infestation.

Pinworm usually affects children but can cause illness in adults especially institutionalised adults.

The disease manifests as itchiness around the anus and causes sleeplessness and restlessness.

The worm is transmitted via the faecal-oral route. Pinworm eggs are deposited on the skin around the anus and are transferred to others, especially those who like to rim, but more commonly as a result of poor toilet hygiene. It’s easy to understand why this is a disease common in childhood and why families become infested easily.


Worms emerge at the anal verge a few hours after the patient falls asleep. To make a diagnosis it’s best to collect the worms and eggs as soon as the patient wakes up, before any bathing or bowel movement. On waking, a short strip of sticky tape is pressed against the skin close to the patient’s anus and then stuck onto a glass slide. The slide can be sent to a pathology laboratory where it will be stained and examined using a light microscope.

Pinworm infestation gets treated with over-the-counter worming medications.



Raw milk and why I’m not sure it’s worth the risk

A couple of weeks ago I wrote a piece on Yummy Lummy about a meal Bron and I shared at Jamie’s Italian Canberra. In the middle of the post I ranted a little about the use of raw milk in some cheese, viz., bocconcini. As a clinical microbiologist with an interest in food, food safety and public health, I’ve always appreciated the balance between the tastes and desires of food advocates and the need for safety. In that regard, Australia excels when it comes to using risk management principles to minimise harm to Australians. On balance, and with some exceptions, raw milk is not permitted for commercial use in Australia. You can find the regulations at FSANZ.

A couple of days ago ProMED-mail posted a piece on unpasteurised milk in the USA. The ProMED-mail piece provides food for thought and some great references to excellent articles. To be fair, these articles relate to the USA and as I’ve stated, in Australia our regulatory process is very good.

For my money, I’d prefer pasteurised milk to be used, at the small cost of some loss of flavour. To the raw milk devotees: I’m sorry, but while your enthusiasm is great, in the interest of the greater public good I think Australia should continue to tightly regulate dairy products.

Dangers of Raw Milk: Minnesota Study Documents 1 in 6 Become Ill
Nonpasteurized Dairy Products, Disease Outbreaks, and State Laws—United States, 1993–2006
Raw or heated cow milk consumption: Review of risks and benefits

What do you think when you see CBR?

When anyone says CBR or whenever I read CBR two things come to mind.

Chemical Biological Radiological and the code for Canberra airport.

I’ve been involved in Chemical Biological Radiological related work since about 2000. It’s one of the main things I work on in my current role. I’ve had an interest in biological warfare and terrorism, that is, in countermeasures and response planning, since even before 2000. As a young microbiologist, before studying medicine, I was fascinated by the use of microorganisms as weapons.

So when I read or hear CBR my mind turns to

Clinically I think of

Having lived in Canberra since 2007, I’ve come to know the Canberra airport very well, and its code is CBR.

So what do you think I think of when I see this?

It’s even being used as wallpaper on ACT Government PC monitors.

I don’t think Confident Bold Ready. When I see the bright yellow background and the stylised letters it sends my mind back to the 1970s and 1980s to this

So rather than make me think of Canberra as confident bold ready, the logo makes me look to the past. Apparently the ACT Government has budgeted $2.6 million for this work. I’m expecting Molly Meldrum to materialise and start spruiking Canberra.

For me, CBR is always going to mean Chemical Biological Radiological or Canberra airport.