
Results of Highly Cited Biomarker-disease Associations Often Overestimated

Although new biomarkers are regularly proposed as potential determinants of disease risk, prognosis or response to treatment, many are evaluated in only one or a few studies. A study in the June 1 issue of JAMA reports that highly cited biomarker-disease associations appearing in major journals are often substantially overestimated, with effect sizes exceeding those found when the same associations are evaluated in larger studies.

To determine whether the magnitude of the effect sizes of biomarkers proposed in highly cited studies is accurate or overestimated, researchers at Stanford University School of Medicine searched ISI Web of Science and MEDLINE until December 2010 for biomarker studies that had a relative risk presented in their abstract, received more than 400 citations in the ISI Web of Science, and were published in any of 24 highly cited biomedical journals. MEDLINE was searched for subsequent meta-analyses on the same associations (same biomarker and same outcome).

The researchers focused on disease/outcome, biomarker under study, and first reported relative risk in the abstract. From each meta-analysis, they extracted the overall relative risk and the relative risk in the largest study. Data extraction was performed independently by two investigators.

In total, 35 highly cited associations were evaluated. Most of the highly cited studies (86%) reported a stronger effect estimate than the largest study investigating the same association. In only 2 cases was the effect size estimate stronger in the largest study than in the original, highly cited one.

For 29 of the 35 (83%) highly cited studies, the corresponding meta-analysis for the same association found a smaller effect estimate. Only 15 of the associations were statistically significant based on the largest studies, and of those only 7 had a relative risk point estimate greater than 1.37.
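To make the comparison above concrete, here is an illustrative sketch (with hypothetical numbers, not data from the study) of how a relative risk point estimate is computed from a 2×2 table, and why a small original study can show a larger effect than a bigger follow-up on the same association:

```python
def relative_risk(events_exposed, n_exposed, events_unexposed, n_unexposed):
    """RR = event risk in the biomarker-positive group divided by
    event risk in the biomarker-negative group."""
    risk_exposed = events_exposed / n_exposed
    risk_unexposed = events_unexposed / n_unexposed
    return risk_exposed / risk_unexposed

# Hypothetical numbers: a small original study vs. a larger follow-up study.
rr_original = relative_risk(30, 100, 20, 100)       # approx. 1.5
rr_largest = relative_risk(240, 1000, 200, 1000)    # approx. 1.2

# The pattern the researchers found: the original, highly cited
# estimate exceeds the estimate from the largest subsequent study.
print(f"original RR ≈ {rr_original:.2f}, largest-study RR ≈ {rr_largest:.2f}")
print(rr_original > rr_largest)
```

The 1.37 threshold mentioned above is simply a cutoff on this same point estimate; most of the associations that held up in the largest studies fell below it, i.e. the biomarker raised risk by less than 37%.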

The authors conclude that the study results “should lead to reinforcing healthy skepticism about interpreting this literature” and that “the standards for claiming success should be higher.” Such standards should include prospective design, a careful analysis plan, meticulous reporting, extensive replication and validation of proposed biomarkers in large independent studies, and assessment of their incremental predictive ability.

According to John Ioannidis, MD, DSc, chief of the Stanford Prevention Research Center and lead author of the study:

Researchers tend to play with their data sets, and to analyze them in creative ways. We’re certainly not pointing out any one investigator with this study; it’s just the societal norm of science to operate in that fashion. But we need to follow the scientific method through to the end and demand replication and verification of results before accepting them as fact. We have to learn to trust the bigger picture. And it’s better to demand this proof upfront rather than waiting for it to happen on a case-by-case basis. It is vitally important to validate original published findings with subsequent large-scale evidence to make progress in the field of biomarkers and risk association.

Study: Comparison of Effect Sizes Associated With Biomarkers Reported in Highly Cited Individual Articles and in Subsequent Meta-analyses

PubMed: View abstract

Source: Stanford School of Medicine