U.S. Department of Health and Human Services

Scientific Publications by FDA Staff


Entry Details

BMC Med Res Methodol 2013 Jul 29;13:98

On the assessment of the added value of new predictive biomarkers.

Chen W, Samuelson FW, Gallas BD, Kang L, Sahiner B, Petrick N

Abstract

BACKGROUND: The surge in biomarker development calls for research on statistical evaluation methodology to rigorously assess emerging biomarkers and classification models. Recently, several authors reported the puzzling observation that, in assessing the added value of new biomarkers to existing ones in a logistic regression model, statistical significance of new predictor variables does not necessarily translate into a statistically significant increase in the area under the ROC curve (AUC). Vickers et al. concluded that this inconsistency is because AUC "has vastly inferior statistical properties," i.e., it is extremely conservative. This statement is based on simulations that misuse the DeLong et al. method. Our purpose is to provide a fair comparison of the likelihood ratio (LR) test and the Wald test versus diagnostic accuracy (AUC) tests.

DISCUSSION: We present a test to compare ideal AUCs of nested linear discriminant functions via an F test. We compare it with the LR test and the Wald test for the logistic regression model. The null hypotheses of these three tests are equivalent; however, the F test is an exact test, whereas the LR test and the Wald test are asymptotic tests. Our simulation shows that the F test has the nominal type I error even with a small sample size. Our results also indicate that the LR test and the Wald test have inflated type I errors when the sample size is small, while the type I error converges to the nominal value asymptotically with increasing sample size, as expected. We further show that the DeLong et al. method tests a different hypothesis and has the nominal type I error when it is used within its designed scope. Finally, we summarize the pros and cons of all four methods we consider in this paper.

SUMMARY: We show that there is nothing inherently less powerful or disagreeable about ROC analysis for showing the usefulness of new biomarkers or characterizing the performance of classification models. Each statistical method for assessing biomarkers and classification models has its own strengths and weaknesses. Investigators need to choose methods based on the assessment purpose, the biomarker development phase at which the assessment is being performed, the available patient data, and the validity of assumptions behind the methodologies.
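For context on the quantity at the center of this abstract: the empirical AUC of a biomarker is the Mann-Whitney U statistic normalized by the number of positive-negative pairs, i.e., the fraction of such pairs the marker ranks correctly (ties counted as half). A minimal pure-Python sketch of that definition (the function name `empirical_auc` is illustrative, not from the paper):

```python
def empirical_auc(pos, neg):
    """Empirical AUC: fraction of (positive, negative) score pairs in which
    the positive case outscores the negative one, with ties counted as 0.5.
    Equivalent to the Mann-Whitney U statistic divided by len(pos)*len(neg)."""
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Perfectly separated scores give AUC = 1.0; identical score
# distributions give the chance value 0.5.
print(empirical_auc([3, 4, 5], [0, 1, 2]))  # -> 1.0
print(empirical_auc([0, 1], [0, 1]))        # -> 0.5
```

Tests of the added value of a new biomarker compare this quantity (or its ideal-observer counterpart) between nested models, which is where the F test, LR test, Wald test, and DeLong et al. method discussed above come in.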


Category: Journal Article
PubMed ID: 23895587  DOI: 10.1186/1471-2288-13-98
Includes FDA Authors from Scientific Area(s): Medical Devices
Entry Created: 2013-08-02 Entry Last Modified: 2014-11-18