Alternative Method of Estimating False Rate of Diagnostic Screening Test for a Condition in a Population
Authors
Department of Statistics, Faculty of Physical Sciences, Nnamdi Azikiwe University, Awka (Nigeria)
Department of Statistics, Faculty of Physical Sciences, Nnamdi Azikiwe University, Awka (Nigeria)
Article Information
DOI: 10.47772/IJRISS.2026.100300594
Subject Category: Statistics
Volume/Issue: 10/3 | Page No: 8183-8199
Publication Timeline
Submitted: 2026-03-26
Accepted: 2026-03-31
Published: 2026-04-21
Abstract
Diagnostic screening tests are essential tools in clinical medicine and epidemiology for detecting the presence or absence of a disease condition. Their quality is conventionally assessed using Sensitivity (Se), Specificity (Sp), False Positive Rate (FPR), False Negative Rate (FNR), True Positive Rate (TPR), and True Negative Rate (TNR). A critical and often overlooked distinction is that Se and Sp are conditional probabilities given the true disease state, whereas FPR, FNR, TPR, and TNR as used in practice are conditional probabilities given the observed test result. Standard estimation of the latter group requires prior knowledge of the population prevalence rate, data that are frequently unavailable, particularly in developing nations.
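The prevalence dependence described above can be made concrete with a short sketch (not the paper's code): under the traditional Bayesian route, the test-result-conditioned FPR, P(no disease | positive test), is obtained from Se, Sp, and an assumed known prevalence p via Bayes' theorem. The Se and Sp values below are arbitrary illustrative numbers, not from the paper.

```python
# Sketch of the traditional, prevalence-dependent route: computing
# FPR = P(no disease | test positive) from Se, Sp, and prevalence p
# via Bayes' theorem. Se and Sp here are illustrative values only.

def fpr_given_positive(se: float, sp: float, p: float) -> float:
    """P(no disease | positive test) = false-positive mass / total positive mass."""
    tp_mass = se * p              # P(test+ and diseased)
    fp_mass = (1 - sp) * (1 - p)  # P(test+ and not diseased)
    return fp_mass / (tp_mass + fp_mass)

# The same test (Se = 0.90, Sp = 0.95) yields very different FPRs
# as prevalence changes -- hence the need to know p in advance:
for p in (0.01, 0.10, 0.50):
    print(f"prevalence {p:.2f}: FPR = {fpr_given_positive(0.90, 0.95, p):.4f}")
```

At 1% prevalence most positives are false, while at 50% prevalence few are, which is exactly why the traditional estimator cannot be used when the population prevalence is unknown.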
This paper proposes, develops, and illustrates a novel statistical method for estimating all of the above indices using only directly observable cell frequencies from a 2×2 contingency table of screening results, without requiring the population prevalence rate. The method introduces a concordance index ω that measures the net relative difference between concordant and discordant test outcomes and derives closed-form estimators with established theoretical properties, including exact expressions for their asymptotic standard errors and 95% confidence intervals via the delta method. A simulation study across varying sample sizes (n = 50, 100, 200, 500) and prevalence levels confirms that the estimators are nearly unbiased, converge rapidly, and maintain nominal confidence interval coverage. Applied to a real prostate cancer screening dataset (n = 135), the method yields Se = 33.33%, Sp = 97.44%, FPR = 33.33%, TPR = 66.67%, FNR = 9.52%, and TNR = 90.48%. Comparison with the traditional prevalence-dependent Bayesian method confirms the practical superiority of the proposed approach in low-prevalence and data-scarce settings, while also clarifying the scenarios in which the Bayesian approach remains indispensable.
Keywords
Diagnostic screening test, sensitivity, specificity, false positive rate