Artificial intelligence for enhanced diagnostic precision of prostate cancer

Authors

  • Agus Rizal Ardy Hariandy Hamid Department of Urology, Faculty of Medicine, Universitas Indonesia, Cipto Mangunkusumo Hospital, Jakarta, Indonesia
  • Agnes Stephanie Harahap Department of Anatomical Pathology, Faculty of Medicine, Universitas Indonesia, Cipto Mangunkusumo Hospital, Jakarta, Indonesia
  • Monik Ediana Miranda Department of Anatomical Pathology, Faculty of Medicine, Universitas Indonesia, Cipto Mangunkusumo Hospital, Jakarta, Indonesia
  • Kahlil Gibran Faculty of Medicine, Universitas Indonesia, Jakarta, Indonesia
  • Nabila Husna Shabrina Department of Computer Engineering, Universitas Multimedia Nusantara, Tangerang, Banten, Indonesia

DOI:

https://doi.org/10.13181/mji.oa.258312

Keywords:

artificial intelligence, computer-assisted diagnosis, computer-assisted image interpretation, pathology, prostatic neoplasms

Abstract

BACKGROUND Accurate diagnosis and grading of prostate cancer are essential for treatment planning. The Role of Artificial Intelligence in Prostate Cancer Intervention and Diagnosis (RAPID) study aims to develop artificial intelligence (AI) models that enhance diagnostic precision in prostate cancer by distinguishing malignant from non-cancerous histopathological findings.

METHODS Histopathological images were collected between 2023 and 2024 at the Department of Anatomical Pathology, Faculty of Medicine, Universitas Indonesia. The dataset included benign prostatic hyperplasia and prostate cancer cases. All slides were digitized and manually annotated by pathologists. Patch-based classification was performed using convolutional neural network and transformer-based models to differentiate malignant from non-malignant tissues.
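The patch-based pipeline described above begins by tiling each digitized whole-slide image into fixed-size patches before classification. As an illustrative sketch only (not the study's code), the tiling step can be written in plain NumPy; the 256-pixel non-overlapping patch size here is an assumption, and the study itself cites the patchify library for this purpose:

```python
import numpy as np

def extract_patches(image: np.ndarray, patch: int = 256, step: int = 256) -> np.ndarray:
    """Tile an (H, W, C) image into square patches.

    Mirrors the sliding-window tiling that libraries such as patchify
    perform; edge remainders smaller than a full patch are discarded.
    """
    h, w, c = image.shape
    rows = (h - patch) // step + 1
    cols = (w - patch) // step + 1
    out = np.empty((rows * cols, patch, patch, c), dtype=image.dtype)
    k = 0
    for i in range(rows):
        for j in range(cols):
            y, x = i * step, j * step
            out[k] = image[y:y + patch, x:x + patch]
            k += 1
    return out

# A mock 1024x768 RGB tile yields a 4x3 grid of 256-pixel patches.
tile = np.zeros((1024, 768, 3), dtype=np.uint8)
patches = extract_patches(tile)
print(patches.shape)  # (12, 256, 256, 3)
```

Each resulting patch would then be stain-normalized and passed to the CNN or transformer classifier.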

RESULTS A total of 529 whole-slide images were processed, yielding 26,418 image patches for model training and testing. Deep learning models achieved strong performance in classification. Architectures including EfficientNetV2B0, Xception, ConvNeXt-Tiny, and Vision Transformer (ViT) achieved near-perfect classification outcomes. EfficientNetV2B0 reached an AUC of 1.00 (95% CI: 1.00–1.00), sensitivity 0.99 (95% CI: 0.99–1.00), and specificity 1.00 (95% CI: 1.00–1.00). Xception and ConvNeXt-Tiny both achieved AUC 1.00 (95% CI: 1.00–1.00) with sensitivity and specificity of 1.00 (95% CI: 1.00–1.00). ViT performed strongly with AUC 0.999 (95% CI: 0.99–1.00), sensitivity 0.99 (95% CI: 0.99–0.99), and specificity 0.99 (95% CI: 0.99–0.99).

CONCLUSIONS RAPID demonstrated high potential as an AI-based diagnostic tool for prostate cancer, showing excellent accuracy in histopathological classification using the Indonesian dataset. These findings highlight the feasibility of deploying deep learning models to support diagnostic decision-making in clinical practice.


References

Ferlay J, Ervik M, Lam F, Laversanne M, Colombet M, Mery L, et al. Global cancer observatory: world [Internet]. International Agency for Research on Cancer; 2024. [cited 2025 Jul 8]. Available from: https://gco.iarc.who.int/media/globocan/factsheets/populations/900-world-fact-sheet.pdf.

Ferlay J, Ervik M, Lam F, Laversanne M, Colombet M, Mery L, et al. Global cancer observatory: prostate [Internet]. International Agency for Research on Cancer; 2024. [cited 2025 Jul 8]. Available from: https://gco.iarc.who.int/media/globocan/factsheets/cancers/27-prostate-fact-sheet.pdf.

Ferlay J, Ervik M, Lam F, Laversanne M, Colombet M, Mery L, et al. Global cancer observatory: Indonesia [Internet]. International Agency for Research on Cancer; 2024. [cited 2025 Jul 8]. Available from: https://gco.iarc.who.int/media/globocan/factsheets/populations/360-indonesia-fact-sheet.pdf.

Ng M, Leslie SW, Baradhi KM. Benign prostatic hyperplasia. [Updated 2023 Jun 12]. In: StatPearls [Internet]. Treasure Island: StatPearls Publishing; 2025. Available from: https://www.ncbi.nlm.nih.gov/books/NBK558964/.

Clark R, Vesprini D, Narod SA. The effect of age on prostate cancer survival. Cancers. 2022;14(17):4149. https://doi.org/10.3390/cancers14174149

Ozkan TA, Eruyar AT, Cebeci OO, Memik O, Ozcan L, Kuskonmaz I. Interobserver variability in Gleason histological grading of prostate cancer. Scand J Urol. 2016;50(6):420-4. https://doi.org/10.1080/21681805.2016.1206619

Oyama T, Allsbrook WC Jr, Kurokawa K, Matsuda H, Segawa A, Sano T, et al. A comparison of interobserver reproducibility of Gleason grading of prostatic carcinoma in Japan and the United States. Arch Pathol Lab Med. 2005;129(8):1004-10. https://doi.org/10.5858/2005-129-1004-ACOIRO

Pantanowitz L, Quiroga-Garza GM, Bien L, Heled R, Laifenfeld D, Linhart C, et al. An artificial intelligence algorithm for prostate cancer diagnosis in whole slide images of core needle biopsies: a blinded clinical validation and deployment study. Lancet Digit Health. 2020;2(8):e407-16. https://doi.org/10.1016/S2589-7500(20)30159-X

Goldenberg SL, Nir G, Salcudean SE. A new era: artificial intelligence and machine learning in prostate cancer. Nat Rev Urol. 2019;16(7):391-403. https://doi.org/10.1038/s41585-019-0193-3

Riaz IB, Harmon S, Chen Z, Naqvi SAA, Cheng L. Applications of artificial intelligence in prostate cancer care: a path to enhanced efficiency and outcomes. Am Soc Clin Oncol Educ Book. 2024;44(3):e438516. https://doi.org/10.1200/EDBK_438516

Ho YT, Dhalas RR, Zohair M, Deb S, Shoaib M, Elmer S, et al. Artificial intelligence in urology-a survey of urology healthcare providers. Soc Int Urol J. 2025;6(4):53. https://doi.org/10.3390/siuj6040053

von Elm E, Altman DG, Egger M, Pocock SJ, Gøtzsche PC, Vandenbroucke JP, et al. The strengthening the reporting of observational studies in epidemiology (STROBE) statement: guidelines for reporting observational studies. PLoS Med. 2007;4(10):e296. https://doi.org/10.1371/journal.pmed.0040296

Bossuyt PM, Reitsma JB, Bruns DE, Gatsonis CA, Glasziou PP, Irwig L, et al. STARD 2015: an updated list of essential items for reporting diagnostic accuracy studies. BMJ. 2015;351:h5527. https://doi.org/10.1136/bmj.h5527

Sounderajah V, Ashrafian H, Golub RM, Shetty S, De Fauw J, Hooft L, et al. Developing a reporting guideline for artificial intelligence-centred diagnostic test accuracy studies: the STARD-AI protocol. BMJ Open. 2021;11(6):e047709. https://doi.org/10.1136/bmjopen-2020-047709

Maier-Hein L, Reinke A, Kozubek M, Martel AL, Arbel T, Eisenmann M, et al. BIAS: transparent reporting of biomedical image analysis challenges. Med Image Anal. 2020;66:101796. https://doi.org/10.1016/j.media.2020.101796

Netto GJ, Amin MB, Berney DM, Compérat EM, Gill AJ, Hartmann A, et al. The 2022 World Health Organization classification of tumors of the urinary system and male genital organs-Part B: prostate and urinary tract tumors. Eur Urol. 2022;82(5):469-82. https://doi.org/10.1016/j.eururo.2022.07.002

Wu W. patchify, version 0.2.3 [Internet]. Python Package Index; 2021. [cited 2025 Jul 8]. Available from: https://pypi.org/project/patchify/.

Otálora S, Marini N, Podareanu D, Hekster R, Tellez D, Van Der Laak J, et al. Stainlib: a python library for augmentation and normalization of histopathology H&E images. bioRxiv (Cold Spring Harbor Laboratory). 2022. https://doi.org/10.1101/2022.05.17.492245

Iglovikov VI. albumentations, version 2.0.8 [Internet]. Python Package Index; 2025 [cited 2025 Jul 8]. Available from: https://pypi.org/project/albumentations/.

Macenko M, Niethammer M, Marron JS, Borland D, Woosley JT, Guan X, et al. A method for normalizing histology slides for quantitative analysis. In: Proceedings of the IEEE International Symposium on Biomedical Imaging: From Nano to Macro; 2009 Jun 28-Jul 1; Boston, USA. Piscataway: IEEE; 2009. p. 1107-10. https://doi.org/10.1109/ISBI.2009.5193250

Sarvamangala DR, Kulkarni RV. Convolutional neural networks in medical image understanding: a survey. Evol Intell. 2022;15(1):1-22. https://doi.org/10.1007/s12065-020-00540-3

Atabansi CC, Nie J, Liu H, Song Q, Yan L, Zhou X. A survey of transformer applications for histopathological image analysis: new developments and future directions. Biomed Eng Online. 2023;22(1):96. https://doi.org/10.1186/s12938-023-01157-0

He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2016 Jun 27-30; Las Vegas, USA. Piscataway: IEEE; 2016. p. 770-8. https://doi.org/10.1109/CVPR.2016.90

Huang G, Liu Z, Van Der Maaten L, Weinberger KQ. Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2017 Jul 21-26; Honolulu, USA. Piscataway: IEEE; 2017. p. 2261-9. https://doi.org/10.1109/CVPR.2017.243

Howard AG, Zhu M, Chen B, Kalenichenko D, Wang W, Weyand T, et al. MobileNets: efficient convolutional neural networks for mobile vision applications [Internet]. arXiv preprint arXiv:1704.04861; 2017 [cited 2025 Jul 8]. Available from: https://arxiv.org/abs/1704.04861.

Tan M, Le QV. EfficientNet: rethinking model scaling for convolutional neural networks [Internet]. arXiv preprint arXiv:1905.11946; 2019 [cited 2025 Jul 8]. Available from: https://arxiv.org/abs/1905.11946.

Chollet F. Xception: Deep learning with depthwise separable convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2017 Jul 21-26; Honolulu, USA. Piscataway: IEEE; 2017. p. 1800-7. https://doi.org/10.1109/CVPR.2017.195

Dosovitskiy A, Beyer L, Kolesnikov A, Weissenborn D, Zhai X, Unterthiner T, et al. An image is worth 16x16 words: transformers for image recognition at scale [Internet]. arXiv preprint arXiv:2010.11929; 2020 [cited 2025 Jul 8]. Available from: https://arxiv.org/abs/2010.11929.

Touvron H, Cord M, Douze M, Massa F, Sablayrolles A, Jégou H. Training data-efficient image transformers & distillation through attention [Internet]. arXiv preprint arXiv:2012.12877; 2020 [cited 2025 Jul 8]. Available from: https://arxiv.org/abs/2012.12877.

Liu Z, Mao H, Wu CY, Feichtenhofer C, Darrell T, Xie S. A ConvNet for the 2020s. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); 2022 Jun 19-24; New Orleans, USA. Piscataway: IEEE; 2022. p. 11966-76. https://doi.org/10.1109/CVPR52688.2022.01167

Huang TL, Lu NH, Huang YH, Twan WH, Yeh LR, Liu KY, et al. Transfer learning with CNNs for efficient prostate cancer and BPH detection in transrectal ultrasound images. Sci Rep. 2023;13(1):21849. https://doi.org/10.1038/s41598-023-49159-1

Lin T, Wang Y, Liu X, Qiu X. A survey of transformers. AI Open. 2022;3:111-32. https://doi.org/10.1016/j.aiopen.2022.10.001

Xu H, Xu Q, Cong F, Kang J, Han C, Liu Z, et al. Vision transformers for computational histopathology. IEEE Rev Biomed Eng. 2024;17:63-79. https://doi.org/10.1109/RBME.2023.3297604

Bulten W, Pinckaers H, van Boven H, Vink R, de Bel T, van Ginneken B, et al. Automated deep-learning system for Gleason grading of prostate cancer using biopsies: a diagnostic study. Lancet Oncol. 2020;21(2):233-41. https://doi.org/10.1016/S1470-2045(19)30739-9

Otálora S, Marini N, Müller H, Atzori M. Combining weakly and strongly supervised learning improves strong supervision in Gleason pattern classification. BMC Med Imaging. 2021;21(1):77. https://doi.org/10.1186/s12880-021-00609-0

Chaurasia AK, Harris HC, Toohey PW, Hewitt AW. A generalised vision transformer-based self-supervised model for diagnosing and grading prostate cancer using histological images. Prostate Cancer Prostatic Dis. 2025;28(1):1-9. https://doi.org/10.1038/s41391-025-00957-w

Campanella G, Hanna MG, Geneslaw L, Miraflor A, Werneck Krauss Silva V, Busam KJ, et al. Clinical-grade computational pathology using weakly supervised deep learning on whole slide images. Nat Med. 2019;25(8):1301-9. https://doi.org/10.1038/s41591-019-0508-1

Bulten W, Kartasalo K, Chen PC, Ström P, Pinckaers H, Nagpal K, et al.; PANDA challenge consortium. Artificial intelligence for diagnosis and Gleason grading of prostate cancer: the PANDA challenge. Nat Med. 2022;28(1):154-63. https://doi.org/10.1038/s41591-021-01620-2

Steiner DF, MacDonald R, Liu Y, Truszkowski P, Hipp JD, Gammage C, et al. Impact of deep learning assistance on the histopathologic review of lymph nodes for metastatic breast cancer. Am J Surg Pathol. 2018;42(12):1636-46. https://doi.org/10.1097/PAS.0000000000001151

Wang X, Yang S, Zhang J, Wang M, Zhang J, Huang J, et al. TransPath: Transformer-based self-supervised learning for histopathological image classification. In: de Bruijne M, et al., editors. Medical Image Computing and Computer Assisted Intervention - MICCAI 2021. Lecture Notes in Computer Science. Cham: Springer; 2021. p. 186-96. https://doi.org/10.1007/978-3-030-87237-3_18

Deng J, Dong W, Socher R, Li LJ, Li K, Fei-Fei L. ImageNet: A large-scale hierarchical image database. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2009 Jun 20-25; Miami, USA. Piscataway: IEEE; 2009. p. 248-55. https://doi.org/10.1109/CVPR.2009.5206848

Published

2025-09-30

How to Cite

Hamid ARAH, Harahap AS, Miranda ME, Gibran K, Shabrina NH. Artificial intelligence for enhanced diagnostic precision of prostate cancer. Med J Indones [Internet]. 2025 Sep. 30 [cited 2025 Oct. 10];34(3):189-200. Available from: https://mji.ui.ac.id/journal/index.php/mji/article/view/8312

Section

Clinical Research
