Abstract
Background: Primary care providers, dermatology specialists, and health care access are key components of primary prevention, early diagnosis, and treatment of skin cancer. Artificial intelligence (AI) offers the promise of diagnostic support for nonspecialists, but real-world clinical validation of AI in primary care is lacking.
Objective: We aimed to (1) assess the reliability of an AI-based clinical triage algorithm in classifying benign and malignant skin lesions and (2) evaluate the quality of images obtained in primary care using the study camera (3Gen DermLite Cam v4 or similar).
Methods: This was a single-center, prospective, double-blinded, observational study with a predetermined design. We recruited participants with suspected skin cancer from 20 primary care practices; all had been referred for assessment via teledermatology. A second set of photographs, taken with a standardized camera, was processed by the AI algorithm. We evaluated image quality and compared the consensus diagnosis of two teledermatologists (the “gold standard”) with the AI output and, where applicable, with histology.
Results: Our primary outcome assessment stratified 391 skin lesions by management as benign, uncertain, or malignant. Lesions classified as uncertain (those with diagnostic or management uncertainty) were excluded from the sensitivity and specificity analyses. For the remaining 242 lesions, the sensitivity was 97.26% (95% CI 93.13%-99.25%) and the specificity was 97.92% (95% CI 92.68%-99.75%). Compared with the histological diagnoses available for 123 lesions, the sensitivity was 100% (95% CI 95.85%-100%) and the specificity was 72.22% (95% CI 54.81%-85.80%).
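For illustration only, the sketch below shows how sensitivity, specificity, and exact (Clopper-Pearson) 95% confidence intervals of the kind reported above can be computed from a 2×2 confusion matrix. The counts used are hypothetical placeholders; the abstract reports only percentages, not the underlying counts.

```python
# Minimal sketch (not from the study): sensitivity, specificity, and exact
# (Clopper-Pearson) 95% CIs from a 2x2 confusion matrix.
from scipy.stats import beta


def clopper_pearson(successes: int, trials: int, alpha: float = 0.05):
    """Exact (Clopper-Pearson) confidence interval for a binomial proportion."""
    lower = 0.0 if successes == 0 else beta.ppf(alpha / 2, successes, trials - successes + 1)
    upper = 1.0 if successes == trials else beta.ppf(1 - alpha / 2, successes + 1, trials - successes)
    return lower, upper


def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int):
    """Point estimates and 95% CIs for sensitivity and specificity."""
    sens, sens_ci = tp / (tp + fn), clopper_pearson(tp, tp + fn)
    spec, spec_ci = tn / (tn + fp), clopper_pearson(tn, tn + fp)
    return (sens, sens_ci), (spec, spec_ci)


# Hypothetical counts, used only to demonstrate the calculation.
(sens, sens_ci), (spec, spec_ci) = sensitivity_specificity(tp=140, fn=4, tn=94, fp=2)
print(f"Sensitivity {sens:.2%} (95% CI {sens_ci[0]:.2%}-{sens_ci[1]:.2%})")
print(f"Specificity {spec:.2%} (95% CI {spec_ci[0]:.2%}-{spec_ci[1]:.2%})")
```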
Conclusions: The AI algorithm demonstrated encouraging results, with high sensitivity and specificity concordant with previous AI studies. It shows potential as a triage tool used alongside teledermatology to augment health care and improve access to dermatology. Larger real-world studies are needed to assess the reliability, usability, and cost-effectiveness of the algorithm in primary care.
Acknowledgments: MoleMap NZ, which developed the AI algorithm, provided some funding for this study, including partial sponsorship of HT's salary. AB is a shareholder of and consultant to MoleMap Ltd, the provider of the AI algorithm.
Conflicts of Interest: None declared.
doi:10.2196/35395
Edited by T Derrick. This is a non–peer-reviewed article. Submitted 02.12.21; accepted 03.12.21; published 10.12.21.
Copyright © Harmony Thompson, Amanda Oakley, Michael B Jameson, Adrian Bowling. Originally published in Iproceedings (https://www.iproc.org), 10.12.2021.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in Iproceedings, is properly cited. The complete bibliographic information, a link to the original publication on https://www.iproc.org/, as well as this copyright and license information must be included.