The software detects suspicious skin lesions in photos from smartphones

Melanoma, which accounts for more than 70% of all skin cancer deaths, occurs when pigment-producing cells called melanocytes multiply uncontrollably. This cancer is usually diagnosed by visual inspection of suspicious pigmented lesions (SPLs), and early detection in a doctor’s office is often life-saving. However, this approach has several drawbacks, including the high volume of potential lesions that must be biopsied and tested before a diagnosis can be confirmed.

To overcome these problems, researchers at MIT and other Boston institutions have developed a new deep learning tool to more easily identify harmful lesions from photographs taken with a smartphone.

A wide-field image of lesions on a patient is classified by the new deep learning technique to identify suspicious lesions. (Image: MIT)

The paper, published in Science Translational Medicine, describes the development of the tool using a branch of artificial intelligence called deep convolutional neural networks (DCNNs). The researchers trained their tool on more than 20,000 images taken from 133 patients and from publicly available databases. Notably, the photographs were taken with a variety of ordinary personal cameras, to ensure the tool would work on real-world examples.
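The article does not specify the network architecture, so the following is only a minimal sketch of a deep convolutional binary classifier in PyTorch; the layer sizes, input resolution, and single-logit output are illustrative assumptions, not the model described in the paper.

```python
import torch
import torch.nn as nn

class LesionClassifier(nn.Module):
    """Toy DCNN mapping an RGB photo crop to a suspicious/non-suspicious score."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 1)  # single logit: suspicious vs. not

    def forward(self, x):
        x = self.features(x)       # (N, 64, 1, 1)
        x = torch.flatten(x, 1)    # (N, 64)
        return self.classifier(x)  # (N, 1) raw logits

# Example: score one 128x128 crop from a smartphone photo (values in [0, 1]).
model = LesionClassifier()
crop = torch.rand(1, 3, 128, 128)
prob_suspicious = torch.sigmoid(model(crop))
```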

Once trained on known examples, the tool achieved 90.3% sensitivity and 89.9% specificity in distinguishing SPLs from non-suspicious lesions, skin, and complex backgrounds.
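As a reminder of what these two figures measure, here is a small self-contained calculation of sensitivity and specificity from binary labels; the example counts are invented for illustration and are not data from the study.

```python
def sensitivity_specificity(y_true, y_pred):
    """y_true/y_pred: lists of 0 (non-suspicious) or 1 (suspicious)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sensitivity = tp / (tp + fn)  # fraction of true SPLs that get flagged
    specificity = tn / (tn + fp)  # fraction of benign cases correctly passed
    return sensitivity, specificity

# Toy example with six lesions (labels are made up, not study data).
print(sensitivity_specificity([1, 1, 0, 0, 1, 0], [1, 1, 0, 1, 1, 0]))
```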

An interesting aspect that distinguishes this tool from others is its identification of lesions using the “ugly duckling” criterion. This method, already used by dermatologists, assumes that most of an individual’s moles look similar to one another and are usually not suspicious, while moles of markedly different appearance are flagged as “ugly ducklings” for further examination.
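One way to picture the ugly-duckling idea is to score each of a patient’s lesions by how far its feature vector sits from the rest. The sketch below is a hypothetical illustration using a simple z-score distance, not the paper’s actual method, and the feature values are invented.

```python
import numpy as np

def ugly_duckling_scores(features):
    """features: (n_lesions, n_features) array for one patient.
    Returns a distance-like score; larger means more of an outlier."""
    mean = features.mean(axis=0)
    std = features.std(axis=0) + 1e-8  # avoid division by zero
    z = (features - mean) / std        # per-feature deviation from the patient's norm
    return np.linalg.norm(z, axis=1)

# Example: five lesions described by three invented features; the last one stands out.
lesions = np.array([[1.0, 0.9, 1.1],
                    [1.1, 1.0, 1.0],
                    [0.9, 1.1, 0.9],
                    [1.0, 1.0, 1.0],
                    [3.0, 2.5, 0.2]])
print(ugly_duckling_scores(lesions))
```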

Criteria for classifying lesions as suspicious or non-suspicious include their circularity, convexity, inertia, intensity, and size.
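For illustration, such descriptors can be approximated from a binary lesion mask. The sketch below assumes scikit-image and uses common definitions (circularity as 4πA/P², convexity as solidity, inertia as the eigenvalues of the inertia tensor), which may differ from the exact definitions used in the paper.

```python
import numpy as np
from skimage.measure import label, regionprops

def lesion_features(mask, gray):
    """mask: boolean array marking one lesion; gray: grayscale photo crop."""
    region = regionprops(label(mask.astype(int)), intensity_image=gray)[0]
    area = region.area
    perimeter = region.perimeter
    return {
        "circularity": 4 * np.pi * area / (perimeter ** 2 + 1e-8),  # 1.0 for a perfect disk
        "convexity": region.solidity,              # area divided by convex hull area
        "inertia": region.inertia_tensor_eigvals,  # shape elongation
        "intensity": region.mean_intensity,        # average pigmentation inside the mask
        "size": area,
    }

# Example: a synthetic 40x40 crop containing one square "lesion".
gray = np.zeros((40, 40))
gray[10:30, 10:30] = 0.8
print(lesion_features(gray > 0.5, gray))
```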

Training the system on different characteristics of the moles, such as circularity, size, and intensity, greatly improved its predictive accuracy: the algorithm matched the consensus of experienced dermatologists 88% of the time and the opinion of individual dermatologists 86% of the time. If the technology is validated, it could yield significant savings in the clinical time and cost involved in imaging and analyzing individual lesions.

“Our research suggests that systems that take advantage of computer vision and deep neural networks, by quantifying these common signs, can achieve accuracy comparable to expert dermatologists,” Soenksen, the first author of the paper, said in a press release from MIT. “We hope our research will revitalize the desire to perform more efficient dermatological screening in primary care settings to drive appropriate referrals.”

Paper in Science Translational Medicine: Using deep learning for dermatologist-level detection of suspicious pigmented skin lesions from wide-field images

Via: MIT
