Automated system identifies dense tissue, a risk factor for breast cancer, in mammograms

16 October 2018, 4:09 pm

Researchers from MIT and Massachusetts General Hospital have developed an automated model that assesses dense breast tissue in mammograms — which is an independent risk factor for breast cancer — as reliably as expert radiologists.

This marks the first time a deep-learning model of its kind has successfully been used in a clinic on real patients, according to the researchers. With broad implementation, the researchers hope the model can help bring greater reliability to breast density assessments across the nation.

It’s estimated that more than 40 percent of U.S. women have dense breast tissue, which alone increases the risk of breast cancer. Moreover, dense tissue can mask cancers on the mammogram, making screening more difficult. As a result, 30 U.S. states mandate that women must be notified if their mammograms indicate they have dense breasts.

But breast density assessments rely on subjective human judgment, and results vary, sometimes dramatically, across radiologists. The MIT and MGH researchers trained a deep-learning model on tens of thousands of high-quality digital mammograms to distinguish different types of breast tissue, from fatty to extremely dense, based on expert assessments. Given a new mammogram, the model can then produce a density measurement that closely aligns with expert opinion.

“Breast density is an independent risk factor that drives how we communicate with women about their cancer risk. Our motivation was to create an accurate and consistent tool that can be shared and used across health care systems,” says Adam Yala, a PhD student in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and second author on a paper describing the model that was published today in Radiology.

The other co-authors are first author Constance Lehman, professor of radiology at Harvard Medical School and the director of breast imaging at the MGH; and senior author Regina Barzilay, the Delta Electronics Professor at CSAIL and the Department of Electrical Engineering and Computer Science at MIT and a member of the Koch Institute for Integrative Cancer Research at MIT.

Mapping density

The model is built on a convolutional neural network (CNN), an architecture widely used for computer vision tasks. The researchers trained and tested it on a dataset of more than 58,000 mammograms, randomly selected from more than 39,000 women screened between 2009 and 2011: around 41,000 mammograms for training and about 8,600 for testing.
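The split described above can be sketched as a simple random partition. This is illustrative only: the article does not describe the splitting procedure, and in practice one would typically split by patient rather than by mammogram so the same woman never appears in both sets.

```python
import random

def split_mammograms(exam_ids, train_frac=0.71, seed=0):
    """Randomly partition exam IDs into training and test sets.

    Illustrative sketch: the study used roughly 41,000 of its 58,000+
    mammograms for training and about 8,600 for testing (~71/29 split).
    """
    rng = random.Random(seed)
    ids = list(exam_ids)
    rng.shuffle(ids)
    cut = int(len(ids) * train_frac)
    return ids[:cut], ids[cut:]

train_ids, test_ids = split_mammograms(range(58000))
```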

Each mammogram in the dataset has a standard Breast Imaging Reporting and Data System (BI-RADS) breast density rating in one of four categories: fatty, scattered density, heterogeneously dense, and dense. In both the training and testing mammograms, about 40 percent were assessed as heterogeneous or dense.
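The four-category scale and the two-way "dense vs. not dense" grouping used for notification can be captured in a few lines. Category names are paraphrased from the article, and the helper function is hypothetical:

```python
# Four BI-RADS density categories, from least to most dense.
BI_RADS_CATEGORIES = ["fatty", "scattered", "heterogeneous", "dense"]

# The two densest categories are the ones that trigger density
# notification under many state laws.
NOTIFY_AS_DENSE = {"heterogeneous", "dense"}

def is_dense(category):
    """True if a BI-RADS category falls in the 'dense' half of the scale."""
    return category in NOTIFY_AS_DENSE
```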

During training, the model is given random mammograms to analyze and learns to map each one to its expert-assigned density rating. Dense breasts, for instance, contain glandular and fibrous connective tissue, which appears as compact networks of thick white lines and solid white patches. Fatty tissue networks appear much thinner, with gray areas throughout. In testing, the model observes new mammograms and predicts the most likely density category.
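At test time, a classifier of this kind typically outputs one probability per density category and reports the most likely one. A minimal sketch of that final step, with illustrative names not taken from the paper:

```python
BI_RADS_CATEGORIES = ["fatty", "scattered", "heterogeneous", "dense"]

def predict_density(class_probs):
    """Return the category with the highest predicted probability.

    `class_probs` holds one probability per category, in
    BI_RADS_CATEGORIES order, e.g. the softmax output of the
    network's final layer.
    """
    best_index = max(range(len(class_probs)), key=lambda i: class_probs[i])
    return BI_RADS_CATEGORIES[best_index]
```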

Matching assessments

The model was implemented at the breast imaging division at MGH. In a traditional workflow, when a mammogram is taken, it’s sent to a workstation for a radiologist to assess. The researchers’ model is installed on a separate machine that intercepts each scan before it reaches the radiologist and assigns it a density rating. When radiologists pull up a scan at their workstations, they’ll see the model’s assigned rating, which they can then accept or reject.

“It takes less than a second per image … [and it can be] easily and cheaply scaled throughout hospitals,” Yala says.

On over 10,000 mammograms at MGH from January to May of this year, the model agreed with the hospital’s radiologists 94 percent of the time in a binary test: determining whether breasts were either heterogeneous and dense, or fatty and scattered. Across all four BI-RADS categories, it matched the radiologists’ assessments 90 percent of the time. “MGH is a top breast imaging center with high inter-radiologist agreement, and this high quality dataset enabled us to develop a strong model,” Yala says.
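Both figures are plain percent agreement, computed either over the four categories or over the binary grouping. A small sketch of the computation, where the binary grouping follows the article but the function names are my own:

```python
DENSE_GROUP = {"heterogeneous", "dense"}

def to_binary(category):
    """Collapse the four BI-RADS categories into the binary test's groups."""
    return "dense" if category in DENSE_GROUP else "not dense"

def percent_agreement(ratings_a, ratings_b):
    """Percentage of cases on which two sets of ratings coincide."""
    matches = sum(a == b for a, b in zip(ratings_a, ratings_b))
    return 100.0 * matches / len(ratings_a)
```

For the binary figure, both the model's and the radiologists' four-category ratings would be mapped through `to_binary` before comparison.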

In general testing on the original dataset, the model matched the original human expert interpretations on 77 percent of mammograms across the four BI-RADS categories and, in the binary test, on 87 percent.

To compare against traditional prediction models, the researchers used a metric called the kappa score, where 1 indicates that predictions agree every time and lower values indicate less agreement. Kappa scores for commercially available automatic density-assessment models top out at about 0.6. In the clinical application, the researchers’ model scored a kappa of 0.85 and, in testing, 0.67. This means the model makes better predictions than traditional models.
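Cohen's kappa, the standard form of this statistic for two raters, corrects raw agreement for the agreement expected by chance. The paper's exact computation isn't reproduced in the article, so treat this as the textbook definition:

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: chance-corrected agreement between two raters.

    1.0 means the raters agree on every case; 0.0 means agreement is
    no better than chance.
    """
    n = len(ratings_a)
    # Observed agreement: fraction of cases rated identically.
    po = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected agreement if the two raters were independent.
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    pe = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (po - pe) / (1 - pe)
```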

In an additional experiment, the researchers tested the model’s agreement with the consensus of five MGH radiologists on 500 random test mammograms. The radiologists assigned breast density ratings without knowledge of the original assessment or of their peers’ or the model’s assessments. In this experiment, the model achieved a kappa score of 0.78 against the radiologist consensus.
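The article doesn't describe how the five readers' consensus was formed; a simple majority vote is one common stand-in:

```python
from collections import Counter

def consensus(ratings):
    """Most common rating across readers (simple majority vote).

    A hypothetical stand-in for the study's consensus procedure,
    which the article does not detail.
    """
    return Counter(ratings).most_common(1)[0][0]
```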

Next, the researchers aim to scale the model into other hospitals. “Building on this translational experience, we will explore how to transition machine-learning algorithms developed at MIT into the clinic, benefiting millions of patients,” Barzilay says. “This is a charter of the new center at MIT — the Abdul Latif Jameel Clinic for Machine Learning in Health at MIT — that was recently launched. And we are excited about new opportunities opened up by this center.”

Source: MIT News (CSAIL)

Reprinted with permission of MIT News.


