Using AI to predict breast cancer and personalize care


Tyler

« on: May 10, 2019, 12:00:23 pm »
7 May 2019, 3:00 pm

Despite major advances in genetics and modern imaging, a breast cancer diagnosis still catches most patients by surprise. For some, it comes too late: later diagnosis means more aggressive treatment, less certain outcomes, and higher medical expenses. As a result, identifying patients at risk has become a central pillar of breast cancer research and of effective early detection.

With that in mind, a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Massachusetts General Hospital (MGH) has created a new deep-learning model that can predict from a mammogram if a patient is likely to develop breast cancer as much as five years in the future. Trained on mammograms and known outcomes from over 60,000 MGH patients, the model learned the subtle patterns in breast tissue that are precursors to malignant tumors.

MIT Professor Regina Barzilay, herself a breast cancer survivor, says that the hope is for systems like these to enable doctors to customize screening and prevention programs at the individual level, making late diagnosis a relic of the past.

Although mammography has been shown to reduce breast cancer mortality, there is continued debate about how often to screen and when to start. While the American Cancer Society recommends annual screening starting at age 45, the U.S. Preventive Services Task Force recommends screening every two years starting at age 50.

“Rather than taking a one-size-fits-all approach, we can personalize screening around a woman’s risk of developing cancer,” says Barzilay, senior author of a new paper about the project out today in Radiology. “For example, a doctor might recommend that one group of women get a mammogram every other year, while another higher-risk group might get supplemental MRI screening.” Barzilay is the Delta Electronics Professor at CSAIL and in the Department of Electrical Engineering and Computer Science at MIT, and a member of the Koch Institute for Integrative Cancer Research at MIT.

The team’s model was significantly better at predicting risk than existing approaches: It accurately placed 31 percent of all cancer patients in its highest-risk category, compared to only 18 percent for traditional models.

Harvard Professor Constance Lehman says that there’s previously been minimal support in the medical community for screening strategies that are risk-based rather than age-based.

“This is because before we did not have accurate risk assessment tools that worked for individual women,” says Lehman, a professor of radiology at Harvard Medical School and division chief of breast imaging at MGH. “Our work is the first to show that it’s possible.”  

Barzilay and Lehman co-wrote the paper with lead author Adam Yala, a CSAIL PhD student. Other MIT co-authors include PhD student Tal Schuster and former master’s student Tally Portnoi.

How it works

Since the first breast-cancer risk model from 1989, development has largely been driven by human knowledge and intuition of what major risk factors might be, such as age, family history of breast and ovarian cancer, hormonal and reproductive factors, and breast density.

However, most of these markers are only weakly correlated with breast cancer. As a result, such models still aren’t very accurate at the individual level, and many organizations continue to feel risk-based screening programs are not possible, given those limitations.
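
To make the contrast concrete, a classical risk model of this kind can be thought of as a simple regression over a handful of hand-picked risk factors. The sketch below (Python, using scikit-learn) is purely illustrative: the features, data, and labels are hypothetical and do not come from any published risk model.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy feature vectors: [age, family_history (0/1), age_at_first_birth, breast_density (1-4)]
X = np.array([
    [45, 0, 27, 2],
    [62, 1, 31, 3],
    [50, 0, 24, 1],
    [58, 1, 29, 4],
])
# Toy labels: whether the patient developed breast cancer within the follow-up window
y = np.array([0, 1, 0, 1])

# Fit a logistic regression over the hand-chosen risk factors
classical_model = LogisticRegression().fit(X, y)

# Estimated risk for a new (hypothetical) patient
new_patient = np.array([[55, 1, 30, 3]])
print(classical_model.predict_proba(new_patient)[0, 1])

Because each of these factors is only weakly correlated with the disease, a model like this can rank broad populations but says little about any individual woman.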

Rather than manually identifying the patterns in a mammogram that drive future cancer, the MIT/MGH team trained a deep-learning model to deduce those patterns directly from the data. Using information from more than 90,000 mammograms, the model learned to pick up patterns too subtle for the human eye to detect.
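
By contrast, an image-based model consumes the mammogram itself. The following is a minimal sketch, assuming a generic convolutional network in PyTorch; it is an illustrative stand-in, not the architecture the MIT/MGH team actually published.

import torch
import torch.nn as nn

class MammogramRiskModel(nn.Module):
    """Toy convolutional network mapping a mammogram to a five-year cancer-risk probability."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),  # pool to a single feature vector per image
        )
        self.classifier = nn.Linear(32, 1)  # single logit for five-year risk

    def forward(self, x):
        h = self.features(x).flatten(1)
        return torch.sigmoid(self.classifier(h))

# Toy usage: a batch of two single-channel "mammograms" at 256x256 resolution
model = MammogramRiskModel()
images = torch.randn(2, 1, 256, 256)
risk = model(images)   # one probability in (0, 1) per image
print(risk.shape)      # torch.Size([2, 1])

In a real pipeline a network of this general shape would be trained on tens of thousands of mammograms paired with known outcomes, as described above, so that learned image features, rather than hand-chosen risk factors, drive the prediction.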

“Since the 1960s radiologists have noticed that women have unique and widely variable patterns of breast tissue visible on the mammogram,” says Lehman. “These patterns can represent the influence of genetics, hormones, pregnancy, lactation, diet, weight loss, and weight gain. We can now leverage this detailed information to be more precise in our risk assessment at the individual level.”  

Making cancer detection more equitable

The project also aims to make risk assessment more accurate for racial minorities in particular. Many early models were developed on predominantly white populations and were much less accurate for other groups. The MIT/MGH model, by contrast, is equally accurate for white and black women. This is especially important given that black women have been shown to be 42 percent more likely to die from breast cancer, owing to a wide range of factors that may include differences in detection and access to health care.

“It’s particularly striking that the model performs equally as well for white and black people, which has not been the case with prior tools,” says Allison Kurian, an associate professor of medicine and health research/policy at Stanford University School of Medicine. “If validated and made available for widespread use, this could really improve on our current strategies to estimate risk.”

Barzilay says their system could also one day enable doctors to use mammograms to see if patients are at a greater risk for other health problems, like cardiovascular disease or other cancers. The researchers are eager to apply the models to other diseases and ailments, and especially those with less effective risk models, like pancreatic cancer.

“Our goal is to make these advancements a part of the standard of care,” says Yala. “By predicting who will develop cancer in the future, we can hopefully save lives and catch cancer before symptoms ever arise.”

Source: MIT News - Computer Science and Artificial Intelligence Laboratory (CSAIL)

Reprinted with permission of MIT News : MIT News homepage



Use the link at the top of the story to get to the original article.

 

