AI is transforming healthcare: 5 things to know

May 28, 2023


Another issue in commercializing AI healthtech is the data shift problem, says Van Leemput. “You have training data and an algorithm that work for one type of scanner or for one study, and you sell a product based on that, which then fails in a different context or with different hardware. Studies have shown that many commercially available systems have not been validated thoroughly in this respect.” To counter this, Van Leemput is developing methods that can automatically adapt to changes in the images and handle artifacts and limitations more robustly.
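The failure mode Van Leemput describes can be shown with a toy sketch. The numbers, the "scanner" framing, and the simple threshold classifier below are invented for illustration and are not part of any real product:

```python
# Toy illustration of data shift (not a real medical model): a threshold
# classifier fit on scanner A fails when scanner B offsets image intensities.

def fit_threshold(values, labels):
    """Pick the midpoint between the two class means as a decision threshold."""
    pos = [v for v, y in zip(values, labels) if y == 1]
    neg = [v for v, y in zip(values, labels) if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def accuracy(values, labels, threshold):
    """Fraction of cases where 'value above threshold' matches the label."""
    preds = [1 if v > threshold else 0 for v in values]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

# Scanner A: healthy tissue around 100, lesion around 140 (arbitrary units)
train_vals = [95, 100, 105, 135, 140, 145]
train_lbls = [0, 0, 0, 1, 1, 1]
t = fit_threshold(train_vals, train_lbls)   # threshold lands at 120.0

# Scanner B: identical anatomy, but a +50 intensity offset (the data shift)
test_vals = [v + 50 for v in train_vals]

print(accuracy(train_vals, train_lbls, t))  # 1.0 on scanner A
print(accuracy(test_vals, train_lbls, t))   # 0.5 on scanner B
```

A fixed decision rule learned on one scanner becomes meaningless once the intensity distribution moves, which is exactly why methods that adapt automatically to such changes are needed.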

3. The decisions of black box AI models must be understandable to be trusted

Trustworthiness of algorithms needs to be more widely addressed, says Gröhn. “An emphasis on predictive performance often shifts attention away from understanding the model’s logic. There needs to be trust in the logic behind a model, and an understanding of its limitations and the uncertainty of its predictions, because no model is perfect.” Data scientists, doctors, and social scientists are actively working together to make models more transparent and acceptable.

“It’s hard for a person to understand how a neural network estimates your disease status from a brain scan, for example,” adds Van Leemput. Creating AI that can explain its decisions in ways humans find easy to understand requires developing different types of algorithms. When doctors explain their decisions, they often emphasize how specific conditions cause specific changes in anatomy, and how a different diagnosis would have required the images to look different. This is fundamentally distinct from simply assigning a diagnosis based on how similar an image is to other images in a training dataset, which is the prevailing approach in AI today.

4. AI will not replace doctors

But it will enable doctors to be more efficient with their time. “AI can do boring tasks faster and can help detect subtle changes more accurately. If you make a better bicycle, it does not replace the cyclist, or in the case of image analysis, the radiologist,” says Van Leemput.

Gröhn agrees: “Machines can see more pixels, so it’s smart to integrate their capabilities into detection; doctors can then concentrate on difficult cases. If a machine can diagnose 90 percent of patients very reliably, it makes sense to let the machine do that. On the other hand, it is very important that the machine clearly communicates its limitations on the remaining 10 percent.” Tacit knowledge and lived experience will remain distinctly human assets.
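Gröhn’s 90/10 split is essentially selective prediction: the model acts only when confident and refers everything else to a clinician. A minimal sketch, where the `triage` helper and the confidence threshold are invented for illustration:

```python
# Toy sketch of "let the machine handle the confident cases": the model
# returns a decision only when confident, otherwise refers to a doctor.

def triage(prob_disease, threshold=0.9):
    """Return a decision, or 'refer' when the model is not confident enough."""
    if prob_disease >= threshold:
        return "positive"
    if prob_disease <= 1 - threshold:
        return "negative"
    return "refer"  # uncertain band: hand the case to a clinician

cases = [0.99, 0.02, 0.55, 0.95, 0.40]
print([triage(p) for p in cases])
# → ['positive', 'negative', 'refer', 'positive', 'refer']
```

The design point is the explicit “refer” outcome: rather than silently guessing on hard cases, the system surfaces its own limitations, which is what Gröhn asks of the remaining 10 percent.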

As the cost of healthcare increases, AI may look like a way to replace expensive services, but this should be approached carefully, says Gröhn. “Computer science may provide algorithmic answers to how resources should be allocated, but we can’t ignore societal concerns or reduce the quality of care based solely on these predictions.”

5. Regulation and interdisciplinary expertise are vital for developing AI that is both useful and fair

There is plenty of hype in the healthtech industry. “Someone can raise money and sell their model or product to a hospital, and it can fail without warning because it was trained on data from somewhere else. We absolutely need regulation to ensure that AI methods are tested and validated in the real world and supported by evidence,” says Van Leemput.

Methods that work well at some point can also become outdated, with performance degrading as time passes. “Little things, like changing the order of various options in a form at a hospital, can have dramatic effects, where your software suddenly stops predicting well. Continuous testing and validation of AI in healthcare is needed,” emphasizes Van Leemput.
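Continuous validation of the kind Van Leemput calls for can start as simply as comparing live performance against the validation baseline. The function name, inputs, and tolerance below are assumptions for illustration, not a real monitoring system:

```python
# Illustrative drift check (assumed workflow): flag a deployed model for
# revalidation when its recent accuracy falls below the validation baseline.

def needs_revalidation(baseline_acc, recent_correct, recent_total, tolerance=0.05):
    """True when recent accuracy drops more than `tolerance` below baseline."""
    recent_acc = recent_correct / recent_total
    return recent_acc < baseline_acc - tolerance

print(needs_revalidation(0.92, 90, 100))  # False: 0.90 is within tolerance
print(needs_revalidation(0.92, 80, 100))  # True: 0.80 signals degradation
```

Even a check this simple would catch the scenario Van Leemput describes, where a small upstream change in a hospital form quietly degrades predictions over time.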

Gröhn says that, ideally, AI will integrate background knowledge from other domains, such as biology and economics, always depending on the use case for that AI system. “AI or data science by itself isn’t enough, because one size does not fit all when it comes to health technologies. Domain experts can inform our understanding of what good quality data is,” says Gröhn, citing the goals of the Datalit project where he is conducting his PhD research.

Source: Aalto University
