A decade ago, artificial intelligence in healthcare was more promise than practice – an idea rooted in predictive models and research pilots, not patient care. However, even then, the data was already accumulating. Across the healthcare industry, information was being generated at a scale few industries could match, from electronic health records and diagnostic scans to insurance claims and pharmacy transactions.
It was this data that laid the groundwork for the widespread use of AI in medicine today – the result of years of accumulated health records, diagnostic scans, and behavioral patterns finally being structured, accessed, and interpreted at scale. From chatbots fielding late-night symptom queries to machine learning tools reviewing thousands of medical images in seconds, AI is beginning to move from the margins of medicine into its operational core – not only because the models are more advanced, but because the data is finally being put to use.
Some of the most immediate applications are in diagnosis. The US Food and Drug Administration (FDA) has already authorized more than 1,000 AI- and machine learning-enabled medical devices, with over half approved in just the past three years[1]. Radiology leads the field, with computer-aided detection and diagnosis software experiencing nearly 18% growth during that period.
Hope, hype, and a missing link: data
Radiology’s rapid uptake of AI is no surprise. The specialty sits at the intersection of high data volume, diagnostic complexity, and clinical urgency. Radiologists face a constant battle with time and volume, often reviewing hundreds of scans under pressure – a workload that makes the case for assistance, especially from tools that can process information at scale.
AI-powered image recognition can now detect patterns, anomalies, or early disease signals with a level of speed and consistency no human could maintain. Rather than acting alone, these tools operate as a second set of eyes, flagging concerns, reducing errors, and in some cases, prompting earlier interventions.
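To make that "second set of eyes" pattern concrete, here is a minimal sketch in Python. Everything in it – the DenseNet-121 backbone, the weights file, the preprocessing, and the review threshold – is an illustrative assumption, not the workings of any specific approved device.

```python
# Sketch of the triage pattern: a classifier scores each scan, and anything
# above a review threshold is routed to a radiologist first. All names here
# (model file, threshold) are hypothetical.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Hypothetical binary "abnormality" model built on DenseNet-121, a backbone
# commonly used in chest X-ray research.
model = models.densenet121(weights=None)
model.classifier = torch.nn.Linear(model.classifier.in_features, 1)
model.load_state_dict(torch.load("cxr_abnormality.pt"))  # hypothetical weights
model.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.Grayscale(num_output_channels=3),  # X-rays are single-channel
    T.ToTensor(),
])

REVIEW_THRESHOLD = 0.5  # in practice, tuned on a validation set

def flag_for_review(path: str) -> tuple[bool, float]:
    """Return (needs_priority_review, abnormality_score) for one scan."""
    image = preprocess(Image.open(path)).unsqueeze(0)
    with torch.no_grad():
        score = torch.sigmoid(model(image)).item()
    return score >= REVIEW_THRESHOLD, score
```

The design point is in the return value: the score prioritizes a scan for a human reader rather than issuing a diagnosis on its own.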
The learning also runs in both directions. Over time, the tools themselves can adapt: as clinicians use AI-assisted platforms to make decisions or review information, their inputs refine the algorithms. In this way, the tools evolve alongside the professionals who use them, not just informing decisions but learning from them.
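As a hedged sketch of that feedback loop – not how any deployed, regulated product actually retrains – clinician confirmations or corrections can be treated as labeled examples and used to periodically fine-tune the model. In practice this typically happens offline, under validation, rather than live.

```python
# Sketch of a clinician-in-the-loop refinement cycle. The buffer, labels,
# and update schedule are illustrative assumptions.
import torch

feedback_buffer: list[tuple[torch.Tensor, float]] = []

def record_clinician_feedback(image: torch.Tensor, confirmed_abnormal: bool) -> None:
    """Store the clinician's final read as a training label."""
    feedback_buffer.append((image, 1.0 if confirmed_abnormal else 0.0))

def refine(model: torch.nn.Module, lr: float = 1e-5) -> None:
    """One small fine-tuning pass over accumulated clinician feedback."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.BCEWithLogitsLoss()
    model.train()
    for image, label in feedback_buffer:
        optimizer.zero_grad()
        logit = model(image.unsqueeze(0)).squeeze()
        loss = loss_fn(logit, torch.tensor(label))
        loss.backward()
        optimizer.step()
    model.eval()
    feedback_buffer.clear()
```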
For patients, the effects of AI are increasingly visible at the front door of the healthcare system. In regions with clinician shortages or high out-of-pocket costs, chatbots and symptom checkers are becoming the first point of contact. AI promises immediacy, anonymity, and accessibility – qualities especially valued by younger, digitally native populations.
However, the usefulness of these tools depends entirely on the quality of their underlying data.
And that’s where the optimism meets its limits.
Healthcare may generate more than two zettabytes of data each year[2], but the vast majority remains untapped – locked in incompatible systems, missing key demographic details, or simply never analyzed.