February 27, 2024 – When you send your doctor a message about an appointment, a prescription refill, or a health concern, is it a person or artificial intelligence responding? In some cases it's hard to tell.
AI can now be involved in your healthcare without you even realizing it. For example, many patients message their doctor through a web-based patient portal.
“And there are some hospital systems that are experimenting with having AI create the first draft of the answer,” I. Glenn Cohen said during a webinar hosted by the National Institute for Health Care Management Foundation.
Handing off administrative tasks is a comparatively low-risk way to introduce artificial intelligence into health care, said Cohen, an attorney and director of the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School in Boston. The technology can free up staff time now spent answering routine calls and messages.
But when the technology answers clinical questions, should patients be told that AI generated the initial response? Do patients need to sign a separate consent form, or would that go too far?
What if a physician makes a recommendation based partly on AI?
Cohen shared an example. A patient and a physician are deciding which embryos from in vitro fertilization (IVF) should be implanted. The doctor makes recommendations based partly on molecular images and other factors assessed by AI or a machine learning system, but doesn’t disclose that. “Is it a problem that your doctor didn’t tell you?”
Where are we in terms of liability?
Lawsuits can be one way to gauge acceptance of new technologies. “There have been shockingly few cases on medical AI liability,” Cohen said. “Most of the ones we've actually seen have been around surgical robots, where it's probably not really the AI that's causing the problems.”
It is possible that cases are being settled out of court, Cohen said. “But overall, I think given the data, people are probably overestimating the importance of liability issues in this area. But we should still try to understand it.”
Cohen and colleagues analyzed the legal issues surrounding AI in a 2019 viewpoint in the Journal of the American Medical Association. The bottom line for doctors: As long as they follow the standard of care, they’re probably protected, Cohen said. The safest way to use medical AI, in terms of liability, is to use it to confirm decisions rather than to try to improve care.
Cohen warned that using AI could become the standard of care at some point in the future. If that happens, there may be a liability risk in not using AI.
Insurers depend on AI
Insurance company GuideWell/Florida Blue is already introducing AI and machine learning models into its interactions with members, said Svetlana Bender, PhD, the company's vice president of AI and behavioral science. Models already identify plan members who could benefit from tailored education, such as guidance toward healthcare facilities other than the emergency room when appropriate. AI is also enabling faster prior authorization.
“We were able to optimize the reviews of 75% of prior authorization requests with AI,” Bender said.
AI's greater efficiency could also result in cost savings for the healthcare system overall, she said. “It is estimated that we could see somewhere between $200 [billion] and $360 billion in savings per year.”
Dealing with complexity
Beyond handling administrative tasks and recommending more personalized interventions, AI could help providers, patients, and payers manage a deluge of healthcare data.
“The amount and complexity of medical and scientific data, as well as the amount and complexity of patient data itself, have increased at an unprecedented and tremendous rate,” said Michael E. Matheny, MD, director of the Center for Improving the Public's Health through Informatics at Vanderbilt University Medical Center in Nashville.
“We really need help managing all of this information,” said Matheny, who is also a professor of biomedical informatics, medicine, and biostatistics at Vanderbilt.
In most current applications, humans review AI outputs, whether the tool aids drug discovery, image processing, or clinical decision support. But in some cases, the FDA has approved AI applications that work without a doctor's interpretation, Matheny said.
Building in health equity
Some experts are looking to AI to speed up efforts to create a more equitable healthcare system. For instance, when algorithms are developed, the training data fed into AI and machine learning systems needs to better represent the U.S. population.
And then there is the push for more equitable access. “Do all patients who contribute data to build the model receive its benefits?” Cohen asked.