Discussions about artificial intelligence have proliferated recently as more people have access to programs that can make art or answer questions.
In the health care industry, the move to using AI already is well underway.
Dr. Jose Morey, an Eastern Virginia Medical School radiologist, has been an AI consultant for the White House Office of Science and Technology Policy and the United Nations, and has been involved with NASA initiatives. For example, NASA iTech evaluated a fully autonomous surgical robot being developed to perform appendix removals and similar surgeries.
Public health predictions, streamlined administration and new drug discovery are among the ways AI already is in use, he said.
“It’s just data and mathematics,” Morey said. “That’s it. It’s data and fancy math and it spits out a solution at the end.”
He said that’s why it’s important to have engineers and mathematicians build the algorithms, and subject matter experts who know what kind of data to feed in and how to read what comes out.
“If you have bad data, you’re going to have a bad solution,” Morey said.
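Morey’s point is the classic garbage-in, garbage-out problem, and it is easy to demonstrate. The minimal sketch below, using entirely synthetic data rather than any real clinical system, trains the same simple model twice — once on clean labels, once on partially corrupted ones — and compares held-out accuracy:

```python
# Toy illustration of "bad data, bad solution": the same algorithm trained
# on clean labels versus partially corrupted ones. All data is synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
flipped = rng.random(len(y_tr)) < 0.40        # corrupt 40% of training labels
y_bad = np.where(flipped, 1 - y_tr, y_tr)

for name, labels in [("clean data", y_tr), ("bad data", y_bad)]:
    model = LogisticRegression(max_iter=1000).fit(X_tr, labels)
    print(f"{name}: held-out accuracy = {model.score(X_te, y_te):.2f}")
```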
About two weeks ago, the American Medical Association voted to develop recommendations around AI aimed at ensuring the technology reduces administrative burdens, provides accurate information and improves the kind of medical advice AI may one day offer.
“AI holds the promise of transforming medicine,” said AMA Trustee Dr. Alexander Ding in a June 13 release. “We don’t want to be chasing technology. Rather, as scientists, we want to use our expertise to structure guidelines and guardrails to prevent unintended consequences, such as baking in bias and widening disparities, dissemination of incorrect medical advice, or spread of misinformation or disinformation.”
However, just a day later, the AMA released another statement calling for more oversight of AI’s use by insurance companies in reviewing patient claims and requests for prior authorization.
Drug discovery is another area AI can streamline, according to Morey, potentially revolutionizing a process that takes billions of dollars and decades and still sometimes fails to produce an effective remedy.
Riverside Regional Medical Center has been using AI for more than 10 years for health prediction and administrative needs, according to Dr. Charles O. Frazier, senior vice president and chief medical information and innovation officer.
“Though many hear AI and think of ChatGPT, there are various forms of the technology, and the systems in place at Riverside are ones that have gone through careful testing and validation,” Frazier said in an email. “For example, one form of artificial intelligence that we employ at Riverside includes several cognitive computing or machine learning models to predict a variety of clinical conditions, including sepsis, clinical deterioration in the hospital, opioid use disorder, risk of readmission, etc.”
He said Riverside is not using AI directly in clinical work beyond prediction modeling, though there could be clinical uses in the future. AI also is used at Riverside to automate billing and accounting processes, Frazier said.
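Riverside has not published its model internals, but the general shape of such a prediction model is well established. Below is a minimal, hypothetical sketch: a logistic regression trained on invented vital-sign features that emits a sepsis risk score. Every feature, coefficient and threshold here is an assumption for illustration, not Riverside’s system:

```python
# Hypothetical sketch of a clinical risk-prediction model; features,
# thresholds and relationships are invented, not Riverside's system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000
heart_rate = rng.normal(85, 15, n)       # beats per minute
temperature = rng.normal(37.0, 0.8, n)   # degrees Celsius
wbc = rng.normal(8.0, 3.0, n)            # white blood cells, 10^9/L

# Assumed ground truth: sepsis risk rises with abnormal vitals.
logit = 0.05 * (heart_rate - 85) + 1.2 * (temperature - 37) + 0.2 * (wbc - 8) - 2.0
sepsis = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([heart_rate, temperature, wbc])
model = LogisticRegression(max_iter=1000).fit(X, sepsis)

# Score a new (hypothetical) patient; high scores get clinician review.
patient = np.array([[112.0, 38.6, 14.5]])
risk = model.predict_proba(patient)[0, 1]
print(f"Predicted sepsis risk: {risk:.0%}", "-> alert care team" if risk > 0.5 else "")
```

In practice, a model like this would be trained on historical records and validated prospectively before any alert reached a clinician — the “careful testing and validation” Frazier describes.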
AI’s ability to model health outcomes also is being put to use by Vienna-based ClearForce to help soldiers and veterans. The company is developing a model to help health providers and the military identify veterans and service members at elevated risk of suicide.
The company previously worked with Oklahoma and is now working with Virginia, according to retired Marine Col. Mike Hudson, ClearForce’s vice president of insider threat prevention and suicide prevention.
Essentially, AI helps the company flip the model of suicide prevention, Hudson said. The usual model requires individuals to realize they need help, then ask for and find it. The AI model instead identifies individuals at higher suicide risk so health providers and organizations can reach out and ask whether that soldier or veteran is struggling.
To do this, the company analyzes suicide risk factors, partnering with states to study data around deaths in order to identify those factors.
“We can then backwards (work) through that and look at the indicators that took them to that tragic outcome,” Hudson said.
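In machine learning terms, “working backwards” typically means fitting a model to outcome-labeled records and then asking which features carried the signal. The sketch below shows that pattern on entirely invented data; the feature names are hypothetical stand-ins, not ClearForce’s actual inputs:

```python
# Sketch of "working backwards" from outcomes to indicators: fit a model
# on outcome-labeled records, then rank which features carried the signal.
# Feature names and data are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["recent_job_loss", "prior_er_visits", "social_isolation_score",
            "missed_appointments", "age_normalized"]
rng = np.random.default_rng(7)
X = rng.normal(size=(3000, len(features)))

# Assumed ground truth: outcome driven mostly by the first three features.
logit = 1.5 * X[:, 0] + 1.0 * X[:, 1] + 0.8 * X[:, 2] - 2.0
y = rng.random(3000) < 1 / (1 + np.exp(-logit))

model = LogisticRegression(max_iter=1000).fit(X, y)
ranked = sorted(zip(features, model.coef_[0]), key=lambda t: -abs(t[1]))
for name, coef in ranked:
    print(f"{name:>24}: {coef:+.2f}")  # larger magnitude = stronger indicator
```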
Similarly, University of Virginia researchers received $5.9 million this year to study AI’s use with patients. The school has been using data to help clinicians treat patients for about 20 years, starting by identifying deadly blood infections in premature infants, according to Dr. Randall Moorman, a UVA cardiologist.
“We developed numerical algorithms, let’s call that machine learning,” he said.
The machine learning reduced death rates by 20% across nine neonatal ICUs where it was implemented, Moorman said. In the years since, the researchers have expanded the principle, implementing machine learning for early detection of lung failure, deterioration and more.
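The original UVA work drew on continuous heart-rate monitoring of premature infants, where abnormally low variability can be an early sign of infection. The sketch below is not the UVA algorithm — only a toy version of the early-warning idea, with an invented data stream, window size and threshold:

```python
# Not the UVA algorithm; a toy version of the early-warning idea: track a
# rolling statistic over a simulated vital-sign stream and alert on drift.
import numpy as np

rng = np.random.default_rng(1)
# Simulated infant heart-rate stream whose variability collapses halfway,
# the kind of change that can precede deterioration.
hr = np.concatenate([rng.normal(140, 8, 300), rng.normal(150, 2, 300)])

WINDOW = 60       # samples per rolling window (illustrative)
THRESHOLD = 4.0   # variability floor that triggers an alert (illustrative)

for t in range(WINDOW, len(hr) + 1, 30):
    variability = hr[t - WINDOW:t].std()
    if variability < THRESHOLD:
        print(f"t={t}: low heart-rate variability ({variability:.1f}) -> notify clinician")
```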
UVA researchers join researchers from 13 other centers in the Bridge2AI program, which will provide data from 100,000 ICU patients for developers to build models that improve health outcomes.
Racial disparities can be narrowed or widened depending on how AI is implemented and used, according to Moorman, Morey and a panel on the future of health care at the Richmond Health Summit earlier this month.
Moorman said it is the job of AI researchers and developers to ensure the technology is equally accurate for everybody.
“So in medicine, my feeling is there’s always going to be somebody smart and informed in between the computer and the patient and that I think is a great safeguard in using artificial intelligence in medicine,” Moorman said.
Researchers such as Moorman’s colleague Ishan Williams and Ismail El Moudden of Eastern Virginia Medical School are looking at using AI to head off these potential inequities. El Moudden’s project on using AI to reduce cardiovascular disease disparities in the state received an award from the American Heart Association.
Morey said AI’s use by insurance companies has resulted in situations where care is denied based on outcomes presumed from data such as ZIP codes.
“If you have biased data, you could have a biased AI output and that’s something you have to be aware of,” he said.
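The ZIP code example is a textbook proxy problem: a feature that tracks group membership lets a model reproduce historical bias even when medical need is identical. The toy audit below, on entirely synthetic data, shows the pattern and the kind of per-group check that can surface it:

```python
# Toy audit of the proxy problem Morey describes: when ZIP code tracks a
# demographic group, a model trained on biased approvals reproduces the
# bias even though medical need is identical. Entirely synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 10000
group = rng.integers(0, 2, n)              # stand-in demographic label
zip_feature = group.astype(float)          # ZIP code acting as a perfect proxy
need = rng.normal(0, 1, n)                 # true medical need, same for both groups

# Assumed historical labels: past approvals were biased against group 1.
approved = rng.random(n) < 1 / (1 + np.exp(-(need - 1.5 * zip_feature)))

X = np.column_stack([need, zip_feature])
model = LogisticRegression(max_iter=1000).fit(X, approved)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: predicted approval rate = {pred[group == g].mean():.0%}")
```

Dropping the proxy feature alone rarely fixes this, since other correlated fields can stand in for it; auditing outcomes per group, as above, is the more direct check.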
Like the stethoscope, AI can play a role in helping providers care for patients, but is ultimately “simply a tool in the doctor’s bag,” Morey said.
“AI has a lot of potential, 1,000%. And it’s doing a lot of good,” he said. “We have to understand its limitations and you still need humans in there to do that.”
A correction was made on June 27, 2023: Due to a reporting error, an earlier version of this article incorrectly stated Col. Mike Hudson’s name. His first name is Michael, not Mark.
Ian Munro, 757-447-4097, ian.munro@virginiamedia.com