Deployment of generative artificial intelligence technologies could ease administrative demands on doctors and assist with clinical decision-making in the future, an AI healthcare expert with the Department of Veterans Affairs (VA) said on Oct. 16.

Speaking at the 2024 Google Public Sector Summit in Washington, Kaeli Yuen, data and AI health product lead in VA’s Office of the Chief Technology Officer, said that while GenAI remains in the pre-pilot stage at the agency, the technology can help ease healthcare burdens and manage clinical data.

A top VA priority is using AI scribes, which can transcribe clinical encounters and generate notes in medical settings, allowing doctors to use their time more efficiently by no longer needing to manually write and review those notes, Yuen said.

“Now they can open up their schedules and have more access, and it’s hopefully easier for patients to get an appointment more quickly,” said Yuen, noting that the VA is hoping to move forward with piloting by the end of the year.  

Earlier this summer, the VA’s Strategic Acquisition Center announced plans to issue sole-source awards to Abridge AI, Inc., and Nuance Communications Inc. to develop cloud-based ambient scribe pilots. The awards were announced after the two companies won the VA’s AI Tech Sprint.

Other areas of interest include using generative AI to extract concepts from free-text clinical notes – typed notes from medical practitioners – to improve clinical decision-making.

Most clinical knowledge and data are stored in free-text notes, which are largely inaccessible to researchers, according to the National Institutes of Health.

“There’s some astronomical volume of new medical knowledge every year, every month, every day, and it’s not realistic for a human being to keep up with that, be able to interpret it, and apply it to their practice,” said Yuen. “So that way of practicing medicine is not going to continue to work.”

Yuen said that the “natural evolution” of medicine includes involving AI tools in decision-making – which brings its own ethical challenges, such as what happens when an AI tool is wrong or generalizes poorly.

“I’m interested in the potential of predictive machine learning methods […] but I am also very cautious of it and generalizing the results of a model trained not specific to a certain population and certain environment to then use that to make a decision about an individual from a slightly different context,” she said.  

While AI is unlikely to ever deliver a diagnosis to a patient, serving instead as a tool in the diagnosis and data management process, VA is also piloting an AI tool for employees who directly serve veterans. That tool can help answer veterans’ questions quickly and accurately as they transition from military to veteran life.

Yuen said that the VA’s general approach to AI has been conservative until risks – such as AI hallucinations – can be addressed. She added that while the VA wants to be careful, it also doesn’t want to “throttle innovation.”

“Our office is responsible for AI governance in addition to AI pilots, and one feature of our AI governance process is that we are not a blocker, we are not a gate to a pilot going live,” said Yuen, noting that all VA AI use cases must undergo a governance process to mitigate risk.  
