Tech & Health

January 4, 2018

Project seeks to help people interact with medical records



(iStock image)

Virtual assistant software has taken off. Amazon has Alexa, Apple has Siri, Google has Google Assistant, Microsoft has Cortana.

A team at Vanderbilt University Medical Center (VUMC) is creating voice-controlled virtual assistant software to help people interact with electronic health records (EHR).

They call their project EVA, short for EHR Voice Assistant. Project leader Yaa Kumah-Crystal, MD, MPH, MA, MS, assistant professor of Biomedical Informatics and Pediatrics, uses her office computer to demonstrate a prototype. She pulls up a patient’s electronic record and asks EVA a question.

“What was the last sodium?”

EVA instantly transcribes and displays the query, then retrieves the clinical test value and indicates whether the result is normal and where it falls within the reference range for sodium. The prototype responds with text, but EVA will eventually acquire a voice of its own.
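The article does not describe EVA's internals, but the behavior shown in the demo — flagging whether a lab value is normal and reporting where it sits within its range — can be sketched in a few lines. The function name and the sodium range below are illustrative assumptions, not details from the project:

```python
def classify_result(value, low, high):
    """Classify a lab value against its reference range and report
    where it falls as a fraction of that range (0.0 = low end, 1.0 = high end)."""
    if value < low:
        status = "low"
    elif value > high:
        status = "high"
    else:
        status = "normal"
    position = (value - low) / (high - low)
    return status, position

# Serum sodium is commonly reported against a range of about 135-145 mmol/L.
status, position = classify_result(140, 135, 145)
# status is "normal"; position is 0.5, i.e. mid-range
```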

“Voice assistants will revolutionize the way people interact with the EHR,” said Kumah-Crystal. “Right now our main focus is the way providers and people on the medical side interact with the computer, but it can be broadened out eventually to encompass patients and anyone else.

“We know there are certain times when it’s going to be really efficient and effective to have information that you can query by voice. For example, if I’m driving to work, it’s going to be useful to ask ‘What patients am I going to be seeing today?’”

Another example would be ordering labs while rounding in the hospital.

“What if you could say, ‘Let’s order a BMP’ or ‘Let’s order a CBC,’ and it’s just done? The goal is to make it as natural as having an actual conversation with a really useful intern,” Kumah-Crystal said.

The team is testing various commercial software packages for use as components of EVA, said Peter Shave, executive director of HealthIT systems and architecture. For each voice query the EVA prototype first randomly selects a natural language processing service to convert the query to text. After that processing occurs, “Another component converts the text into meaning, into a model that we then can program from, to do stuff in the EHR. It’s this middle component that’s the more recent advancement that’s made all these at-home devices possible,” Shave said.
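The two-stage design Shave describes — a speech-to-text service first, then a component that converts the transcript into a structured model the EHR layer can act on — can be sketched as follows. The function names, the pattern rules, and the intent model are illustrative assumptions; the actual prototype uses commercial services for both stages:

```python
import re

def speech_to_text(audio):
    """Stand-in for a commercial speech-to-text service (the first stage).
    In this sketch the 'audio' is already a transcript string."""
    return audio

def parse_intent(transcript):
    """Second stage: convert the transcript into a structured model
    that downstream code can act on in the EHR."""
    text = transcript.lower()
    m = re.match(r"what was the last (\w+)\??", text)
    if m:
        return {"intent": "query_last_result", "analyte": m.group(1)}
    m = re.match(r"let'?s order an? (\w+)", text)
    if m:
        return {"intent": "order_lab", "panel": m.group(1).upper()}
    return {"intent": "unknown"}

intent = parse_intent(speech_to_text("What was the last sodium?"))
# intent is {"intent": "query_last_result", "analyte": "sodium"}
```

It is this second stage — robust mapping from free-form language to structured intent — that Shave identifies as the recent advance behind consumer voice assistants; real systems use trained language-understanding models rather than hand-written patterns like these.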

The team has begun testing which natural language processors and parsing engines are most accurate, using recordings of people making EVA queries.

“From there we can decide what the best-in-class components would be to build out for EVA,” Kumah-Crystal said.
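The article does not say how the team scores the components, but the standard way to compare speech-to-text engines against reference recordings is word error rate: the word-level edit distance between the reference transcript and the engine's output, normalized by the reference length. A minimal sketch:

```python
def word_error_rate(reference, hypothesis):
    """Word error rate: Levenshtein distance over word sequences,
    normalized by the length of the reference transcript."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # Dynamic-programming edit distance (insertions, deletions, substitutions).
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

wer = word_error_rate("what was the last sodium", "what was the last sedan")
# wer is 0.2: one substituted word out of five
```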

“Google, Amazon, Microsoft and others offer tool kits and components that can be used for this kind of thing,” said Dan Albert, associate director of HealthIT product development. “We’ve done a bit of work to evaluate those, and the next phase will be to build something to try in real use with a small number of our providers. And that’s one of the cool things about being here at Vanderbilt, that we have a development team that can work hand-in-hand with providers to quickly iterate new technology.”

By February the team plans to begin eliciting feedback from Vanderbilt clinicians, beginning with clinicians who work in outpatient areas. Any Vanderbilt clinician who would like to help test and evaluate EVA should contact Kumah-Crystal by email (yaa.kumah@vanderbilt.edu).

The EVA team is also advising Epic Systems Corp., which is building its own EHR voice assistant. Epic is the vendor for Vanderbilt’s new EHR system.