By Michael Vlessides

SAN FRANCISCO—“Jenkins, what’s the patient’s blood pressure?”

If the efforts of a multicenter research team come to fruition, a fully automated—and nonhuman—answer to that question may one day soon be coming to an OR near you. The investigators are developing an anesthesia-specific intraoperative voice assistant they hope will unfetter clinicians from keyboards and screens in the OR, along the way improving situational awareness and patient outcomes.

“As anesthesiologists, we have two roles in the operating room,” said Nathan Goergen, MD, PhD, a resident at the University of Nebraska Medical Center, in Omaha. “First, we need to maintain situational awareness to monitor our patient to make sure we’re providing optimal care. But we also have to document what we’re doing through the electronic health record (EHR), which distracts us from that primary objective.”

The answer to that conundrum is Jenkins, a wearable voice assistant that may eventually allow clinicians the freedom to focus on the patient while still fulfilling charting requirements. As part of that development effort, the researchers paired inexpensive Bluetooth headsets with cloud computing technologies such as Google speech-to-text and the Mimic text-to-speech engine, along with several open-source software components, all integrated using the Python programming language.
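Jenkins’ actual source code is not public, but the architecture described above suggests a simple loop: transcribe audio captured from the headset, route the request, and speak a reply. The following minimal sketch illustrates that flow with stand-in functions; every name and the canned transcript are assumptions for illustration, not the team’s implementation.

```python
# Hypothetical sketch of a wearable voice-assistant loop. The speech-to-text
# and text-to-speech calls are stubbed out; in a real system they would wrap
# cloud or local engines (e.g., Google speech-to-text, Mimic).

def transcribe(audio_bytes: bytes) -> str:
    """Stand-in for a cloud speech-to-text call."""
    return "hey jenkins what are the patient's allergies"  # canned example

def synthesize(text: str) -> bytes:
    """Stand-in for a text-to-speech engine such as Mimic."""
    return text.encode("utf-8")

def handle(utterance: str, chart: dict) -> str:
    """Route a transcribed request to a simulated chart lookup."""
    if "allerg" in utterance:
        return "Allergies: " + ", ".join(chart["allergies"])
    return "Sorry, I didn't understand that."

chart = {"allergies": ["penicillin", "latex"]}  # simulated patient data
reply = handle(transcribe(b""), chart)
print(reply)  # Allergies: penicillin, latex
```

In practice the transcription step would stream microphone audio to the recognition service, and the reply would be played back through the same headset.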

“We also hope to provide some closed-loop feedback, too,” Dr. Goergen said. “This is intended to help prevent medical errors and provide high-priority alerts for things that we may not be focusing on at that moment. It also is designed to provide time-sensitive feedback and decision support feedback throughout the case.”

The speech recognition algorithms programmed into the system are designed for flexible and natural interaction. This would allow users to make requests in different ways, such as “Hey, Jenkins, what are the patient’s allergies?” or “Hey, Jenkins, allergies?”

As Dr. Goergen reported at the 2023 annual meeting of the American Society of Anesthesiologists (abstract A1024), the latest iteration of Jenkins can chart up to two medications with doses in a single sentence, and can assess for contraindications such as allergies, intolerances and hepatic or renal clearance issues. The system will also report potential problems and provide confirmation prompts before charting the drug.
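Extracting two drug/dose pairs from one spoken sentence could be done with pattern matching over the transcript. The rough sketch below assumes a fixed drug vocabulary and simple units; the actual Jenkins parser is not described in this level of detail.

```python
import re

# Rough illustration of pulling up to two drug/dose pairs from a transcribed
# sentence. The drug list, units, and regex are assumptions for illustration.

DRUGS = ["propofol", "fentanyl", "succinylcholine", "rocuronium"]

def parse_medications(sentence: str) -> list[tuple[str, str]]:
    pattern = r"\b({})\b\s+(\d+(?:\.\d+)?)\s*(mg|mcg|g)".format("|".join(DRUGS))
    matches = re.findall(pattern, sentence.lower())
    return [(drug, f"{dose} {unit}") for drug, dose, unit in matches][:2]

print(parse_medications("Jenkins, chart propofol 200 mg and fentanyl 100 mcg"))
# [('propofol', '200 mg'), ('fentanyl', '100 mcg')]
```

A deployed system would need to handle spoken-number variants ("two hundred milligrams") and ambiguous drug names, which is where the confirmation prompt before charting becomes important.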

“So if, for example, I’m going to push succinylcholine, I would tell Jenkins to chart it for me before I actually do so,” he explained. “Jenkins will alert me if the patient’s chart includes malignant hyperthermia or has a recent lab value reflecting dangerously high potassium levels.”
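The alert-then-confirm flow Dr. Goergen describes could be expressed as a set of per-drug contraindication rules checked against the chart before the drug is documented. In this sketch the chart fields, the 5.5 mEq/L potassium threshold, and the rule structure are all invented for illustration.

```python
# Hypothetical contraindication check before charting a drug. The chart
# fields and the potassium threshold are assumptions, not clinical guidance.

CONTRAINDICATIONS = {
    "succinylcholine": [
        lambda chart: "malignant hyperthermia" in chart.get("history", []),
        lambda chart: chart.get("labs", {}).get("potassium", 0) > 5.5,  # mEq/L
    ],
}

def check_before_charting(drug: str, chart: dict) -> str:
    alerts = [rule(chart) for rule in CONTRAINDICATIONS.get(drug, [])]
    if any(alerts):
        return f"Warning: possible contraindication to {drug}. Chart anyway?"
    return f"Confirm: chart {drug}?"

chart = {"history": ["malignant hyperthermia"], "labs": {"potassium": 4.1}}
print(check_before_charting("succinylcholine", chart))
# Warning: possible contraindication to succinylcholine. Chart anyway?
```

As Dr. Goergen notes later in the session, such output is advisory: the clinician interprets the warning and decides whether to proceed.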

In addition to fetching laboratory data, Jenkins can chart intraoperative volumes, standard events and quick notes, as well as complex documentation such as intubation notes. Of note, the system will also prompt providers for any missing pieces of information before reading back the finalized note to be charted.
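Prompting for missing pieces of a structured note before read-back could work as a completeness check against a note template. The field names below are assumptions, not Jenkins’ actual intubation-note template.

```python
# Illustrative sketch of prompting for missing note fields before read-back.
# The required fields are hypothetical examples for an intubation note.

INTUBATION_FIELDS = ["blade", "tube_size", "attempts", "cormack_lehane_grade"]

def missing_fields(note: dict) -> list[str]:
    return [f for f in INTUBATION_FIELDS if f not in note]

def finalize(note: dict) -> str:
    gaps = missing_fields(note)
    if gaps:
        return "Please provide: " + ", ".join(gaps)
    return "Read-back: " + "; ".join(f"{k}={v}" for k, v in note.items())

note = {"blade": "Mac 3", "tube_size": "7.0", "attempts": 1}
print(finalize(note))  # Please provide: cormack_lehane_grade
```

Once every required field is present, the assistant reads the finalized note back for confirmation before it is committed to the record.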

Finally, Jenkins has been tied into the institution’s hospital paging and emergency notification system. As such, users can simply speak into the headset to call for help or request supplies such as a fiberoptic scope or ultrasound. In the future, Dr. Goergen and his colleagues would like to incorporate artificial intelligence into the system so that it can suggest potential treatment plans for patients based on EHR data.

Although Jenkins is still very much in a developmental phase, the researchers believe it may one day help to improve intraoperative care by decreasing the burden of EHR interaction and clinician workload; decreasing time to critical interventions; facilitating real-time decision support; and preempting potential medical errors.

The presentation drew robust discussion from the audience members gathered here, one of whom asked whether Jenkins is pulling live data from the institution’s EHR.

“Right now Jenkins works on simulated data from simulated patients,” Dr. Goergen replied. “But the software is designed so that once we get that access to the EHR, it would be trivial to finish the last 10 or 20 lines of code to push those data to the system and retrieve patient EHR data.”

Another asked who was responsible for determining and setting the various thresholds in Jenkins that would eventually trigger alerts and alarms.

“Those are all programmed at what are considered to be dangerous levels, and are ideally evidence-based,” Dr. Goergen replied. “But at the end of the day, Jenkins is just giving advice. I’m a trained physician and I can interpret the advice it gives me; I don’t have to do what it tells me to do.”

“Did you face any challenges with voice recognition in the operating room?” asked session co-moderator Amita Kundra, MD, an adjunct assistant professor of anesthesiology, perioperative care and pain medicine at NYU Long Island School of Medicine, in Mineola, N.Y. “For example, during something like an orthopedic case, was Jenkins able to pick up on your instructions?”

“We found that your cheap $20 Bluetooth headset was probably not directional enough to get good results in the operating room,” Dr. Goergen replied. “However, the higher fidelity headsets with a directional microphone performed much better and have a much higher noise rejection.”

This article is from the May 2024 print issue.