Office of Research & Development

Tim Strebel (left) worked with Dr. Christos Makridis, a senior research advisor at VA’s National Artificial Intelligence Institute, to create the COVID-19 prognostic tool. (Photo by Robert Turtil)

New VA tool uses artificial intelligence to predict COVID-19 patient mortality

June 28, 2021

By Mike Richman
VA Research Communications

"The point of the tool is not to make care decisions for clinicians, but to provide them with additional information."

Tim Strebel is no stranger to the spirit of innovation.

Currently a computer programmer focusing on health informatics at the Washington DC VA Medical Center, Strebel has been recognized by VA for his ingenuity.

He’s a two-time winner of the VA “Shark Tank” Award, which honors innovative practices, for developing two software packages to automate the jobs of those who work in prosthetics. The software expedites the billing process for home oxygen users and the ordering of eyeglasses. Both products are used at many VA medical centers. For the eyeglass tool, he also won VA’s Gears of Government Award, which recognizes federal employees and teams whose dedication supports exceptional delivery of key outcomes for the American people.

Strebel, a computer programmer at the DC VA, has earned VA awards for his innovations. (Photo by Robert Turtil)

Now, in what Strebel calls the “most significant” work in his seven-year VA career, he’s developed a tool that uses artificial intelligence to calculate the risk of a COVID-19 patient dying within 120 days of diagnosis. The hope is that clinicians can use those predictions to improve the treatment of their patients. The tool is being piloted at 13 VA medical centers.

Strebel and his colleagues describe the tool in a paper to be published in the journal BMJ Health & Care Informatics.

Artificial intelligence increasingly used in health care

Artificial intelligence (AI) uses computers to simulate human thinking, especially in applications involving large amounts of data. AI is common in the commercial technology sector and is increasingly being used in health care. VA uses AI for purposes such as reducing Veterans’ wait times, identifying Veterans at high risk of suicide, and helping doctors interpret the results of cancer lab tests and choose the best therapies.

Strebel’s tool creates a report that provides AI-generated 120-day mortality risk scores in both inpatient and outpatient settings. The report is based on two sets of models, one for outpatients and one for inpatients.

The first set of models assesses factors known about a patient before he or she enters the hospital. It relies heavily on age, body mass index (BMI), and co-existing health conditions recorded in a Veteran’s electronic health record. Body mass index is a measure of body fat based on height and weight; an unusually high or low BMI may signal health problems.

“It’s no surprise that age and BMI are the most predictive factors for mortality in COVID-19 patients,” Strebel says. “An overwhelming amount of concurrent research confirms this. While a few comorbidities on their own are predictive of mortality, such as diabetes and dementia, we’ve found that the amount of and severity of comorbidities in a patient is the best way to use them to predict mortality.”
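VA has not published the model’s code, but a minimal sketch of how an outpatient-style classifier could be trained on these kinds of factors might look like the following Python example, which uses scikit-learn and entirely synthetic data. The gradient-boosted model, the feature encoding, and all the numbers are illustrative assumptions, not the team’s actual implementation.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 1000

# Hypothetical outpatient features: age, BMI, and a simple comorbidity-burden
# score standing in for the number and severity of co-existing conditions.
X = np.column_stack([
    rng.integers(30, 95, n),   # age in years
    rng.normal(28, 6, n),      # body mass index
    rng.integers(0, 10, n),    # comorbidity burden (synthetic stand-in)
])

# Synthetic 120-day mortality labels, loosely tied to age for illustration only.
y = (rng.random(n) < (X[:, 0] - 30) / 200).astype(int)

outpatient_model = GradientBoostingClassifier(random_state=0)
outpatient_model.fit(X, y)

# Estimated probability of death within 120 days for a hypothetical new patient.
new_patient = np.array([[78, 31.5, 4]])
print(outpatient_model.predict_proba(new_patient)[0, 1])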

A second set of models used to predict inpatient mortality considers many of the same factors from the outpatient models, in addition to Veterans’ lab work and vital signs taken at admission. These extra data points “drastically improve” the accuracy of the model, according to Strebel.

“It would be ideal to have this information for every patient diagnosed with COVID-19,” he says. “But this isn’t practical. Bringing every patient in for these tests could increase the risk of transmission and present logistical challenges. That is why we provide the outpatient models to try and provide clinicians with an additional perspective of who may be at risk based on the information we already know about the patient to promote early treatment.”
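For illustration only, the extra admission data might extend a hypothetical patient’s record roughly as follows. The article names BUN and lymphocytes as strong factors; the other measurements, field names, and units below are assumptions.

# Hypothetical example of admission labs and vital signs added to the
# outpatient features; values, names, and units are invented for illustration.
inpatient_features = {
    # known before admission (outpatient features)
    "age": 78,
    "bmi": 31.5,
    "comorbidity_burden": 4,
    # collected at admission (labs and vitals)
    "bun_mg_dl": 32,              # blood urea nitrogen
    "lymphocytes_k_per_ul": 0.8,  # lymphocyte count
    "spo2_percent": 91,           # oxygen saturation
    "respiratory_rate": 24,
    "temperature_c": 38.4,
}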

However, he adds, “One of the biggest challenges in any AI effort is bias. While age is no doubt one of the leading predictors of death, there are always exceptions. Some patients in their 90s survive COVID-19. Conversely, some really young patients die from COVID-19. In those rare cases, our models may show that older Veterans are at higher risk of death than they may actually be simply because of their age. Usually when this is the case, other strong factors such as BUN [blood urea nitrogen] or lymphocytes may be within a healthy range. To help reduce biased decision making, for each of our models we provide additional models that are mirror copies, except we take age out.”
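A minimal sketch of that “mirror copy” idea, continuing the synthetic outpatient example above and reusing its X, y, and new_patient, might train each model a second time with the age column dropped. Again, this is an illustrative assumption rather than the team’s code.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

AGE_COLUMN = 0  # position of age in the synthetic feature matrix above

model_with_age = GradientBoostingClassifier(random_state=0).fit(X, y)
model_without_age = GradientBoostingClassifier(random_state=0).fit(
    np.delete(X, AGE_COLUMN, axis=1), y
)

# Scoring the same patient both ways shows how much age alone moves the estimate.
p_with = model_with_age.predict_proba(new_patient)[0, 1]
p_without = model_without_age.predict_proba(
    np.delete(new_patient, AGE_COLUMN, axis=1)
)[0, 1]
print(f"risk with age: {p_with:.2f}, risk without age: {p_without:.2f}")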


The BUN test measures the amount of nitrogen in a person’s blood that comes from the waste product urea. It indicates how well one’s kidneys are working. Lymphocytes are one of several types of white blood cells, a key part of the immune system.

‘A significant amount of intellectual curiosity’

The tool produces a score that is a decimal number ranging from zero to one. But a specific score doesn’t necessarily equate with a patient’s chances of living or dying. Instead, a table in the tool allows clinicians to look up a risk score and estimate the probability of a patient’s survival.

“When you train a model, the most commonly used decision point for binary classification would be anything .5 and above,” Strebel explains. “For one of our models, anything .5 and above would suggest that a patient is going to die. Anything below .5, this person’s going to live. That’s generally how that binary classification works. But we’re able to break that binary decision point into a larger scale. That way, you can get a sense of who’s more at risk and who’s less at risk. When modeling something as complex as human biology, there are no perfect models—at least none that I’ve seen. Some Veterans have lived, even after being assigned high-risk scores. That’s just how the cookie crumbles. More often than not, that’s not how it goes, but we want to be more cautious.

“The point of the tool is not to make care decisions for clinicians,” he adds, “but to provide them with additional information. When we train our models, we use 80% of our data for training. The additional 20% of data are set aside as an unbiased cohort of patients that allows us to test roughly how the model will score newly admitted patients. There’s a table in the tool that clinicians can use to look up individual risk scores and see what percentage of patients from this unbiased dataset lived and died. That table is in increments of .05.”
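A rough sketch of how such a lookup table could be built from a held-out cohort, using synthetic data, a scikit-learn classifier, and pandas as stand-ins for the team’s actual pipeline, might look like this:

import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))                              # synthetic features
y = ((X[:, 0] + rng.normal(size=2000)) > 1).astype(int)     # synthetic 120-day mortality labels

# 80% of the data trains the model; the remaining 20% is held out as an
# unbiased cohort, mirroring the split described above.
X_train, X_holdout, y_train, y_holdout = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
scores = model.predict_proba(X_holdout)[:, 1]  # risk scores between 0 and 1

# Bin the holdout risk scores in increments of .05 and report, for each bin,
# how many patients it holds and what share of them died.
bins = np.arange(0.0, 1.05, 0.05)
lookup = (
    pd.DataFrame({"score_bin": pd.cut(scores, bins, include_lowest=True),
                  "died": y_holdout})
    .groupby("score_bin", observed=True)["died"]
    .agg(patients="count", pct_died="mean")
)
print(lookup)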

Strebel created the tool in collaboration with VA’s National Artificial Intelligence Institute (NAII), which was launched in 2019 to advance AI research and development in support of Veterans, their families, survivors, and caregivers. He worked with NAII Director Dr. Gil Alterovitz, a specialist in bioinformatics, and Dr. Christos Makridis, a senior research advisor at NAII. The institute is a joint initiative between VA’s Office of Research and Development and the Secretary’s Center for Strategic Partnerships.

“Tim displayed a significant amount of intellectual curiosity and problem-solving from the start, learning how to do fairly sophisticated programming techniques in just a few months,” Makridis says. “These skills were invaluable in the development of a dashboard that not only contains meaningful statistical measures, but also an accessible user interface.”

Kansas City VA uses prognostic tool the most

The custom dashboard that houses Strebel’s COVID-19 prognostic tool is part of the AI Center for Translation at the DC VA, which was established last year as VA’s first such endeavor. The center carries out pilot programs in artificial intelligence.

All 13 VA sites piloting the tool have access to the dashboard. Clinicians plug the patient’s numbers in, and the dashboard calculates the patient’s risk score. The tool has generated COVID-19 mortality estimates on 93 patients to date at the Kansas City VA Medical Center, the most of any of the 13 sites. The VA Greater Los Angeles Healthcare System is next with 63.

Interestingly, the tool “doesn’t tell doctors a lot of things they don’t already know,” Strebel says. “For instance, if they’re looking at a patient’s blood urea nitrogen and see that it’s high, they may think, ‘Okay, that patient is at risk for kidney failure,’” he notes. “The model is really saying the same thing and is factoring it into the patient’s probability of death much like a clinician would.”

Tool consolidates information into one place

What’s the advantage of the tool if it works with existing data?

“It consolidates all of the information into one place, for one, and doctors can look at risk comparatives to other patients,” Strebel says. “That’s where they get insight in terms of that. As much as I would like to prop this up and say, ‘Oh, this is saving lives,’ I don’t really have any stories or hard evidence of that. Nobody’s emailed me and said, ‘I was using your dashboard. The model showed this patient was at greater risk for mortality. I had missed that, but the model caught it. That helped me to intervene in their care.’”

Dr. Sushant Govindan is a pulmonologist at the Kansas City VA. He agrees with Strebel that the tool “synthesizes a lot of information together in a really efficient way.”

“Maybe a really experienced clinician would know these things, but there are definitely times when the tool would maybe see something,” Govindan says. “There were times when the tool would prompt an investigation for the clinician to check and see. When we were piloting it, we came back to Tim with feedback on this. We made tweaks and changes. But other times, the tool takes things into account that further illuminated some issues with the patient.”

Govindan says the hospital’s palliative care team, which provides end-of-life care, has found the tool to be accurate in calculating the mortality risk of patients within a 120-day period. The data in the tool update once a day at midnight.

“Feedback was generally really good,” he says. “It’s not real time, so it’s a little bit challenging. It’s all based on the data that come in basically midnight to the previous night, so you can’t really use it day of for time-sensitive interventions. For things like rounding in the morning and long-term goals and longer-term outcomes and follow-up, it works really well. So the teams were pretty happy with it.”

The families of the patients never saw the risk score because this was only a pilot project, he adds.

Prognostic tool may serve as model for other AI initiatives

“The palliative care team would talk with the primary team and let them know that the tool was saying, ‘Hey, there’s a possibility this patient’s pretty high risk,’” Govindan says. “The information would be given to the primary team, which may say, ‘Yeah, we were thinking about having goals-of-care conversations anyway. We’ll probably start doing that today.’ That was obviously really necessary. The primary team would go and assess and see what the patients and their families would want. The tool definitely helped with that. We didn’t take the tool itself and show patients and their families. That information was mostly transmitted between our palliative care team and the inpatient hospital and ICU teams.”

The tool, Strebel says, will be used in its pilot form until its computing resources are needed to pilot other models, such as those that can identify and help treat patients with long-term COVID symptoms, including organ failure and chronic lung conditions. The tool could also serve as a model for VA artificial intelligence initiatives beyond COVID-19, such as suicide prevention.

AI has the potential to greatly improve clinical experiences and patient outcomes, Strebel notes, but the results must be accessible, interpretable, and actionable.

“The COVID-19 prognostic tool is only the beginning of a broader series of tools that we are in the process of piloting,” Makridis of NAII says. “Given the complexity and expansiveness of medical history data, it’s almost impossible for clinicians to keep track of everything. Our goal is to empower clinician decision-making by creating AI-driven tools that allow them to assess a patient’s risk factor for a particular outcome and identify the primary contributing factors, thereby allowing them to provide more effective and personalized treatments.

“We’ll learn more about the potential of artificial intelligence as time goes on.”
