Evaluating Ontologically-Aware Large Language Models: An Experiment in Sepsis Prediction

Tracking #: 3890-5104

This paper is currently under review
Authors: 
Lucas Gomes Maddalena
Fernanda Araujo Baião
Tiago Prince Sales
Giancarlo Guizzardi

Responsible editor: 
Aldo Gangemi

Submission type: 
Full Paper
Abstract: 
Early and accurate detection of sepsis during hospitalization is critical, as it is a life-threatening condition with significant implications for patient outcomes. Electronic Health Records (EHRs) offer a wealth of information, including unstructured textual data, which often contains more nuanced insights than structured data. A variety of Natural Language Processing (NLP) methods have been employed to process such textual data, with limited effectiveness. Recent advancements in computational resources have led to the development of Large Language Models (LLMs), which can effectively process vast amounts of text, identifying relationships and patterns between words and structuring them into embeddings. This enables LLMs to extract meaningful insights within specific domains. Despite these advances, LLMs still struggle to capture the real-world semantics of clinical texts, which are critical for understanding the complex interconnections among terms and for ensuring terminological precision. This work proposes a case study using Clinical KB BERT, an approach for embedding clinical notes of ICU patients that incorporates semantic information from the Unified Medical Language System (UMLS) ontology. By integrating domain-specific knowledge from UMLS, Clinical KB BERT aims to improve the semantic understanding of clinical data and thus enhance the predictive performance of the resulting models. The present study compares Clinical KB BERT against Clinical BERT, a widely used model in the healthcare domain. The experimental results demonstrate that semantically enriched embeddings produced a more accurate and less uncertain model for the early prediction of sepsis: the Area Under the Receiver Operating Characteristic Curve (AUC-ROC) increased from 0.826 to 0.853, while the mean predictive entropy over the entire test set decreased from 0.159 to 0.142.
Furthermore, the reduction in mean predictive entropy was even more pronounced in cases where both models made correct predictions, decreasing from 0.148 to 0.129. Notably, these improvements translate into a substantial decrease in the number of false negatives (from 162 to 128, out of 227 septic cases), underscoring the ability of the semantically aware model to reduce missed early diagnoses and improve patient outcomes.
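The abstract reports mean predictive entropy as the model-uncertainty measure. As a point of reference, the following is a minimal sketch of how binary predictive entropy is typically computed from a model's predicted probabilities (natural-log base assumed; this is an illustration, not the authors' exact implementation):

```python
import numpy as np

def predictive_entropy(probs):
    """Binary predictive entropy H(p) = -(p ln p + (1 - p) ln(1 - p)) per sample."""
    # Clip to avoid log(0) for near-certain predictions.
    p = np.clip(np.asarray(probs, dtype=float), 1e-12, 1.0 - 1e-12)
    return -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))

# Hypothetical predicted sepsis probabilities for a small test batch.
probs = np.array([0.05, 0.90, 0.50, 0.20])

# The "mean predictive entropy" reported in the abstract is the average
# of the per-sample entropies over the whole test set.
mean_h = predictive_entropy(probs).mean()
```

Lower mean entropy indicates that the model's predicted probabilities sit closer to 0 or 1 on average, i.e. the model is less uncertain; entropy is maximal (ln 2) at p = 0.5.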