
Extracting Biomedical Factual Knowledge Using Pretrained Language Model and Electronic Health Record Context

Yao, Zonghai
Cao, Yi
Yang, Zhichao
Deshpande, Vijeta
Yu, Hong
Document Type
Conference Paper
Publication Date
2023-04-29
Abstract

Language Models (LMs) have performed well on biomedical natural language processing applications. In this study, we conducted experiments using prompting methods to extract knowledge from LMs as new knowledge bases (LMs as KBs). However, prompting provides only a lower bound for knowledge extraction and performs particularly poorly on biomedical-domain KBs. To make LMs as KBs better match real application scenarios in the biomedical domain, we add EHR notes as context to the prompt to improve this lower bound. We design and validate a series of experiments for our Dynamic-Context-BioLAMA task. Our experiments show that the knowledge possessed by these language models can distinguish correct knowledge from noisy knowledge in EHR notes, and that this distinguishing ability can also serve as a new metric for evaluating the amount of knowledge possessed by the model.
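The core idea can be illustrated with a minimal sketch of prompt-based knowledge probing, once without context and once with an EHR-style note prepended. The model name, prompt template, and example note below are illustrative assumptions, not the exact setup evaluated in the paper.

```python
from transformers import pipeline

# Minimal sketch of probing an LM as a KB via a fill-mask prompt.
# Model, prompt, and context are assumptions for illustration only;
# the paper uses biomedical LMs and real EHR notes.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Zero-context prompt: the "lower bound" for knowledge extraction.
prompt = "Metformin is used to treat [MASK]."
print(fill_mask(prompt)[0]["token_str"])

# Same prompt with a clinical-note-style context prepended,
# mimicking how Dynamic-Context-BioLAMA adds EHR notes as context.
context = ("The patient has a history of type 2 diabetes and was "
           "started on metformin 500 mg twice daily.")
print(fill_mask(context + " " + prompt)[0]["token_str"])
```

Comparing the model's predictions with and without the note gives a rough sense of how added clinical context shifts the extracted knowledge.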

Source

Yao Z, Cao Y, Yang Z, Deshpande V, Yu H. Extracting Biomedical Factual Knowledge Using Pretrained Language Model and Electronic Health Record Context. AMIA Annu Symp Proc. 2023 Apr 29;2022:1188-1197. PMID: 37128373; PMCID: PMC10148358.

PubMed ID
37128373
Rights
Copyright ©2022 AMIA - All rights reserved. This is an Open Access article: verbatim copying and redistribution of this article are permitted in all media for any purpose.