Show simple item record

dc.contributor.author: Chen, Jinying
dc.contributor.author: Zheng, Jiaping
dc.contributor.author: Yu, Hong
dc.date: 2022-08-11T08:09:46.000
dc.date.accessioned: 2022-08-23T16:42:53Z
dc.date.available: 2022-08-23T16:42:53Z
dc.date.issued: 2016-11-30
dc.date.submitted: 2017-03-27
dc.identifier.citation: JMIR Med Inform. 2016 Nov 30;4(4):e40. Link to article on publisher's site: https://doi.org/10.2196/medinform.6373
dc.identifier.doi: 10.2196/medinform.6373
dc.identifier.pmid: 27903489
dc.identifier.uri: http://hdl.handle.net/20.500.14038/40182
dc.description.abstract:
BACKGROUND: Many health organizations allow patients to access their own electronic health record (EHR) notes through online patient portals as a way to enhance patient-centered care. However, EHR notes are typically long and contain abundant medical jargon that can be difficult for patients to understand. In addition, many medical terms in patients' notes are not directly related to their health care needs. One way to help patients better comprehend their own notes is to reduce information overload and help them focus on the medical terms that matter most to them. Targeted educational interventions can then be developed to improve their EHR comprehension and the quality of care.

OBJECTIVE: We aimed to develop a supervised natural language processing (NLP) system called Finding impOrtant medical Concepts most Useful to patientS (FOCUS) that automatically identifies and ranks medical terms in EHR notes based on their importance to the patients.

METHODS: First, we built an expert-annotated corpus. For each EHR note, 2 physicians independently identified medical terms important to the patient. Using the physicians' agreement as the gold standard, we developed and evaluated FOCUS. FOCUS first identifies candidate terms from each EHR note using MetaMap and then ranks the terms using a support vector machine-based learning-to-rank algorithm. We explored rich learning features, including distributed word representations, Unified Medical Language System semantic types, topic features, and features derived from consumer health vocabulary. We compared FOCUS with 2 strong baseline NLP systems.

RESULTS: Physicians annotated 90 EHR notes and identified a mean of 9 (SD 5) important terms per note. Interannotator agreement (Cohen kappa) was .51. The 10-fold cross-validation results show that FOCUS achieved an area under the receiver operating characteristic curve (AUC-ROC) of 0.940 for ranking candidate terms from EHR notes to identify important terms. When term identification was included, FOCUS achieved an AUC-ROC of 0.866 for identifying important terms from EHR notes. Both scores significantly exceeded those of the corresponding baseline systems (P < .001). The rich learning features contributed substantially to FOCUS's performance.

CONCLUSIONS: FOCUS can automatically rank terms from EHR notes based on their importance to patients. It may help in developing future interventions that improve the quality of care.

[A minimal illustrative sketch of the SVM-based ranking step appears after the record below.]
dc.language.iso: en_US
dc.relation: Link to article in PubMed: http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=pubmed&cmd=Retrieve&list_uids=27903489&dopt=Abstract
dc.rights: Copyright © Jinying Chen, Jiaping Zheng, Hong Yu. This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Medical Informatics, is properly cited. The complete bibliographic information, a link to the original publication on http://medinform.jmir.org/, as well as this copyright and license information must be included.
dc.subject: electronic health records
dc.subject: information extraction
dc.subject: learning to rank
dc.subject: natural language processing
dc.subject: supervised learning
dc.subject: Computer Sciences
dc.subject: Health Information Technology
dc.title: Finding Important Terms for Patients in Their Electronic Health Records: A Learning-to-Rank Approach Using Expert Annotations
dc.type: Journal Article
dc.source.journaltitle: JMIR medical informatics
dc.source.volume: 4
dc.source.issue: 4
dc.identifier.legacyfulltext: https://escholarship.umassmed.edu/cgi/viewcontent.cgi?article=3985&context=oapubs&unstamped=1
dc.identifier.legacycoverpage: https://escholarship.umassmed.edu/oapubs/2980
dc.identifier.contextkey: 9928031
refterms.dateFOA: 2022-08-23T16:42:53Z
dc.identifier.submissionpath: oapubs/2980
dc.contributor.department: Department of Quantitative Health Sciences
dc.source.pages: e40
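
The METHODS section of the abstract above describes ranking candidate terms with a support vector machine-based learning-to-rank algorithm. The sketch below is a minimal illustration of that idea using a standard pairwise (RankSVM-style) transform and scikit-learn. The toy data, the 4 feature dimensions, and the pairwise_transform helper are assumptions for illustration only; they do not reproduce the paper's implementation or its actual features (MetaMap candidates, word embeddings, UMLS semantic types, topic and consumer health vocabulary features).

import itertools
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import roc_auc_score

def pairwise_transform(X, y, groups):
    # Turn per-note importance labels into pairwise difference examples:
    # for each pair of candidate terms from the same note with different
    # labels, emit the feature difference with a +/-1 target saying which
    # term should rank higher. Both orderings are emitted so the binary
    # problem is balanced.
    X_pairs, y_pairs = [], []
    for g in np.unique(groups):
        idx = np.where(groups == g)[0]
        for i, j in itertools.combinations(idx, 2):
            if y[i] == y[j]:
                continue  # equal labels carry no ranking signal
            diff = X[i] - X[j]
            sign = 1 if y[i] > y[j] else -1
            X_pairs.extend([diff, -diff])
            y_pairs.extend([sign, -sign])
    return np.asarray(X_pairs), np.asarray(y_pairs)

# Toy data: 6 candidate terms from 2 notes, 4 features each (stand-ins for
# the embedding / semantic type / topic / CHV features named in the abstract).
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))
y = np.array([1, 0, 0, 1, 1, 0])      # 1 = important to the patient
notes = np.array([0, 0, 0, 1, 1, 1])  # which EHR note each term came from

X_p, y_p = pairwise_transform(X, y, notes)
# No intercept: only the direction of the weight vector matters for ranking.
ranker = LinearSVC(C=1.0, fit_intercept=False).fit(X_p, y_p)

# The learned weight vector scores individual terms; higher = ranked higher.
scores = X @ ranker.coef_.ravel()
print("AUC-ROC on the toy labels:", roc_auc_score(y, scores))

Reducing ranking to binary classification over difference vectors is the standard RankSVM construction, and scoring terms with the learned weight vector supports exactly the kind of AUC-ROC evaluation reported in the RESULTS section (here computed with roc_auc_score on meaningless toy labels). The interannotator agreement reported there (Cohen kappa .51) can likewise be computed with sklearn.metrics.cohen_kappa_score.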


Files in this item

Name: fc_xsltGalley_6373_105892_117_ ...
Size: 813.1Kb
Format: PDF

