    Finding Important Terms for Patients in Their Electronic Health Records: A Learning-to-Rank Approach Using Expert Annotations

    Authors
    Chen, Jinying
    Zheng, Jiaping
    Yu, Hong
    UMass Chan Affiliations
    Department of Quantitative Health Sciences
    Document Type
    Journal Article
    Publication Date
    2016-11-30
    Keywords
    electronic health records
    information extraction
    learning to rank
    natural language processing
    supervised learning
    Computer Sciences
    Health Information Technology
    
    Abstract
    BACKGROUND: Many health organizations allow patients to access their own electronic health record (EHR) notes through online patient portals as a way to enhance patient-centered care. However, EHR notes are typically long and contain abundant medical jargon that can be difficult for patients to understand. In addition, many medical terms in patients' notes are not directly related to their health care needs. One way to help patients better comprehend their own notes is to reduce information overload and help them focus on medical terms that matter most to them. Interventions can then be developed by giving them targeted education to improve their EHR comprehension and the quality of care. OBJECTIVE: We aimed to develop a supervised natural language processing (NLP) system called Finding impOrtant medical Concepts most Useful to patientS (FOCUS) that automatically identifies and ranks medical terms in EHR notes based on their importance to the patients. METHODS: First, we built an expert-annotated corpus. For each EHR note, 2 physicians independently identified medical terms important to the patient. Using the physicians' agreement as the gold standard, we developed and evaluated FOCUS. FOCUS first identifies candidate terms from each EHR note using MetaMap and then ranks the terms using a support vector machine-based learn-to-rank algorithm. We explored rich learning features, including distributed word representation, Unified Medical Language System semantic type, topic features, and features derived from consumer health vocabulary. We compared FOCUS with 2 strong baseline NLP systems. RESULTS: Physicians annotated 90 EHR notes and identified a mean of 9 (SD 5) important terms per note. The Cohen's kappa annotation agreement was .51. The 10-fold cross-validation results show that FOCUS achieved an area under the receiver operating characteristic curve (AUC-ROC) of 0.940 for ranking candidate terms from EHR notes to identify important terms. When including term identification, the performance of FOCUS for identifying important terms from EHR notes was 0.866 AUC-ROC. Both performance scores significantly exceeded the corresponding baseline system scores (P < .001). Rich learning features contributed to FOCUS's performance substantially. CONCLUSIONS: FOCUS can automatically rank terms from EHR notes based on their importance to patients. It may help develop future interventions that improve quality of care.
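    The abstract describes a pairwise, SVM-based learn-to-rank step: candidate terms are represented as feature vectors, and a linear model is trained so that terms important to the patient score above unimportant ones. The sketch below illustrates that general technique (RankSVM on pairwise difference vectors) with a toy example; the function names, feature values, and example terms are hypothetical and are not taken from the FOCUS implementation.

    ```python
    # Minimal pairwise ranking-SVM sketch (assumed approach, not FOCUS source code).
    import numpy as np
    from itertools import combinations
    from sklearn.svm import LinearSVC

    def make_pairs(X, y):
        """Build pairwise difference vectors: one training example per
        (important, unimportant) pair of candidate terms."""
        Xp, yp = [], []
        for i, j in combinations(range(len(y)), 2):
            if y[i] == y[j]:
                continue  # ties carry no ordering signal
            Xp.append(X[i] - X[j])
            yp.append(1 if y[i] > y[j] else -1)
        return np.array(Xp), np.array(yp)

    # Toy data: 3 illustrative features per candidate term;
    # label 1 = important to the patient, 0 = not important.
    X = np.array([[0.9, 0.1, 0.8],   # e.g. "metastasis"
                  [0.2, 0.9, 0.1],   # e.g. "per os"
                  [0.8, 0.2, 0.7],   # e.g. "hypertension"
                  [0.1, 0.8, 0.2]])  # e.g. "prn"
    y = np.array([1, 0, 1, 0])

    Xp, yp = make_pairs(X, y)
    model = LinearSVC(C=1.0).fit(Xp, yp)

    # Ranking = sorting candidate terms by the learned linear score w . x
    scores = X @ model.coef_.ravel()
    ranking = np.argsort(-scores)
    print(ranking)  # the two important terms (indices 0 and 2) should rank first
    ```

    In the paper's setting, each row of `X` would instead hold the rich features named in the abstract (distributed word representations, UMLS semantic types, topic features, consumer-health-vocabulary features) computed for MetaMap-extracted candidate terms.
    
    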
    Source
    JMIR Med Inform. 2016 Nov 30;4(4):e40. Link to article on publisher's site
    DOI
    10.2196/medinform.6373
    Permanent Link to this Item
    http://hdl.handle.net/20.500.14038/40182
    PubMed ID
    27903489
    Related Resources
    Link to Article in PubMed
    Rights
    Copyright © Jinying Chen, Jiaping Zheng, Hong Yu. This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Medical Informatics, is properly cited. The complete bibliographic information, a link to the original publication on http://medinform.jmir.org/, as well as this copyright and license information must be included.
    Collections
    UMass Chan Faculty and Researcher Publications


