Show simple item record

dc.contributor.author: Lalor, John P.
dc.contributor.author: Wu, Hao
dc.contributor.author: Chen, Li
dc.contributor.author: Mazor, Kathleen M.
dc.contributor.author: Yu, Hong
dc.date: 2022-08-11T08:09:50.000
dc.date.accessioned: 2022-08-23T16:45:11Z
dc.date.available: 2022-08-23T16:45:11Z
dc.date.issued: 2018-04-25
dc.date.submitted: 2018-06-15
dc.identifier.citation: J Med Internet Res. 2018 Apr 25;20(4):e139. doi: 10.2196/jmir.9380. Link to article on publisher's site: https://doi.org/10.2196/jmir.9380
dc.identifier.issn: 1438-8871 (Linking)
dc.identifier.doi: 10.2196/jmir.9380
dc.identifier.pmid: 29695372
dc.identifier.uri: http://hdl.handle.net/20.500.14038/40634
dc.description.abstract:
BACKGROUND: Patient portals are widely adopted in the United States and allow millions of patients access to their electronic health records (EHRs), including their EHR clinical notes. A patient's ability to understand the information in the EHR depends on their overall health literacy. Although many tests of health literacy exist, none specifically focuses on EHR note comprehension.
OBJECTIVE: The aim of this paper was to develop an instrument to assess patients' EHR note comprehension.
METHODS: We identified 6 common diseases or conditions (heart failure, diabetes, cancer, hypertension, chronic obstructive pulmonary disease, and liver failure) and selected 5 representative EHR notes for each disease or condition. One note that did not contain natural language text was removed. Questions were generated from these notes using the Sentence Verification Technique and were analyzed using item response theory (IRT) to identify a set of questions that represent a good test of ability for EHR note comprehension.
RESULTS: Using the Sentence Verification Technique, 154 questions were generated from the 29 remaining EHR notes. Of these, 83 were manually selected for inclusion in the Amazon Mechanical Turk crowdsourcing tasks, and 55 were ultimately retained following IRT analysis. A follow-up validation with a second Amazon Mechanical Turk task and IRT analysis confirmed that the 55 questions test a latent ability dimension for EHR note comprehension. A short test of 14 items was created alongside the 55-item test.
CONCLUSIONS: We developed ComprehENotes, an instrument for assessing EHR note comprehension built from existing EHR notes, gathered responses using crowdsourcing, and used IRT to analyze those responses, resulting in a set of questions that measure EHR note comprehension. Crowdsourced responses from Amazon Mechanical Turk can be used to estimate item parameters and to select a subset of items for inclusion in the test set using IRT. The final set of questions is the first test of EHR note comprehension.
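The conclusions above state that crowdsourced responses can be used to estimate item parameters and select a subset of items via IRT. The sketch below illustrates that general idea only and is not the authors' pipeline: it fits a two-parameter logistic (2PL) IRT model to a simulated person-by-item matrix of binary responses and keeps items by estimated discrimination. The matrix size, the 2PL model choice, and the 0.8 discrimination threshold are all illustrative assumptions.

import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # logistic function

# Hypothetical sizes: a small simulated Mechanical Turk-style response matrix.
rng = np.random.default_rng(0)
n_respondents, n_items = 100, 10

# Simulate 0/1 answers from a 2PL model so parameter recovery is meaningful.
true_a = rng.uniform(0.5, 2.0, n_items)           # item discriminations
true_b = rng.normal(0.0, 1.0, n_items)            # item difficulties
true_theta = rng.normal(0.0, 1.0, n_respondents)  # respondent abilities
responses = (rng.random((n_respondents, n_items))
             < expit(true_a * (true_theta[:, None] - true_b))).astype(int)

def neg_log_likelihood(params):
    """Joint negative log-likelihood of abilities and 2PL item parameters."""
    theta = params[:n_respondents]
    a = params[n_respondents:n_respondents + n_items]
    b = params[n_respondents + n_items:]
    p = np.clip(expit(a * (theta[:, None] - b)), 1e-9, 1 - 1e-9)
    return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

# Start from neutral values; bound discriminations so they stay positive.
x0 = np.concatenate([np.zeros(n_respondents), np.ones(n_items), np.zeros(n_items)])
bounds = [(-6, 6)] * n_respondents + [(0.05, 5.0)] * n_items + [(-6, 6)] * n_items
fit = minimize(neg_log_likelihood, x0, method="L-BFGS-B", bounds=bounds)

a_hat = fit.x[n_respondents:n_respondents + n_items]
b_hat = fit.x[n_respondents + n_items:]

# Keep items whose estimated discrimination clears an illustrative threshold.
keep = np.where(a_hat > 0.8)[0]
print("items retained:", keep)
print("their difficulties:", np.round(b_hat[keep], 2))

In a selection step like this, the retained items' difficulty estimates (b_hat) could then be used to pick a short form, such as a 14-item test, that spans a range of ability levels.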
dc.language.iso: en_US
dc.relation: Link to Article in PubMed: http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=pubmed&cmd=Retrieve&list_uids=29695372&dopt=Abstract
dc.rights: Copyright © John P Lalor, Hao Wu, Li Chen, Kathleen M Mazor, Hong Yu. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 25.04.2018. This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.subject: crowdsourcing
dc.subject: electronic health records
dc.subject: health literacy
dc.subject: psychometrics
dc.subject: Health Information Technology
dc.subject: Health Services Administration
dc.subject: Health Services Research
dc.subject: Information Literacy
dc.subject: Public Health Education and Promotion
dc.title: ComprehENotes, an Instrument to Assess Patient Reading Comprehension of Electronic Health Record Notes: Development and Validation
dc.type: Journal Article
dc.source.journaltitle: Journal of Medical Internet Research
dc.source.volume: 20
dc.source.issue: 4
dc.identifier.legacyfulltext: https://escholarship.umassmed.edu/cgi/viewcontent.cgi?article=4449&context=oapubs&unstamped=1
dc.identifier.legacycoverpage: https://escholarship.umassmed.edu/oapubs/3438
dc.identifier.contextkey: 12326409
refterms.dateFOA: 2022-08-23T16:45:11Z
dc.identifier.submissionpath: oapubs/3438
dc.contributor.department: Department of Medicine
dc.contributor.department: Meyers Primary Care Institute
dc.source.pages: e139


Files in this item

Name: 0e0b2b0bec4af6d526b81567386066 ...
Size: 795.6 KB
Format: PDF

