ComprehENotes, an Instrument to Assess Patient Reading Comprehension of Electronic Health Record Notes: Development and Validation
dc.contributor.author | Lalor, John P. | |
dc.contributor.author | Wu, Hao | |
dc.contributor.author | Chen, Li | |
dc.contributor.author | Mazor, Kathleen M. | |
dc.contributor.author | Yu, Hong | |
dc.date | 2022-08-11T08:09:50.000 | |
dc.date.accessioned | 2022-08-23T16:45:11Z | |
dc.date.available | 2022-08-23T16:45:11Z | |
dc.date.issued | 2018-04-25 | |
dc.date.submitted | 2018-06-15 | |
dc.identifier.citation | <p>J Med Internet Res. 2018 Apr 25;20(4):e139. doi: 10.2196/jmir.9380. <a href="https://doi.org/10.2196/jmir.9380">Link to article on publisher's site</a></p> | |
dc.identifier.issn | 1438-8871 (Linking) | |
dc.identifier.doi | 10.2196/jmir.9380 | |
dc.identifier.pmid | 29695372 | |
dc.identifier.uri | http://hdl.handle.net/20.500.14038/40634 | |
dc.description.abstract | BACKGROUND: Patient portals are widely adopted in the United States and allow millions of patients access to their electronic health records (EHRs), including their EHR clinical notes. A patient's ability to understand the information in the EHR is dependent on their overall health literacy. Although many tests of health literacy exist, none specifically focuses on EHR note comprehension. OBJECTIVE: The aim of this paper was to develop an instrument to assess patients' EHR note comprehension. METHODS: We identified 6 common diseases or conditions (heart failure, diabetes, cancer, hypertension, chronic obstructive pulmonary disease, and liver failure) and selected 5 representative EHR notes for each disease or condition. One note that did not contain natural language text was removed. Questions were generated from these notes using Sentence Verification Technique and were analyzed using item response theory (IRT) to identify a set of questions that represent a good test of ability for EHR note comprehension. RESULTS: Using Sentence Verification Technique, 154 questions were generated from the 29 EHR notes initially obtained. Of these, 83 were manually selected for inclusion in the Amazon Mechanical Turk crowdsourcing tasks and 55 were ultimately retained following IRT analysis. A follow-up validation with a second Amazon Mechanical Turk task and IRT analysis confirmed that the 55 questions test a latent ability dimension for EHR note comprehension. A short test of 14 items was created along with the 55-item test. CONCLUSIONS: We developed ComprehENotes, an instrument for assessing EHR note comprehension from existing EHR notes, gathered responses using crowdsourcing, and used IRT to analyze those responses, thus resulting in a set of questions to measure EHR note comprehension. Crowdsourced responses from Amazon Mechanical Turk can be used to estimate item parameters and select a subset of items for inclusion in the test set using IRT. The final set of questions is the first test of EHR note comprehension. | |
dc.language.iso | en_US | |
dc.relation | <p><a href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=pubmed&cmd=Retrieve&list_uids=29695372&dopt=Abstract">Link to Article in PubMed</a></p> | |
dc.rights | Copyright © John P Lalor, Hao Wu, Li Chen, Kathleen M Mazor, Hong Yu. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 25.04.2018. This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included. | |
dc.rights.uri | http://creativecommons.org/licenses/by/4.0/ | |
dc.subject | crowdsourcing | |
dc.subject | electronic health records | |
dc.subject | health literacy | |
dc.subject | psychometrics | |
dc.subject | Health Information Technology | |
dc.subject | Health Services Administration | |
dc.subject | Health Services Research | |
dc.subject | Information Literacy | |
dc.subject | Public Health Education and Promotion | |
dc.title | ComprehENotes, an Instrument to Assess Patient Reading Comprehension of Electronic Health Record Notes: Development and Validation | |
dc.type | Journal Article | |
dc.source.journaltitle | Journal of Medical Internet Research | |
dc.source.volume | 20 | |
dc.source.issue | 4 | |
dc.identifier.legacyfulltext | https://escholarship.umassmed.edu/cgi/viewcontent.cgi?article=4449&context=oapubs&unstamped=1 | |
dc.identifier.legacycoverpage | https://escholarship.umassmed.edu/oapubs/3438 | |
dc.identifier.contextkey | 12326409 | |
refterms.dateFOA | 2022-08-23T16:45:11Z | |
html.description.abstract | <p>BACKGROUND: Patient portals are widely adopted in the United States and allow millions of patients access to their electronic health records (EHRs), including their EHR clinical notes. A patient's ability to understand the information in the EHR is dependent on their overall health literacy. Although many tests of health literacy exist, none specifically focuses on EHR note comprehension.</p> <p>OBJECTIVE: The aim of this paper was to develop an instrument to assess patients' EHR note comprehension.</p> <p>METHODS: We identified 6 common diseases or conditions (heart failure, diabetes, cancer, hypertension, chronic obstructive pulmonary disease, and liver failure) and selected 5 representative EHR notes for each disease or condition. One note that did not contain natural language text was removed. Questions were generated from these notes using Sentence Verification Technique and were analyzed using item response theory (IRT) to identify a set of questions that represent a good test of ability for EHR note comprehension.</p> <p>RESULTS: Using Sentence Verification Technique, 154 questions were generated from the 29 EHR notes initially obtained. Of these, 83 were manually selected for inclusion in the Amazon Mechanical Turk crowdsourcing tasks and 55 were ultimately retained following IRT analysis. A follow-up validation with a second Amazon Mechanical Turk task and IRT analysis confirmed that the 55 questions test a latent ability dimension for EHR note comprehension. A short test of 14 items was created along with the 55-item test.</p> <p>CONCLUSIONS: We developed ComprehENotes, an instrument for assessing EHR note comprehension from existing EHR notes, gathered responses using crowdsourcing, and used IRT to analyze those responses, thus resulting in a set of questions to measure EHR note comprehension. Crowdsourced responses from Amazon Mechanical Turk can be used to estimate item parameters and select a subset of items for inclusion in the test set using IRT. The final set of questions is the first test of EHR note comprehension.</p> | |
dc.identifier.submissionpath | oapubs/3438 | |
dc.contributor.department | Department of Medicine | |
dc.contributor.department | Meyers Primary Care Institute | |
dc.source.pages | e139 |