Show simple item record

dc.contributor.author  Keller, L. A.
dc.contributor.author  Mazor, Kathleen M.
dc.contributor.author  Swaminathan, H.
dc.contributor.author  Pugnaire, Michele P.
dc.date  2022-08-11T08:11:04.000
dc.date.accessioned  2022-08-23T17:31:27Z
dc.date.available  2022-08-23T17:31:27Z
dc.date.issued  2000-10-01
dc.date.submitted  2007-10-22
dc.identifier.citation  Acad Med. 2000 Oct;75(10 Suppl):S21-4.
dc.identifier.issn  1040-2446 (Print)
dc.identifier.pmid  11031163
dc.identifier.uri  http://hdl.handle.net/20.500.14038/50738
dc.description.abstract  In recent years, performance assessments have become increasingly popular in medical education. While the term “performance assessment” can be applied to many different types of assessments, in medical education this term usually refers to some sort of simulated patient encounter, such as an objective structured clinical examination (OSCE) or a computer simulation of an encounter. These types of assessments appeal to many educators because the tasks or items used are often seen as more realistic than items on multiple-choice examinations. However, this increased “realism” or apparent authenticity comes at a cost—performance examinations are typically more time-consuming and expensive both to administer and to score. On an OSCE, each encounter with a standardized patient is typically scored as a single item, often resulting in an examinee's completing only four to eight items in a two-hour testing period. In contrast, an examinee might complete 100 to 150 items during a two-hour multiple-choice examination. The fact that performance examinations are typically relatively short means that test users must pay particular attention to the reliability and validity of test scores. Generalizability theory provides a framework for estimating the relative magnitudes of various sources of error in a set of scores. In most performance assessments, both items and raters are potential sources of error. Generalizability theory allows estimation of the error associated with each of these sources separately, as well as the relevant interaction effects. In a generalizability study (G study), the variance in a set of scores is partitioned in a manner similar to that used in the analysis of variance. However, in a G study the emphasis is not on testing for statistical significance, but rather on assessing the relative magnitudes of the variance components. Depending on the study design, different variance components can be estimated. Once the variance components are estimated, additional analyses can be conducted. The purpose of the present study was to examine the impacts of different G-study designs.
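The abstract's description of a G study — partitioning score variance in an ANOVA-like manner and then forming a generalizability coefficient — can be sketched in code. The following is an illustrative assumption (a fully crossed persons × items design with the expected-mean-squares estimators), not code or a design from the study itself:

```python
# Sketch (assumed design, not the paper's): variance-component estimation for a
# fully crossed persons x items (p x i) G study, via expected mean squares from
# a two-way ANOVA without replication.
import numpy as np

def g_study_p_x_i(scores):
    """scores: 2-D array, rows = persons (examinees), columns = items."""
    n_p, n_i = scores.shape
    grand = scores.mean()
    person_means = scores.mean(axis=1)
    item_means = scores.mean(axis=0)

    # Sums of squares for the crossed design
    ss_p = n_i * np.sum((person_means - grand) ** 2)
    ss_i = n_p * np.sum((item_means - grand) ** 2)
    ss_tot = np.sum((scores - grand) ** 2)
    ss_res = ss_tot - ss_p - ss_i  # person x item interaction confounded with error

    # Mean squares
    ms_p = ss_p / (n_p - 1)
    ms_i = ss_i / (n_i - 1)
    ms_res = ss_res / ((n_p - 1) * (n_i - 1))

    # Variance components from expected mean squares (clamped at zero)
    var_res = ms_res
    var_p = max((ms_p - ms_res) / n_i, 0.0)
    var_i = max((ms_i - ms_res) / n_p, 0.0)

    # Relative generalizability coefficient for a test of n_i items
    g_coef = var_p / (var_p + var_res / n_i)
    return {"person": var_p, "item": var_i, "residual": var_res, "g": g_coef}
```

The emphasis, as the abstract notes, is on the relative magnitudes of the components: a large residual (person × item) component relative to the person component yields a low generalizability coefficient, signaling that more items (or raters, in richer designs) are needed.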
dc.language.iso  en_US
dc.relation  <a href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=11031163&dopt=Abstract">Link to article in PubMed</a>
dc.relation.url  http://journals.lww.com/academicmedicine/Fulltext/2000/10001/An_Investigation_of_the_Impacts_of_Different.7.aspx
dc.subject  Analysis of Variance
dc.subject  *Clinical Competence
dc.subject  Computer Simulation
dc.subject  Confidence Intervals
dc.subject  Data Collection
dc.subject  Educational Measurement
dc.subject  Humans
dc.subject  Students, Medical
dc.subject  Life Sciences
dc.subject  Medicine and Health Sciences
dc.subject  Women's Studies
dc.title  An investigation of the impacts of different generalizability study designs on estimates of variance components and generalizability coefficients
dc.type  Journal Article
dc.source.journaltitle  Academic medicine : journal of the Association of American Medical Colleges
dc.source.volume  75
dc.source.issue  10 Suppl
dc.identifier.legacycoverpage  https://escholarship.umassmed.edu/wfc_pp/267
dc.identifier.contextkey  383564
dc.identifier.submissionpath  wfc_pp/267
dc.contributor.department  Meyers Primary Care Institute
dc.contributor.department  Department of Family Medicine and Community Health
dc.contributor.department  Office of Medical Education
dc.source.pages  S21-4

