
dc.contributor.author: Mazor, Kathleen M.
dc.contributor.author: Zanetti, Mary L.
dc.contributor.author: Alper, Eric J.
dc.contributor.author: Hatem, David S.
dc.contributor.author: Barrett, Susan V.
dc.contributor.author: Meterko, Vanessa
dc.contributor.author: Gammon, Wendy L.
dc.contributor.author: Pugnaire, Michele P.
dc.date: 2022-08-11T08:11:04.000
dc.date.accessioned: 2022-08-23T17:31:29Z
dc.date.available: 2022-08-23T17:31:29Z
dc.date.issued: 2007-04-14
dc.date.submitted: 2007-10-22
dc.identifier.citation: Med Educ. 2007 Apr;41(4):331-40. <a href="http://dx.doi.org/10.1111/j.1365-2929.2006.02692.x">Link to article on publisher's site</a>
dc.identifier.issn: 0308-0110 (Print)
dc.identifier.doi: 10.1111/j.1365-2929.2006.02692.x
dc.identifier.pmid: 17430277
dc.identifier.uri: http://hdl.handle.net/20.500.14038/50747
dc.description.abstract: INTRODUCTION: Professionalism is fundamental to the practice of medicine. Objective structured clinical examinations (OSCEs) have been proposed as appropriate for assessing some aspects of professionalism. This study investigated how raters assign professionalism ratings to medical students' performances in OSCE encounters. METHODS: Three standardised patients, 3 doctor preceptors, and 3 lay people viewed and rated 20 videotaped encounters between 3rd-year medical students and standardised patients. Raters recorded their thoughts while rating. Qualitative and quantitative analyses were conducted. Comments about observable behaviours were coded, and relative frequencies were computed. Correlations between counts of categorised comments and overall professionalism ratings were also computed. RESULTS: Raters varied in which behaviours they attended to, and how behaviours were evaluated. This was true within and between rater types. Raters also differed in the behaviours they considered when providing global evaluations of professionalism. CONCLUSIONS: This study highlights the complexity of the processes involved in assigning ratings to doctor-patient encounters. Greater emphasis on behavioural definitions of specific behaviours may not be a sufficient solution, as raters appear to vary in both attention to and evaluation of behaviours. Reliance on global ratings is also problematic, especially if relatively few raters are used, for similar reasons. We propose a model highlighting the multiple points where raters viewing the same encounter may diverge, resulting in different ratings of the same performance. Progress in assessment of professionalism will require further dialogue about what constitutes professional behaviour in the medical encounter, with input from multiple constituencies and multiple representatives within each constituency.
dc.language.iso: en_US
dc.relation: <a href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=17430277&dopt=Abstract">Link to article in PubMed</a>
dc.relation.url: http://dx.doi.org/10.1111/j.1365-2929.2006.02692.x
dc.subject: Clinical Competence
dc.subject: Communication
dc.subject: *Education, Medical, Undergraduate
dc.subject: Humans
dc.subject: Massachusetts
dc.subject: Physician-Patient Relations
dc.subject: Students, Medical
dc.subject: Life Sciences
dc.subject: Medicine and Health Sciences
dc.subject: Women's Studies
dc.title: Assessing professionalism in the context of an objective structured clinical examination: an in-depth study of the rating process
dc.type: Journal Article
dc.source.journaltitle: Medical education
dc.source.volume: 41
dc.source.issue: 4
dc.identifier.legacycoverpage: https://escholarship.umassmed.edu/wfc_pp/275
dc.identifier.contextkey: 383572
dc.identifier.submissionpath: wfc_pp/275
dc.contributor.department: Department of Medicine
dc.contributor.department: Meyers Primary Care Institute
dc.contributor.department: Department of Family Medicine and Community Health
dc.contributor.department: Office of Educational Affairs
dc.source.pages: 331-40

