Author ORCID Identifier

Lauren Zarzar: https://orcid.org/0000-0002-1177-3602

Abstract

Purpose/Hypothesis: Accreditation criteria mandate the evaluation of student technical skills. The emerging need for DPT programs to deliver course content remotely and subsequently assess student clinical skills highlights the lack of research on faculty rating consistency when evaluations occur virtually. This study aimed to investigate rating consistency among faculty testers when assessing clinical skills virtually. The primary questions were: (1) Is there faculty rating consistency for virtual practical assessments? and (2) Are there any trends that impact faculty rating of virtual practical performance?

Number of Subjects: 623

Materials and Methods: Faculty used checklist rubrics based on Miller’s Pyramid of Assessment to evaluate students’ virtual practical performances. During the case-based virtual practical, students were required either to simulate a face-to-face patient encounter or to verbally describe how to appropriately perform skills during a patient encounter. A convenience sample of 623 individual student scores from across the DPT curriculum was collected and analyzed. One-way ANOVA with post hoc analysis was used to determine differences among faculty raters.
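As a rough sketch of the analysis described above (not the authors’ actual code or data), comparing faculty raters could be run as a one-way ANOVA followed by a post hoc test. The rater labels, scores, and the choice of Tukey HSD as the post hoc procedure are illustrative assumptions.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical practical-exam scores grouped by faculty rater for one course
scores_by_rater = {
    "rater_A": [88, 92, 85, 90, 87, 91, 89],
    "rater_B": [84, 86, 83, 88, 85, 87, 86],
    "rater_C": [90, 93, 91, 89, 92, 94, 90],
}

# Omnibus one-way ANOVA: do mean scores differ across raters?
f_stat, p_value = stats.f_oneway(*scores_by_rater.values())
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")

# Post hoc pairwise comparisons (Tukey HSD, an assumed choice) when the
# omnibus test is significant
if p_value < 0.05:
    all_scores = np.concatenate(list(scores_by_rater.values()))
    rater_labels = np.repeat(
        list(scores_by_rater.keys()),
        [len(v) for v in scores_by_rater.values()],
    )
    print(pairwise_tukeyhsd(all_scores, rater_labels, alpha=0.05))
```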

Results: There were 4 to 7 faculty raters per course, with each faculty member testing 7 to 13 students on average. Students were expected to complete the virtual practical within 11-20 minutes (47.5%), 21-30 minutes (25.5%), or 41-60 minutes (15%). Individual course analysis revealed differences in faculty rating of students’ virtual practical skills for 6 of the 13 courses. One course in the first year and five courses in the second year of the curriculum showed significant differences in faculty rating of student virtual skills performances (p=0.018, p=0.001, p=0.045, p=0.013, p=0.004, p=0.001). Overall, the scores students earned from the faculty raters were consistent when compared to traditional face-to-face practical scores.

Conclusions: Faculty ratings of students’ virtual skills performance were more consistent in the first year of the DPT curriculum, with more variability in ratings for the program’s second-year courses. It is possible that faculty rating errors during the second year of the curriculum affected how students were rated. Even with these differences in faculty rating, virtual skills practicals may be an acceptable option for DPT programs.

Clinical Relevance: The Coronavirus 2019 (COVID-19) pandemic has increased the need for innovative virtual methods of testing the technical skills taught in physical therapy programs. Assessing whether consistency between faculty raters can be maintained in the virtual environment is essential to determining the effectiveness of this form of examination. The results of this study indicate that consistency appears to be better maintained earlier in the curriculum; the reason for this trend is unknown. Some differences in how faculty rated students could be attributed to differences among the courses. This study helps show that effective faculty rating of students’ performance of virtual technical skills is possible.

Comments

Poster presented at the American Physical Therapy Association (APTA) Combined Sections Meeting (CSM), held in San Antonio, Texas, February 2-5, 2021

