Author ORCID Identifier
Lauren Zarzar: https://orcid.org/0000-0002-1177-3602
Abstract
Purpose/Hypothesis: Accreditation criteria mandate the evaluation of student technical skills. The emerging need for DPT programs to deliver course content remotely and subsequently assess student clinical skills highlights the lack of research on faculty rating consistency when evaluations occur virtually. This study aimed to investigate rating consistency among faculty testers when assessing clinical skills virtually. The primary questions were: (1) Is there faculty rating consistency for virtual practical assessments? and (2) Are there trends that impact faculty rating of virtual practical performance?
Number of Subjects: 623
Materials and Methods: Faculty utilized checklist rubrics based on Miller’s Pyramid of Assessment to evaluate students’ virtual practical performances. During the case-based virtual practical, students were required to simulate a face-to-face patient encounter or to verbally describe how to appropriately perform skills during a patient encounter. A convenience sample of 623 individual student scores across the DPT curriculum was collected and analyzed. One-way ANOVA with post hoc analysis was employed to determine differences between faculty raters.
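For readers unfamiliar with the analysis, the sketch below illustrates how a one-way ANOVA with a post hoc comparison across faculty raters could be run for each course. It is not the authors’ code; the file name, column names, and the choice of Tukey HSD as the post hoc test are illustrative assumptions, not details reported in the study.

```python
# Minimal sketch: one-way ANOVA across faculty raters, per course,
# followed by a Tukey HSD post hoc test when the omnibus test is significant.
# The CSV file and the column names ("course", "rater", "score") are hypothetical.
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# One row per student practical score, with the course and the faculty rater.
df = pd.read_csv("virtual_practical_scores.csv")

for course, course_df in df.groupby("course"):
    # One-way ANOVA: do mean student scores differ between faculty raters in this course?
    groups = [g["score"].values for _, g in course_df.groupby("rater")]
    f_stat, p_value = stats.f_oneway(*groups)
    print(f"{course}: F = {f_stat:.2f}, p = {p_value:.3f}")

    # Post hoc pairwise comparisons identify which rater pairs differ.
    if p_value < 0.05:
        tukey = pairwise_tukeyhsd(course_df["score"], course_df["rater"], alpha=0.05)
        print(tukey.summary())
```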
Results: There were 4 to 7 faculty raters per course, with each faculty member testing 7 to 13 students on average. Students were expected to complete the virtual practical performance within the following time limits: 11-20 minutes (47.5%), 21-30 minutes (25.5%), or 41-60 minutes (15%). Individual course analysis revealed differences in faculty rating of the students’ virtual practical skills for 6 of the 13 courses. One course in the first year and five courses in the second year of the curriculum had significant differences in faculty rating of student virtual skills performances (p=0.018, p=0.001, p=0.045, p=0.013, p=0.004, p=0.001). Overall, the scores students earned from the faculty raters were consistent when compared to traditional face-to-face practical scores.
Conclusions: Faculty ratings of students’ virtual skills performance were more consistent in the first year of the DPT curriculum, with more variability in rating for the program’s second-year courses. It is possible that faculty rating errors during the second year of the curriculum affected how students were rated. Even with these differences in faculty rating, virtual skills practicals may be an acceptable option for DPT programs.
Clinical Relevance: The recent Coronavirus 2019 (COVID-19) pandemic has increased the need for innovative virtual methods for testing technical skills taught in physical therapy programs. Assessing whether consistency between faculty raters can be maintained in the virtual environment is essential to determining the effectiveness of this form of examination. The results of this study indicate that consistency appears to be better maintained earlier in the curriculum; the reason for this trend is unknown. Some differences in how faculty rated students could be attributed to differences among the courses. This study helps to show that consistent faculty rating of students’ performance of virtual technical skills is possible.
Recommended Citation
Hidalgo, L. O., Godoy Bobbio, T., Marinas, R., Salmon, C., Pilgrim, L., & Figueroa Wallace, M. (2022). The New Frontier: Can Faculty be Consistent When Rating Clinical Skills Virtually? Retrieved from https://soar.usa.edu/education/31
Comments
Poster presented at the American Physical Therapy Association (APTA) Combined Sections Meeting (CSM), held in San Antonio, Texas, February 2-5, 2022