The Educational Video Rubric: Design, Data Collection, and Implementation

Presenter Information

John Lee

Faculty Advisor Name

Dr. Keston Fulcher

Department

Department of Graduate Psychology

Description

People spent 3.3 trillion minutes in Zoom video conference meetings in 2020 (backlinko.com), and Statista reported that in February 2020, 30,000 hours of content were uploaded to YouTube every hour (statista.com). Video is now routinely used in K-12 classrooms, higher education, business, government, and politics. Videos are commonly used to submit coursework, deliver lectures, present at conferences, give survey instructions, and introduce new programs and faculty. The advantages of video as a form of communication are diverse, ranging from public health and safety to ease of use and accessibility.

I have been working closely with a faculty member at the Center for Assessment and Research Studies (CARS) to address this new form of communication. We are developing an educational video rubric that can be used to assess and promote the production of high-quality educational videos. The first step in developing this tool was to acknowledge the need for more effective videos and the ability to create them. Next, by leveraging subject matter expertise in the design, data collection, and implementation of performance assessments, our research team produced an evidence-based educational video rubric. To develop the rubric, we analyzed hundreds of videos, interviewed internal and external experts in video production, reviewed related literature, and discussed the elements essential to the rubric. This process resulted in six criteria (audio, lighting, composition, visual dynamism, image quality, delivery) assessed across four levels of proficiency (beginning, developing, good, exemplary). Each criterion was researched for relevance, validated by one internal and one external expert in its content domain, and tested through formal rater training.

The formal rater training was conducted over seven hours with seven independent raters, all graduate students or faculty from CARS. Complementing the rubric's behavioral anchors, video examples were provided for each criterion at all four levels of the rubric. Using video to convey the meaning of each rubric element is believed to have aided the delivery of the rater training and increased reliability. After training was complete, each rater rated 12 five-minute videos. Initial analysis indicated high reliability across raters (Cronbach's alpha of .98). Although this high reliability may be related to the use of video examples to distinguish between levels of each criterion, further analysis and additional iterations of rater training will be needed to determine whether these results are replicable. As our research team moves toward publication of the rubric and analysis, additional validity evidence is being collected through a generalizability theory (G-theory) analysis. This evidence will inform any updates to the rubric or rater training. After publication, we intend to use the rubric to evaluate video quality and help others create better-quality educational video content.
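To make the reported reliability figure concrete, the following is a minimal Python sketch of how Cronbach's alpha can be computed for this kind of design, assuming a score matrix with one row per video and one column per rater. The function name and demonstration data are illustrative, not the study's actual data or code.

    import numpy as np

    def cronbach_alpha(ratings):
        """Cronbach's alpha for a videos-by-raters score matrix."""
        ratings = np.asarray(ratings, dtype=float)
        k = ratings.shape[1]                          # number of raters
        rater_vars = ratings.var(axis=0, ddof=1)      # each rater's score variance
        total_var = ratings.sum(axis=1).var(ddof=1)   # variance of summed scores
        return (k / (k - 1)) * (1 - rater_vars.sum() / total_var)

    # Illustrative only: 12 videos, 7 raters, scores on a 1-4 scale.
    rng = np.random.default_rng(0)
    demo = rng.integers(1, 5, size=(12, 7))
    print(round(cronbach_alpha(demo), 2))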
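In the same spirit, a simple one-facet G-study (videos crossed with raters) can be sketched as below. This is only an illustration under assumed design choices; the team's actual G-theory analysis may model additional facets, such as rubric criteria.

    import numpy as np

    def one_facet_g_study(ratings):
        """Variance components and G coefficients for a videos x raters design."""
        ratings = np.asarray(ratings, dtype=float)
        n_v, n_r = ratings.shape
        grand = ratings.mean()
        ss_v = n_r * ((ratings.mean(axis=1) - grand) ** 2).sum()
        ss_r = n_v * ((ratings.mean(axis=0) - grand) ** 2).sum()
        ss_res = ((ratings - grand) ** 2).sum() - ss_v - ss_r
        ms_v = ss_v / (n_v - 1)
        ms_r = ss_r / (n_r - 1)
        ms_res = ss_res / ((n_v - 1) * (n_r - 1))
        var_res = ms_res                               # video x rater interaction + error
        var_v = max((ms_v - ms_res) / n_r, 0.0)        # true variance among videos
        var_r = max((ms_r - ms_res) / n_v, 0.0)        # rater severity variance
        g_rel = var_v / (var_v + var_res / n_r)        # relative (norm-referenced)
        g_abs = var_v / (var_v + (var_r + var_res) / n_r)  # absolute (criterion-referenced)
        return var_v, var_r, var_res, g_rel, g_abs

The relative coefficient treats rater severity as irrelevant to rank-ordering videos, while the absolute coefficient penalizes it; which one matters depends on how the rubric scores will be used.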
