Screen-based versus immersive virtual training platforms for improving public speaking
The best way to improve public speaking is through audience-based practice with immediate feedback. To that end, I implemented immersive virtual reality (VR) training, using the Virtual Orator software, in our introductory public-speaking course. Because of cost and space constraints, I propose testing an alternative, laptop-accessible web-based program for student practice in Spring 2020.
Effective public speaking is a fundamental communication skill that can dramatically improve academic, professional, and personal life. My Introduction to Human Communication course spends 14 weeks training students in effective public speaking. Over 400 students are split into 24 small-group sections, where they practice and deliver three speeches over the semester. For the past year I have been testing an immersive virtual reality software package (Virtual Orator) as a practice tool. Virtual Orator features lifelike audiences of avatars that respond to eye gaze, head movement, and vocalics, and it delivers personalized feedback on eye gaze, vocalics, and speech rate. An initial test of the system in Fall 2018 showed that students who practiced in VR (n = 78) earned significantly higher final speech grades, t(163) = 2.53, p = .01, than those who had not (n = 87). A second test in Spring 2019 showed similar effects.

However, the hardware and software are expensive and must be accessed in an on-campus lab. I am therefore interested in comparing more affordable and accessible software with similar avatar responses and immediate-feedback capabilities. PitchVantage is a program very similar to the VR environment we currently use, but it is delivered on a flat (non-immersive) laptop screen. I propose comparing the Virtual Orator and PitchVantage software as practice aids during the Spring 2020 COMM100 course. Student public-speaking skills and speech grades will serve as the learning outcomes, with individual feedback scores derived from the software tested as mediators.
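For readers less familiar with the reported statistic, the Fall 2018 comparison above is an independent-samples t-test on final speech grades. The sketch below illustrates that kind of analysis with hypothetical, randomly generated grade data (the group sizes match the reported n = 78 and n = 87, but the grades themselves are synthetic, so the resulting t and p values will not reproduce the reported t(163) = 2.53, p = .01):

```python
# Illustrative only: an independent-samples t-test of the form reported in the
# proposal, run on hypothetical grade data (not the actual course data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical final-speech grades (0-100 scale assumed) for the two groups;
# group sizes mirror the reported n = 78 (VR practice) and n = 87 (no VR).
vr_grades = rng.normal(loc=88, scale=6, size=78)
control_grades = rng.normal(loc=85, scale=6, size=87)

t_stat, p_value = stats.ttest_ind(vr_grades, control_grades)

# Degrees of freedom for a two-sample t-test: n1 + n2 - 2 = 78 + 87 - 2 = 163,
# which is why the result is written t(163).
dof = len(vr_grades) + len(control_grades) - 2
print(f"t({dof}) = {t_stat:.2f}, p = {p_value:.3f}")
```

The degrees-of-freedom calculation shows where the "163" in t(163) comes from: it is determined entirely by the two sample sizes.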
Currently, students receive feedback on their speeches from trained graduate instructors, who film each speech and provide feedback using a detailed rubric based on the National Communication Association core speaking competency list. The VR package currently used in the course, Virtual Orator, adds passively collected, personalized feedback on eye gaze, vocalics, and speech rate at the end of each practice speech. The software I am interested in comparing to Virtual Orator, PitchVantage, provides automated feedback on 25 elements of presentation delivery, including pitch variability, pace, volume, pace variability, pauses, long pauses, volume variability, engagement, verbal distractors, and eye contact, as well as 15 elements of content. Although both the core competencies and the nonverbal variables captured by the software packages have been identified as critical for public-speaking success, their relationship to student learning outcomes has never been assessed together as part of a course. I propose comparing the software-based feedback with the instructor feedback to determine which, if either, promotes public-speaking improvement in subsequent speeches. Because students give three speeches over the semester, skill trajectories can be modeled to attribute improvement to instructor feedback, digital feedback, or both. Knowing which is more effective would help streamline instructor time and effort, and would provide an efficient way to compare feedback source and type when assessing student outcomes.
Learn more about Allison Eden (https://comartsci.msu.edu/our-people/allison-eden)