In a recent study, an international team of researchers from Australia, New Zealand, and India used common facial expressions, such as a smile and a frown, to interact with VR environments and trigger specific actions in them, with surprising results. “Overall, we expected the handheld controllers to perform better as they are a more intuitive method than facial expressions,” noted Professor Mark Billinghurst from the University of South Australia, one of the researchers involved in the experiment, in a news release. “However people reported feeling more immersed in the VR experiences controlled by facial expressions.”
Intuitive Immersion
The researchers, led by Dr. Arindam Dey of the University of Queensland, who works with Prof. Billinghurst at the Australian Research Centre for Interactive and Virtual Environments, argued that most VR interfaces depend on physical interaction through handheld controllers. In their paper, the researchers note that they set out to let people manipulate objects in VR using facial expressions alone, without a handheld controller or touchpad.

They devised a mechanism to identify various facial expressions, including anger, happiness, and surprise, with the help of an electroencephalogram (EEG) headset. A smile triggered the command to move the user’s virtual avatar, a frown triggered a stop command, and a jaw clench performed a predefined action, replacing the handheld controller entirely, explained Prof. Billinghurst in the press release.

As part of the research, the group designed three virtual environments: one happy, one scary, and a third that was neutral. This enabled the researchers to measure each participant’s cognitive and physiological state while they were immersed in each of the three scenarios. In the happy environment, participants smiled to move through a park, clenched their jaws to catch butterflies, and frowned to stop. In the scary environment, the same expressions were used to navigate an underground base and shoot zombies, while in the neutral environment they carried users across a workshop to pick up various items.

The researchers then collated the neurological and physiological effects of controlling the three VR environments with facial expressions and compared them with interactions conducted via commonly used handheld controllers. Prof. Billinghurst noted that, at the end of the experiment, the researchers concluded that although relying on facial expressions alone in a VR setting is hard work for the brain, it gives participants a more immersive and realistic experience than using handheld controllers.
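To make the control scheme concrete, here is a minimal illustrative sketch in Python of the expression-to-command mapping the study describes. It is not the researchers’ actual code: the expression labels stand in for whatever output a hypothetical classifier sitting on top of the EEG headset would produce, and the command names are assumptions for illustration.

```python
# Illustrative sketch only -- not the study's actual implementation.
# Assumes a hypothetical upstream classifier that turns EEG readings
# into the labels "smile", "frown", or "clench".

from enum import Enum, auto


class Command(Enum):
    MOVE = auto()    # smile: start moving the avatar forward
    STOP = auto()    # frown: halt the avatar
    ACTION = auto()  # jaw clench: scene-specific action
                     # (catch a butterfly, shoot a zombie, pick up an item)


# The expression-to-command mapping described in the study
EXPRESSION_TO_COMMAND = {
    "smile": Command.MOVE,
    "frown": Command.STOP,
    "clench": Command.ACTION,
}


def handle_expression(label: str) -> Command | None:
    """Translate a classified facial expression into an avatar command."""
    return EXPRESSION_TO_COMMAND.get(label)


if __name__ == "__main__":
    # Simulated stream of classified expressions from the headset
    for label in ["smile", "clench", "clench", "frown"]:
        print(f"{label} -> {handle_expression(label)}")
```

One design point worth noting: because the expression classifier is decoupled from the command routing, the same three expressions can drive different scene-specific actions, which is how a single clench could catch butterflies in one environment and shoot zombies in another.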
A Mere Gimmick?
The researchers contend that interacting with VR through facial expressions not only provides a novel way to use VR but will also make it more accessible. By ditching handheld controllers, the technique could open VR to people with disabilities, from those with motor neuron disease to amputees. Even as they work to make it more usable, the researchers suggest the technology could also complement handheld controllers, especially in VR environments where facial expressions are a more natural form of interaction.

“Most of human communication is actually body language [and] facial microexpressions that we’re often unaware of, so proper facial tracking can surely take virtual social interactions to a whole new level,” Lucas Rizzotto, intrepid creator and YouTuber, told Lifewire over email.

Rizzotto, whose most famous creation is a VR time machine, believes facial tracking definitely has a role to play in social VR, Augmented Reality (AR), and the metaverse, though he has his reservations about it gaining mainstream acceptance. “As far as purely controlling experiences with your face, I’m sure there’s some creative possibilities here when it comes to art and accessibility,” Rizzotto opined. “But it could also easily just end up being a gimmick when we have so many more reliable forms of input.”