Virtual Reality's Persistent Human Factors Challenges

Getting into the action of 'Super Hot' on the Oculus Rift + Touch

Virtual Reality is arguably the hottest and fastest-growing category in consumer electronics and entertainment today. Since the 1990s, the platform has promised an unprecedented opportunity to experience environments and ideas beyond our physical limitations. Technology, investment, and computing power are just now starting to catch up to deliver on this promise, and recent offerings in devices, games, and cinema are becoming more widely available to the public - not to mention thrilling to experience (I especially like First Contact, Super Hot, and Henry the Hedgehog on Oculus Rift).

Yet the industry is still working to deliver a fully consumer-ready VR experience. VR continues to face multiple, sizable usability and human factors challenges that must be solved to make it accessible to and enjoyable by mass audiences. Below are a few of its biggest and most persistent challenges.


1. An approachable looking device

‘It just looks so tech-y and scary…’

Let's face it. State-of-the-art VR rigs, with their big, black, wiry head-mounted displays tethered to large, powerful gaming computers and multiple room sensors, can look a little scary. The system covers the face with a dark, blinding mask, and the wires tethered to the head and hands can evoke images of an EEG machine by way of basement-gamer captivity. Not exactly an appealing image for the average consumer.

Current HMD design suggests submission, not control

From a human factors perspective, the affordances of the typical head-mounted display (HMD) - a dark, blinding mask covering the eyes - suggest submission as opposed to control of the body and senses, the inverse of the established heuristic in human-centered computing interfaces. Users are supposed to feel comfort and empowerment in their devices - not disablement and dependency.

A variety of head mounted display designs

From top left: ViewMaster (1963), Oculus Rift (2016), Google Daydream (2016), Snap Spectacles (2016)

The industry is certainly working on this obstacle, but we're not quite there yet. The Google Daydream (2016) has offered arguably the most consumer-friendly HMD design to date. With its soft colors, t-shirt materials, and rounded edges, it's the right step towards on-the-surface approachability while we wait for the sensor technology and processing power to get slimmer. In the meantime, the current HMD industrial designs by Oculus, HTC, and Sony still suggest an experience that is dark, closeted, isolating, and unknown.

Inspiration from novelty industrial designs

Looking to history, the classic ViewMaster from the 1960s suggests an alternative: its media disc peeks out of the device, giving users a preview and some understanding of the experience they are about to submit to. Towards the augmented / mixed reality end of the spectrum, Snap's Spectacles cleverly integrate camera tech into a form factor that is at once comfortable, familiar, and flexible to wear. This design feels more like the future of HMDs - one that consumers will be ready for and feel comfortable trying out on their own.


2. Helping users know what to do in VR

‘Okay, what do I do now?’

When observing VR use, you will likely notice that users frequently need assistance with how to get started in the world, how to use the controllers and/or virtual hands, discovering menus and options, and what to do to keep engaged in a VR game experience.

A lack of script or mental model

One of the hallmarks of a uniquely VR experience is that the user is the primary agent of action and therefore free from any linear or predetermined script. This is both VR's strength and weakness in terms of offering an engaging, usable user experience.

In VR's innovative non-linear context, a user is forced to rely on a mental model of next steps - what they believe they can and should be able to do in the environment, such as look around, move a certain distance, or pick up an object. A consumer who is totally new to VR, with no understanding of the technology's current limits and no established mental model of steps, is left to figure out entirely on their own what they should do in the environment (their goal), what skills they will need, what tasks to complete, and how.

Explicit direction is necessary but rare

This is why clear and explicit direction is so critical in the initial moments of trying VR. Instead of free exploration, explicit direction is required very early so that users can take the first steps to learn what is possible in the environment, know what to do, and feel confident enough to continue exploring and expanding their skills and achievements on their own. Unfortunately, these critical, explicit directional prompts rarely occur.

The tutorial in Oculus Rift + Touch teaches controller use & functions

Meeting the adorable host character in First Contact, an introductory experience in Oculus Rift + Touch

Oculus Touch has an impressive interactive 'first use experience' demo called First Contact, presented immediately after an initial controller tutorial. It lands the user in a densely packed graphical environment that encourages looking around at all the fantastic objects and details. The demo quickly introduces a host character who guides the user to start interacting with objects he hands them directly, and prompts the user to complete small first tasks - all extremely helpful for getting engaged in the environment, completing small tasks, and using the controls as quickly as possible.

But while the demo achieves the goal of getting the user quickly immersed and builds a relationship with the host character, the lack of ongoing explicit direction can sometimes be puzzling, leaving users to guess exactly what the character wants them to do next - and how - making it easy to lose interest. For example, an interaction prompt may be located off screen or behind the user, with the character looking in the general direction but not telling the user about the prompt directly, as in this screenshot.

2D game play heuristics still apply

In Game Usability Heuristics (PLAY) For Evaluating and Designing Better Games: The Next Iteration, Desurvire and Wiberg established a comprehensive and helpful list of guidelines for developing enjoyable and usable game play. Their work is widely cited in Human-Computer Interaction (HCI) literature and used throughout the games community. I've noticed that at least four of their heuristics speak directly to what is currently lacking in much of VR game play today, given its novel user experience and rapidly evolving capabilities in control and immersion:

  • The player does not need to read the manual or documentation to play.
  • The player does not need to access the tutorial in order to play.
  • The first ten minutes of play and player actions are painfully obvious and should result in immediate and positive feedback for all types of players.
  • The game goals are clear. The game provides clear goals, presents overriding goals early as well as short term goals throughout game play.

Even more than with 2D gaming, interacting with VR platforms and games requires a dead-simple, explicit, direct, and scaffolded learning framework, so the user can first get oriented not just to the physical area ('playpen') or virtual hands - as in the typical VR tutorial - but to the overall technology and the full range of game play possibilities and activities. Only then can users quickly get engaged and stay engaged in the game - by understanding game goals, facing ongoing challenges, expanding their skills and abilities, and experiencing accomplishment and reward on their own.
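Sketched in code, a scaffolded first-use flow like the one described above gates each skill behind the previous one and always has an explicit prompt ready. This is a minimal illustration; the stage names and prompt text are hypothetical, not drawn from any shipping tutorial:

```python
# A minimal sketch of a scaffolded VR tutorial: skills unlock in order,
# and there is always one explicit direction for the user to act on.
from dataclasses import dataclass, field

# Hypothetical stages, ordered from orientation to full game play.
STAGES = [
    ("orient", "Look around and find the glowing marker."),
    ("hands", "Squeeze the grip to open and close your virtual hands."),
    ("grab", "Pick up the cube and place it on the table."),
    ("goal", "You're ready: reach the exit to start the game."),
]

@dataclass
class TutorialScaffold:
    completed: set = field(default_factory=set)

    def current_prompt(self) -> str:
        """Return the explicit direction for the first uncompleted stage."""
        for name, prompt in STAGES:
            if name not in self.completed:
                return prompt
        return "Tutorial complete."

    def mark_done(self, name: str) -> None:
        # Only the stage the user is currently on can be completed, so
        # skills build in order (scaffolding, not a free-for-all).
        for stage, _ in STAGES:
            if stage in self.completed:
                continue
            if stage == name:
                self.completed.add(name)
            return
```

The key design choice is that `current_prompt` never returns nothing: at every moment there is one clear, short-term goal, echoing the PLAY heuristics about obvious first minutes and clear goals.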

Facilitators are making up for the lack of an approachable experience

A VR lab facilitator helps a user adjust her head-mounted display

At any VR exhibit you'll see a facilitator, or equipment guide, on site to help users put on the gear comfortably and interact with the equipment safely, keeping them from punching walls or tripping over rig cords. But once the user has the equipment on and is in the game, you'll notice the facilitator's role quickly expands: answering questions about game play, directing users to interactive areas or tasks, and generally helping players stay engaged and have a rewarding experience. This facilitation essentially makes up for the lack of explicit in-game direction, and the facilitator acts as a stand-in for a consumer-ready, jump-in-and-play VR gaming experience.


3. Managing the physical body, space, and real-world objects while in VR

‘I don't feel comfortable sitting down.’

The immersion IS real

Creating a sense of immersion and 'presence' - the extent to which a user feels as though he or she is inside, or a part of, the virtual realm - continues to be a primary goal for VR engineers and designers. This is especially important when representing the user through an in-game avatar or human-like hands that aren't quite human (which can lead a user right out of a sense of presence and straight into the unpleasant uncanny valley). Mitigating a sense of disembodiment while increasing a user's feeling of actually 'being there' is a very current and juicy challenge for VR designers.

Fortunately, the VR industry can leverage a vast number of ways to observe, feel, and measure virtual presence, as well as established heuristics for increasing a sense of in-game immersion, thanks to decades of history developing sims (simulation) games. And it's working. Some of the latest interactive VR experiences work hard to lend the user a sense of visiting the realm as opposed to just observing it, even if there remains the occasional sense of disembodiment from the controller 'hands' or in-game avatar.

It is this spectacular, singular, intense user focus that enables a realistic, embodied sense of presence in VR. The more the user is engaged in the virtual world, the more they should be able to forget the physical one and leave it behind.

Yet while VR is designed to produce a sublime level of immersion, we must recognize that the real effect of this singular focus - physically blindfolding a user’s eyes from their surroundings - presents several unique challenges, not least the potential to induce feelings of isolation, dependency, and fear, especially for women and people who may feel physically vulnerable, particularly in social and public situations.


Users are navigating two worlds, not one 

Sit on these comfy pillows in Oculus Home, and you'll fall down. They're not really there!

While the user's brain is working hard to experience a singular virtual world, the user's physical body - hands, limbs, and nervous system - is still navigating the physical one. When physical objects and furniture appear in the virtual world with no clear indicator of which objects are interactive, tricks and illusions abound and can easily disorient users. Empirically and practically, the user is forced to reconcile two worlds at once. The result is a strange reverse culture shock: users must manage the real physical world - the one they know how to navigate best - blindly, while learning the virtual one.

To understand this split-reality experience, the field of Human-Computer Interaction (HCI) must evolve quickly. Lucy Suchman's foundational theory of plans and situated actions has been a guidepost for how to understand and design for user intent. Her framework avoids reducing user actions to purely behavioral or mental phenomena, and instead investigates actions in situ.

But how can we leverage the in situ user framework when users' behaviors and mental activities are actually situated in two different realities?

The development of Augmented Reality - still a few years away - suggests a future where the two realities can be combined and navigated as one. But until then, VR is being sold and shipped to consumers at their local Best Buy and exhibited in local movie theaters. Fortunately, lab engineers, researchers, and creatives are working hard on ways to reconcile this jarring and sometimes painful dichotomy.

In their 2015 paper A Dose of Reality: Overcoming Usability Challenges in VR Head-Mounted Displays, McGill et al. from the University of Glasgow identified usability issues around navigating and manipulating physical objects while in VR, drawing on a survey of 108 VR users. In response, they proposed an 'engagement-dependent virtual reality' that lets objects and people from the physical world enter the virtual realm on an as-needed basis, controlled by the user - a novel missing-link concept that speaks to user agency and the reconciliation of the two worlds.

Top: Minimal blending (reality around user’s hands). Middle: Partial blending (all interactive objects). Bottom: Full blending (all of reality). From A Dose of Reality: Overcoming Usability Challenges in VR Head-Mounted Displays, McGill et al. 2015
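The engagement-dependent idea can be sketched as a simple policy that escalates how much reality is composited into the headset view only as the user's real-world needs demand it. This is an illustrative sketch of the concept only - the trigger conditions and enum names are my assumptions, not values from the paper:

```python
# A hedged sketch of engagement-dependent blending: decide how much of
# physical reality to composite into the virtual view. Levels mirror the
# figure above; the trigger signals are hypothetical.
from enum import Enum

class Blend(Enum):
    NONE = 0      # fully virtual
    MINIMAL = 1   # reality shown only around the user's hands
    PARTIAL = 2   # all interactive physical objects shown
    FULL = 3      # full pass-through of reality

def choose_blend(reaching_for_object: bool,
                 person_nearby: bool,
                 user_requested_passthrough: bool) -> Blend:
    """Escalate blending only as real-world engagement demands it."""
    if user_requested_passthrough:
        return Blend.FULL      # user explicitly wants to see the room
    if person_nearby:
        return Blend.PARTIAL   # surface people and objects that matter
    if reaching_for_object:
        return Blend.MINIMAL   # just enough reality to guide the hands
    return Blend.NONE          # stay fully immersed by default
```

The ordering matters: explicit user requests win, preserving the user agency the concept is built around, while the default keeps immersion intact.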


4. Mitigating the effect of motion sickness (still!)

'Okay I'm starting to get sick.'

This is probably VR's best known issue when it comes to human factors. Virtual reality sickness is a special kind of motion sickness caused by a disparity between the user's sense and perception of movement and actual physical movement. For those who experience it (including the author of this post) it is still a very real obstacle to VR and will stop a susceptible user right in her tracks.

VR designers and engineers are working hard to improve motion design in games in an attempt to solve this difficult and inherent problem. But what makes this persistent issue so important is that it is uniquely diversity-related: the people who are more susceptible to VR sickness include women, children, people experiencing illness, and adults over 50.
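One widely used motion-design mitigation is comfort 'tunneling': narrowing the field of view as artificial motion intensifies, since the visual/vestibular mismatch grows with movement the body doesn't feel. The sketch below illustrates the idea; the comfort thresholds are illustrative assumptions, not published values:

```python
# A small sketch of a comfort vignette: restrict peripheral vision more
# as artificial (controller-driven) motion gets faster. Thresholds are
# illustrative assumptions.

def vignette_strength(angular_speed_dps: float, linear_speed_mps: float) -> float:
    """Return 0.0 (no vignette) .. 1.0 (strong tunnel vision)."""
    # Normalize each motion cue against an assumed comfort threshold.
    turn = min(angular_speed_dps / 90.0, 1.0)  # deg/sec of artificial turning
    move = min(linear_speed_mps / 3.0, 1.0)    # m/sec of artificial locomotion
    # Take the stronger of the two cues; result is already clamped to [0, 1].
    return max(turn, move)
```

A renderer would feed this value into a shader that darkens the periphery, so the vignette appears only during motion the body cannot feel and disappears when the user is stationary.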

Because VR promises a future of enhanced ability to travel to new locations and experience new environments when one would otherwise be physically unable to do so, it is imperative that the VR community address this issue quickly so that consumers of all ages, abilities, and genders can benefit equally from that future.

In the meantime, if you are prone to motion sickness and still want to experience VR you can check out these homespun tips and tricks for how to combat VR nausea, including taking over-the-counter Dramamine.

None of these are easy problems to solve. But as commercial VR technology, development, and design accelerate, it is critical that we keep top of mind the uniquely human and social impact of the technology, and make sure the devices are as consumer-ready as possible and accessible to everyone.