Virtual Reality's Persistent Human Factors Challenges

Getting into the action of 'Super Hot' on the Oculus Rift + Touch

Virtual Reality is arguably the hottest and fastest-growing category in consumer electronics and entertainment today. Since the 1990s the platform has promised an unprecedented opportunity to experience environments and ideas beyond our physical limitations. Technology, investment, and computing power are only now catching up to deliver on this promise, and recent offerings in devices, games, and cinema are becoming more widely available to the public - not to mention thrilling to experience (I especially like First Contact, Super Hot, and Henry the Hedgehog on Oculus Rift).

Yet the industry is still working to deliver a fully consumer-ready VR experience. VR continues to face multiple sizable usability and human factors challenges that must be addressed before it is accessible to, and enjoyable by, mass audiences. Below are a few of its biggest and most persistent challenges.

 

1. An approachable-looking device

‘It just looks so tech-y and scary...’

Let's face it. The state-of-the-art VR rigs, with their big, black, wiry head-mounted displays tethered to large, powerful gaming computers and multiple room sensors, can look a little scary. The system covers the face with a dark and blinding mask, and the wires tethered to the head and hands can evoke images of an EEG machine by way of basement-gamer captivity. Not exactly an appealing image for the average consumer.

Current HMD design suggests submission, not control

From a human factors perspective, the affordances of the typical head-mounted display (HMD) - a dark, blinding mask covering the eyes - suggest submission as opposed to control of the body and senses, the inverse of the established heuristic in human-centered computing interfaces. Users are supposed to identify a sense of comfort and empowerment in their devices - not disablement and dependency.

A variety of head mounted display designs

From top left: ViewMaster (1963), Oculus Rift (2016), Google Daydream (2016), Snap Spectacles (2016)

The industry is certainly working on this obstacle, but we're not quite there yet. The Google Daydream (2016) has offered arguably the most consumer-friendly HMD design to date. With its soft colors, t-shirt materials, and rounded edges, it's the right step toward on-the-surface approachability while we wait for the sensor technology and processing power to get slimmer. In the meantime, the current HMD industrial designs by Oculus, HTC, and Sony still suggest an experience that is dark, closeted, isolating, and unknown.

Inspiration from novelty industrial designs

Looking to history, the classic ViewMaster of the 1960s suggests an alternative: the media disc peeking out of the device gives users a preview, and some understanding, of the experience they are about to submit to. Toward the augmented / mixed reality end of the spectrum, Snap's Spectacles cleverly integrate camera tech into a form factor that is at once comfortable, familiar, and flexible for users to wear. This design feels more like the future of HMDs - one that consumers will be ready for and feel comfortable trying out on their own.

 

2. Helping users know what to do in VR

‘Okay, what do I do now?’

When observing VR use, you will likely notice that users frequently need assistance with getting started in the world, using the controllers and/or virtual hands, discovering menus and options, and knowing what to do to stay engaged in a VR game experience.

A lack of script or mental model

One of the hallmarks of a uniquely VR experience is that the user is the primary agent of action and therefore free from any linear or predetermined script. This is both VR's strength and weakness in terms of offering an engaging, usable user experience.

In VR's innovative non-linear context, a user is forced to rely on a mental model of next steps - what they believe they can and should be able to do in the environment, such as look around, move a certain distance, or pick up an object. A consumer who is totally new to VR, with no understanding of the technology's current limits and no established mental model of steps, is left to figure out entirely on their own what they should do in the environment (their goal), what skills they will need to use, what tasks to complete, and how.

Explicit direction is necessary but rare

This is why clear and explicit direction is so critical in the initial moments of trying VR. Instead of free exploration, explicit direction is required very early, so that users can take the first steps to learn what is possible in the environment, know what to do, and feel confident enough to continue exploring and expanding their skills and achievements on their own. Unfortunately, these critical and explicit directional prompts rarely occur.

The tutorial in Oculus Rift + Touch teaches controller use & functions

Meeting the adorable host character in First Contact, an introductory experience in Oculus Rift + Touch

Oculus Touch has an impressive interactive 'first use experience' demo called First Contact, presented immediately after an initial controller tutorial, which lands the user in a densely packed graphical environment that encourages looking around at all the fantastic objects and details. The demo quickly introduces a host character who guides the user to start interacting with objects he hands them directly, and who prompts the user to complete small first tasks - all extremely helpful in getting the user engaged in the environment, completing small tasks, and using the controls as quickly as possible.

But while the demo achieves the goal of getting the user quickly immersed and building a relationship with the character, the lack of ongoing explicit direction can sometimes be puzzling, leaving users to guess exactly what the character wants them to do next - and how - making it easy to lose interest. For example, an interaction prompt may be located off screen or behind the user, with the character looking in its general direction but never telling the user about the prompt directly, as in the screenshot above.

2D game play heuristics still apply

In Game Usability Heuristics (PLAY) for Evaluating and Designing Better Games: The Next Iteration, Desurvire and Wiberg established a comprehensive and helpful list of guidelines for developing enjoyable and usable game play. Their work is widely cited in Human-Computer Interaction (HCI) literature and used throughout the games community. I've noticed that at least four of their heuristics speak directly to what is currently lacking in much of VR game play today, given its novel user experience and rapidly evolving capabilities in control and immersion:

  • The player does not need to read the manual or documentation to play.
  • The player does not need to access the tutorial in order to play.
  • The first ten minutes of play and player actions are painfully obvious and should result in immediate and positive feedback for all types of players.
  • The game goals are clear. The game provides clear goals, presents overriding goals early as well as short term goals throughout game play.

Even more than 2D gaming, interacting with VR platforms and games requires a dead-simple, explicit, direct, and scaffolded learning framework, so the user can first get oriented not just to the physical area ('playpen') or the virtual hands - as in the typical VR tutorial - but to the overall technology and the full range of game play possibilities and activities. Only then can users quickly get engaged and stay engaged in the game - understanding game goals, facing ongoing challenge, expanding their skills and abilities, and experiencing accomplishment and reward on their own.
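To make this concrete, here is a minimal sketch of what such a scaffolded, explicitly directed onboarding loop could look like. It is illustrative only - the step structure, completion checks, and First Contact-style sequence are my own assumptions, not any shipping engine's API:

```python
# A minimal sketch of a scaffolded VR onboarding flow (hypothetical, not a
# real engine API). Each step states an explicit goal, waits for a concrete
# demonstration of success, and gives immediate positive feedback before
# expanding the player's scope - mirroring the PLAY heuristics above.

from dataclasses import dataclass
from typing import Callable

@dataclass
class OnboardingStep:
    prompt: str                      # explicit direction shown or spoken to the user
    is_complete: Callable[[], bool]  # concrete check, e.g. "user grabbed the object"
    reward: str                      # immediate positive feedback on success

def run_onboarding(steps: list[OnboardingStep],
                   show: Callable[[str], None],
                   celebrate: Callable[[str], None]) -> None:
    """Advance only after the user demonstrably completes each step."""
    for step in steps:
        show(step.prompt)            # never leave the user guessing what to do
        while not step.is_complete():
            pass                     # in a real game loop: re-prompt, hint, or
                                     # have the host character point at the target
        celebrate(step.reward)

# Hypothetical sequence loosely mirroring a First Contact-style progression.
steps = [
    OnboardingStep("Look around the room.", lambda: True, "Nice!"),
    OnboardingStep("Squeeze the grip to pick up the cartridge.", lambda: True, "Got it!"),
    OnboardingStep("Hand the cartridge back to your host.", lambda: True, "Task complete!"),
]
run_onboarding(steps, show=print, celebrate=print)
```

The structure is the point: a clear goal, a demonstrated success, and an immediate reward at every step, before the game widens the player's scope.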

Facilitators are making up for the lack of an approachable experience

A VR lab facilitator helps a user adjust her head-mounted display

At any VR exhibit you'll see a facilitator, or equipment guide, on site to help users put on the gear comfortably and interact with the equipment safely, keeping them from punching walls or tripping over rig cords. But once the user has the equipment on and is in the game, you'll notice the facilitator's role quickly expands to answering questions about game play, directing users to interactive areas or tasks in the game, and generally helping players stay engaged and have a rewarding experience. This facilitation essentially makes up for the lack of explicit in-game direction, and the facilitator acts as a stand-in for a consumer-ready, jump-in-and-play VR gaming experience.

 

3. Managing the physical body, space, and real-world objects while in VR

‘I don't feel comfortable sitting down.’

The immersion IS real

Creating a sense of immersion and 'presence' - the extent to which a user feels as though he or she is inside, or a part of, the virtual realm - continues to be a primary goal for VR engineers and designers. This is especially important when representing the user through an in-game avatar or human-like hands that aren't quite human (which can lead a user right out of a sense of presence and straight into the unpleasant uncanny valley). Mitigating a sense of disembodiment while increasing a user's feeling of actually 'being there' is a very current and juicy challenge for VR designers.

Fortunately, thanks to decades of history developing sims (simulation) games, the VR industry can leverage a vast number of ways to observe, feel, and measure virtual presence, as well as established heuristics for increasing a sense of in-game immersion. And it's working. Some of the latest interactive VR experiences work hard to lend the user a sense of visiting the realm, as opposed to just seeing it as an observer, even if there remains the occasional sense of disembodiment from the control 'hands' or in-game avatar.

It is this singular, intense user focus, by design, that enables a realistic, embodied sense of presence in VR. The more the user is engaged in the virtual world, the more they should be able to forget the physical one and leave it behind.

Yet while VR is designed to produce a sublime level of immersion, we must recognize that the real effect of this singular focus - physically blindfolding a user's eyes to their surroundings - presents several unique challenges, not least the potential to induce feelings of isolation, dependency, and fear, especially for women and people who may feel physically vulnerable, particularly in social and public situations.

 

Users are navigating two worlds, not one 

Sit on these comfy pillows in Oculus Home, and you'll fall down. They're not really there!

While the user's brain is working hard to experience a singular virtual world, in reality the user's physical body - hands, limbs, and nervous system - is still navigating the physical world. When physical objects and furniture appear in the virtual world with no clear indicator of which objects are interactive, tricks and illusions are plentiful and can easily disorient users. Empirically and practically, the user is forced to reconcile two worlds at once. The result is a strange reverse culture shock in which users must manage the real physical world - the one they know how to navigate best - blindly, while learning the virtual one.

To understand this split-reality experience, researchers in the field of Human-Computer Interaction (HCI) must evolve their frameworks quickly. Lucy Suchman's foundational work on plans and situated actions has been a guidepost for how to understand and design for user intent. Her framework seeks to avoid reducing user actions to the merely behavioral or mental, and instead investigates actions in situ.

But how can we leverage the in situ user framework when users' behaviors and mental activities are actually situated in two different realities?

The development of Augmented Reality - still a few years away - suggests a future where the two realities can be combined and navigated as one. But until then, VR is being sold and shipped to consumers at their local Best Buy and exhibited in local movie theaters. Fortunately, lab engineers, researchers, and creatives are working hard on ways to reconcile this jarring and sometimes painful dichotomy.

In their 2015 paper A Dose of Reality: Overcoming Usability Challenges in VR Head-Mounted Displays, McGill et al. of the University of Glasgow identified usability issues around navigating and manipulating physical objects while in VR, drawing on a survey of 108 VR users. In response, they proposed an 'engagement-dependent virtual reality' concept that allows objects and people from the physical world to enter the virtual realm on an as-needed basis, as determined by the user - a novel missing-link concept that speaks to user agency and the reconciliation of the two worlds.

Top: Minimal blending (reality around user’s hands). Middle: Partial blending (all interactive objects). Bottom: Full blending (all of reality). From A Dose of Reality: Overcoming Usability Challenges in VR Head-Mounted Displays, McGill et al. 2015
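As a rough illustration of how engagement-dependent blending might be decided at runtime, here is a toy sketch. The inputs and policy are my own simplification of the paper's minimal/partial/full blending levels, not McGill et al.'s actual implementation:

```python
# A toy sketch of engagement-dependent blending (my simplification of
# McGill et al. 2015, not their code). The idea: admit imagery of the
# physical world into the headset only as much as the user currently needs.

from enum import Enum

class BlendLevel(Enum):
    NONE = 0     # fully virtual scene
    MINIMAL = 1  # passthrough of reality only around the user's hands
    PARTIAL = 2  # all tracked interactive objects (keyboard, drink, couch)
    FULL = 3     # full passthrough of reality

def choose_blend(user_requested_reality: bool,
                 person_nearby: bool,
                 reaching_for_object: bool) -> BlendLevel:
    """Let reality in on an as-needed basis, with the user in control."""
    if user_requested_reality:
        return BlendLevel.FULL     # the user explicitly asked to see the room
    if person_nearby:
        return BlendLevel.PARTIAL  # surface people before they startle the user
    if reaching_for_object:
        return BlendLevel.MINIMAL  # show reality just around the hands
    return BlendLevel.NONE         # otherwise, stay fully immersed

# Example: a user blindly reaching for a real coffee cup mid-game.
print(choose_blend(False, False, True))  # -> BlendLevel.MINIMAL
```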

 

4. Mitigating the effect of motion sickness (still!)

'Okay I'm starting to get sick.'

This is probably VR's best-known human factors issue. Virtual reality sickness is a special kind of motion sickness caused by a disparity between the user's sense and perception of movement and their actual physical movement. For those who experience it (including the author of this post), it is still a very real obstacle to VR and will stop a susceptible user right in her tracks.

VR designers and engineers are working hard to improve motion design in games in an attempt to solve this very difficult and inherent problem. But what makes this persistent issue so important is that it is uniquely diversity-related: the people most susceptible to VR sickness include women, children, people experiencing illness, and adults over 50.

Because VR promises a future of enhanced ability to travel to new locations and experience new environments when one would otherwise be physically unable to do so, it is imperative that the VR community quickly address this issue so that consumers of all ages, abilities, and genders can benefit equally from that future.

In the meantime, if you are prone to motion sickness and still want to experience VR you can check out these homespun tips and tricks for how to combat VR nausea, including taking over-the-counter Dramamine.

None of these are easy problems to solve. But as commercial VR technology, development, and design accelerate, it is critical that we keep the uniquely human and social impact of the technology top of mind, and make sure the devices are as consumer-ready as possible and accessible to everyone.

 

I tried Snap's Spectacles and here's what I learned

Upside: They look great and are tons of fun!

Downside: You still have to use your phone to take a selfie

I recently got the chance to try out Snap's hot new Spectacles for a weekend - a wearable device that is essentially a camera resting on your face - and I had a blast! I also learned a ton about a promising future of lightweight wearables and Snap, Inc., the company. Here's what I learned.

 

1. Snap's Spectacles look like fashion, not tech

The second I put the Spectacles on and looked in the mirror, I instantly felt a sense of surprise. I was wearing fashion, not tech! The design and look of these glasses is impressive. Not only do they look great on, but they feel very solid and maybe even a little bit cool, as though you have a little piece of sunny LA resting on your face.

The little yellow rings over each lens, indicating where the cameras are located, can read as a sporty style or branding detail - to be expected on most designer sunglasses these days - and didn't come off to me as particularly shocking or remarkable.

Once I put the Spectacles on, I stopped thinking of them as a techie device and more as a pair of sunglasses with 'extra'. They look and feel exactly like sunglasses. I looked all over the packaging but couldn't find it listed anywhere whether they include actual UVA protection, so it's not clear if you can replace your current sunglasses with the Spectacles, which would be ideal. And as with regular sunglasses, glasses wearers will have to use contacts to use the Spectacles.

Overall, a good look. My initial impression was that Silicon Beach is finally showing Silicon Valley how to make consumer electronics people will want to wear.

 

2. They're quick and easy to use, but no selfies

Getting started with the Spectacles was quick and relatively painless. 

First you pair the glasses to your Snapchat app by following the out-of-box experience (OOBE) instructions listed in the booklet inside the charging case (which doubles as a hard glasses case - nifty). The instructions in my booklet weren't exactly in line with the app, but after poking around in the app I eventually found the Snapcode ghost to stare at with the glasses on in order to pair the Spectacles.

Next came the new-user tutorial screens, which showed me that there is only one interaction with the glasses - taking a 10-second video Snap - and one hard button on the glasses to do it with. This simplified interaction was a relief. The Spectacles were feeling more like a high-end toy than a high-tech gadget.

I instantly tried capturing my first Spectacles video Snap and noticed the 10-second blinking 'recording' lights from behind the glasses. Once the lights were done flashing, I opened Snapchat to confirm that the video had automatically uploaded to the app, and I was ready to go.

Immediately I wondered how people would use the glasses to take selfies. Then I realized the only ways to do this with the Spectacles are to take the video Snap in front of a mirror, or to have a friend wear the Spectacles and take your video Snap selfie. Naturally I tried both, and the camera angle is too wide for good selfies - with no ability to zoom.

 

3. They didn't appear to make anyone angry or uncomfortable (so far)

Next I encountered my first big challenge: to venture outside and walk around San Francisco with my Spectacles on.

Given the history and reputation of previous Google Glass wearers (a.k.a. 'Glassholes'), I was nervous at first about wearing the Spectacles in public. Would people notice them, think I was potentially recording them, and feel violated? As a female urban pedestrian, the last thing I want to do is put myself at risk by advertising something valuable or create unnecessary social tension!

But I was brave and went for it. Fortunately, I received only two comments all weekend, both expressing general interest and asking how the glasses work. Each time I made sure to share right away that I was not recording (when Spectacles are recording they display a noticeable flashing light, but folks new to the glasses would not know that).

 

4. They allow you to enjoy the moment

TOTAL UNINTERRUPTED PRESENCE. This was the biggest revelation of all.

This is what a video Snap made with Spectacles looks like. When you share it in Snapchat, the image takes a rectangular shape, not circular.

Because you only tap the tactile button on the side of the glasses once to take the video Snap, I never had to look down or away, find or read a menu, open my phone, swipe around, or get otherwise distracted in order to capture a Snap. All I did was tap on my glasses while walking - the same level of interaction as if I was simply adjusting them.

After a day of using the Spectacles, their real magic became clear. They enabled me to capture a moment while never moving my eyes away from what I was looking at.

I could capture any moment while maintaining total presence and immersion in that moment and with full attention to the people around me.

The Spectacles product team consciously chose not to weigh down the glasses experience with editing, stickering, annotating, and sharing of Snaps - all of which you can easily do later when you want to get back on your phone. Instead you can just go about your life wearing the Spectacles and, with one button tap, capture lots of quick, spontaneous video that looks a whole lot more like real life - dare I say, reality - and not a bunch of posed and perfected images (unless of course you want to add that in later).

 

5. Snap Inc. is suggesting a future in Augmented Reality

As I used the glasses more and more, it became clear that Spectacles aren't just a lightweight way of capturing video, hidden in a form factor that is light-years more fashionable and accessible than anything we've seen before.

Suddenly, Snap, Inc. made sense to me as a camera company - one that has its eyes on AR. The Spectacles demonstrate how users can leverage the benefits of technology in a way that is safely and fully integrated into their reality and that actually makes sense on their bodies and in a social context.

These glasses are just starting to break down the technology wall separating our physically lived lives from our documented, curated digital lives. Snap's Spectacles clearly suggest a future at the intersection of good user experience and enhanced, even augmented, reality - so far, in a much cooler-looking head-mounted display.

 

This is NOT a sponsored post. These words reflect the researcher's own experiences and humble opinion.

Making the Case for Gendered Interactions

Which associations are you designing for?

It’s 2017. Today we know that gender is not simply a binary category (male or female), so easily and often confused with one’s biological sex.

We now know that gender is actually a much more fluid, sociologically defined trait or identity that exists on a spectrum and is an entangled part of our everyday lives, experiences, and expectations.

But did you know that objects and interactions can be gendered, too?

In fact, people who create products and experiences actually architect gender into their designs, most of the time without knowing they are doing so.

It's true. Due to our highly gendered society, pretty much everything that involves an action, interaction, script, or interface - that is, anything created by people that exists to be interacted with in our society - can be seen as 'gendered' by design.

Fourteen years ago in my doctoral program I created a simple, subjective tool that enables people to assess the gender of objects and interactions.

It looked something like this:

At first glance this tool appears to be a standard subjective rating scale — and in a way, it is. But there are two key differences:

1) The subject selects the images that anchor each end of the scale (from a library of images depicting figures interacting with various everyday objects)

2) The subject indicates a point on the scale where they believe an interaction falls, effectively evaluating the gender of an interaction

This metric represents the transformative idea of subjectively yet quantitatively measuring elusive social constructs like gender, to understand exactly how much these qualities are integrated into the tools and products we create. And it can easily be applied to other demographics such as age, culture, and ethnicity.
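For illustration, here is a minimal sketch of how one such rating might be captured in code. The field names and anchor images are hypothetical stand-ins, not the original instrument:

```python
# A minimal sketch of the subjective rating tool described above
# (hypothetical field names and images, not the original instrument).

from dataclasses import dataclass

@dataclass
class GenderedInteractionRating:
    left_anchor: str    # anchor image the subject chose for one end of the scale
    right_anchor: str   # anchor image the subject chose for the other end
    interaction: str    # the object or interaction being evaluated
    position: float     # 0.0 = left anchor ... 1.0 = right anchor

    def __post_init__(self) -> None:
        if not 0.0 <= self.position <= 1.0:
            raise ValueError("position must fall on the scale [0.0, 1.0]")

# Example: one participant places a voice-assistant interaction on a scale
# they anchored with two images from the (hypothetical) image library.
rating = GenderedInteractionRating(
    left_anchor="figure_using_power_drill.png",
    right_anchor="figure_using_hand_mirror.png",
    interaction="asking Alexa to set a kitchen timer",
    position=0.8,  # this subject reads the interaction as near the right anchor
)
```

The two differences called out above are what make the instrument interesting: the anchors themselves are chosen by the subject, and the placed point turns a fuzzy association into a comparable number.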

As proud as I am of this tool, I am even prouder of the fact that I defended it when it was wholly unpopular to do so. My advisors and colleagues believed that the concept felt ‘forced’ and unnecessary, and told me to scrap it and start over. Back in 2003 the soft, squishy idea of gender seemed to play no role in the hard, boxy, tangible Palm Pilots we held in our hands. But today we know that, when viewed in the context of design and use, our devices and tools are intrinsically linked to our lived experiences and all of their meaningful complexity.

We now understand that constructs like gender, age, and ethnicity are complex and meaningful influences in our lives. But the tools to measure this are antiquated. We know that for many people checking a single, binary box to a demographic question about race or gender is no longer applicable. And we know that age, just a number, is no indicator of openness to technological solutions that can enhance one’s quality of life.

This metric matters because as consumer technologies become more and more ubiquitous, we have no choice but to try using research tools that can help us better understand the complex richness of our users’ lived experiences. Richness that extends beyond our products but surely includes them — from the computers in our pockets, to the cars we drive, the clothes we buy, the entertainment we enjoy, and especially the conversational agents in our home attempting to create a single, personable thread through all of it. Where would Amazon’s Alexa, for example, fall for you on the gender spectrum?


Rating scales are an established and reliable way to collect evaluative data about an experience. But this particular tool changes the conversation and establishes new understanding. It prompts a richer, more nuanced (i.e. user-centered) discussion that gives us actionable insight into the gendered connections and associations users make with their tools and how they are designed.

Perhaps most importantly, subjective rating scales gauging gender can tell us if we are failing to build experiences that consider and speak to all of our possible users and customers.