Merging Entertainment and Tech: How to Build Immersive Experiences Together

Disney / Lenovo’s Star Wars Jedi Challenges AR device invites players to engage in lightsaber duels with life-size Sith lords (Disney)

Hollywood and Silicon Valley, Bound Together

Technology and entertainment have always needed each other.

Technology enables new platforms for content, creation, interaction with experiences, and distribution. Yet consumers continue to be drawn to new technology primarily by the content and experiences it enables. Content, as the saying goes, is king.

When it comes to entertainment, audiences’ primary emotional connection is to the imagined world, character, feeling, or memory associated with content - not the device, file, or site they used to experience it. That said, when designed in an integrated and inspired way, the quality and ease of use of a technology platform can further enhance and strengthen that emotional connection to a story.

On the other hand, the entertainment industry frequently falls behind advances in technology and is slow to adopt emerging platforms. We have seen this over time, from last century’s development of motion picture sound, to peer-to-peer file sharing, to today’s increasing dominance of streaming media services turned studios and user-generated content.

With a few notable exceptions (such as video gaming, which I will address in a subsequent post), advancements in the tech industry develop largely separately from the world of entertainment production and the content it depends upon.

BUT WHY? TECHNOLOGY AND ENTERTAINMENT NEED EACH OTHER NOW MORE THAN EVER

Audiences have historically adopted whatever methods were necessary to engage in their favorite entertainment experiences, such as going to a public theater or a movie theater to catch a first-run movie, recording a television program on a VCR or DVR for later viewing, accessing a favorite show exclusively available via a Netflix subscription, contributing fanfic ideas or feedback on social media, or searching for a music video only available on YouTube. All it requires, of course, is a motivated interest in connecting with that story or experience, often over and over again.

But currently Apple, Facebook’s Oculus, Amazon, Netflix, Sony, Samsung, and more are all in a race to deliver increasingly personalized content (media streaming services) and more compelling and immersive platforms (such as AR and VR) to increase engagement with users and attract new customers.

The battle to secure cutting-edge platforms and devices in consumer homes (with the development of smart speakers and displays, connected TVs, cameras, and head-mounted displays, to name a few) increasingly depends on the content necessary to gain user interest and drive engagement. And content such as television, film, and music will need to be designed to leverage and showcase the new wave of devices’ tech-forward capabilities such as mixed reality, AI, 4K cameras, biometric sensors, facial recognition, and more.

Clockwise, from top left: Netflix announces development of ‘choose your own adventure’ Black Mirror episode for 2019, @Lilmiquela is an AI Celebrity on Instagram, Spectator e-sports on the popular Twitch streaming platform, owned by Amazon.

Thanks to the increasing impact of social media and data-curated streaming services, audiences have started to expect a more personal, direct interaction with their entertainment. Increasingly, people don’t just want to view a story and characters from a distance; they wish to experience it - even influence it - themselves. The evidence is rampant: increased user-generated (or ‘prosumer’) content expanding story properties and marketing, the exploding popularity of cosplay and location-based entertainment such as immersive theater and escape rooms, the continuing development and integration of virtual and augmented reality experiences where viewers are ‘part of’ a world or interacting directly with a character, and more. Audiences are enthusiastically responding to new experiences that offer more personal interaction and a deeper connection to stories and characters - which, of course, data and technology can enable (and have enabled) on a large scale.

THE FUTURE IS IMMERSIVE, PERSONAL INTERACTIONS

National Theater in London Offers Glasses With Live Subtitles (Courtesy New York Times)

Oculus Touch controllers precisely recreate users’ hands and gestures in VR media (Oculus Quest)

Meanwhile, the entertainment and tech industries still struggle to develop products together with shared goals, motivations, and timelines. We’re still a long way off from creating truly seamless, cohesive technology experiences which prioritize and enhance content and emotional connection over layers of paywalls and complexity.

A few inspiring examples of product experiences demonstrating seamless integration of technology with the magic of story and character include:

  • AR head-mounted displays for theater audiences that integrate subtitles in various languages (without obscuring the show itself)

  • Disney’s Star Wars Jedi Challenges AR kit for the home, which enables a first-person lightsaber duel with a Sith lord

  • Oculus Touch controllers with haptic feedback recreating human hands and gestures within VR worlds - enabling users to feel the experience of petting a character directly

  • The Amazon Echo smart speaker acting as a gateway through which audiences can quickly and easily access a single story from over 400,000 audiobooks

These product experiences integrate technology and content in such a way that the devices act as the means, not the ends, to innovative entertainment while playing a critical role in access and immersion.

So how do we work better together to make truly compelling experiences for audiences that integrate the power of new technologies and interactions?

Developing and shipping meaningful entertainment experiences like those listed above will require a tighter-than-ever connection between the entertainment and technology industries and their practices. We need to ensure more fluent sharing of knowledge, processes, expertise, and tools so that we can collaborate from the ground up and deliver to audiences (and users) the personalized, realistic, and overall seamless and delightful experiences they crave.


RECOGNIZING SIMILARITIES AND DIFFERENCES

Entertainment and tech inarguably share a goal: to deliver audiences (and users) meaningful, compelling experiences that endure and build connection to a brand.

And from one point of view, production practices across entertainment and tech can be viewed as quite similar, if given different titles:

Tech / Digital Production ↔ Entertainment

  • Investor ↔ Executive Producer

  • Product Managers ↔ Show-runners / Producers

  • Development & Content Authors ↔ Storytelling & Writing

  • User Experience Design ↔ Direction

  • Visual UI Design, Industrial Design ↔ Artists & Art Department

  • Cross-Platform Development ↔ 360 Marketing and Brand Strategy

  • Elevator-pitch ideas to illustrate impact, i.e. “Uber for air transportation” ↔ High-concept pitches to predict audience appeal, i.e. “Jaws in space” (Alien)

  • Interaction Design ↔ Storyboarding

  • User Experience Testing & Research ↔ Pitching & Feedback

  • Data Analytics & Qualitative Research ↔ Box Office Numbers and Sales

…and more

Pixar Story Artist Valerie LaPointe pitches story ideas for feedback and notes (Khan Academy / Pixar in a Box)

The author user testing the Microsoft Surface mixed reality device to determine its intuitiveness and usability (Microsoft)

However, a few key differences should also be considered, which heavily impact production goals and approaches.

  1. Overall tolerance for risk. Hollywood is the undisputed master of storytelling and emotional impact. Yet film, television, and music production yield such thin profit margins that Hollywood is equally notorious for strategies of risk avoidance and investment in established, predictable properties and experiences. Conversely, tech’s foundation of venture capital wealth fuels unbounded optimism and risk-taking in technological innovation, demonstrated through the creation of computer animation and digital effects tools, VR, and more. Few tech ventures succeed, but those that do are often widely impactful to users.

  2. Anticipated length of engagement. In entertainment, audiences fully submit themselves to an experience for a limited time (such as two hours). This context allows creators to deliver a pre-scripted user experience that audiences abide by and can give their full attention to. In tech, the intent of building consumer technology platforms is for the user to adopt, use, and live with the technology on an ongoing basis. This requires product experiences to be designed, developed, and tested to be intuitive and easy to use above all - if they are to be successfully adopted. Technology teams constantly test with users, their version of audiences, to make sure the product is centered around their needs and expectations, not the developers’.

  3. The introduction of access and usability obstacles. The very nature of developing and releasing new, innovative technology involves introducing entirely new devices, uses, functions, and interactions. Unlike a typical song format or story spine, the goal is actually to break out of predictable formulas. New technology requires that users willingly try something entirely new, learn and understand it, successfully interact with it, and adopt it. Think, for example, of the first time a user encounters a virtual reality head-mounted display or a new game controller; users are frequently not entirely successful at first use, and the products are often unintuitive, requiring considerable guidance. And when we consider a wide range of audiences or users with varied experience with technology, the usability problems become even more acute. Approachability, usability, and easy access for all remain major challenges for technology development (and the tech industry as a whole) and can critically hinder access to entertainment content.

How to Move Forward Together

In practice, we’ll need to:

  • Value - not fear - each other. Tech is not the enemy of creative! When done well, it can and should act primarily in service of great story and emotional impact. Tech, on the other hand, absolutely must prioritize creative input if it is to be compelling and successful, baking in decision-making authority and development time dedicated to creative inspiration, exploration, and iteration.

  • Share concepts, pitches, ideas, and explorations more frequently - especially in areas where tech and creative ideas converge - so that we can drive experiences that leverage each other’s developments.

Testing sketched content with an early cardboard prototype of a hardware device. This practice enables cheap, rapid iteration to ensure an intuitive ‘total experience’ across content and tech design. (Stanford)

  • Develop and test concepts together. Combine early content ideas and technology prototypes to test the overall experience earlier and more frequently with audiences and users. For example, artist sketches and storyboards can be integrated into paper prototype testing and ‘Wizard of Oz’ practices (common in tech) to ensure that the combination of content and technology is intuitive, fluid, and delightful.

  • Trust each other's expertise. Technologists would benefit from giving artists and storytellers more of a platform to contribute early and often when developing interactive experiences such as AR and VR. And artists must find ways to work more closely in partnership with technology design teams to test and iterate stories and characters quickly, so they can match the speed of Internet-based feedback and ensure a story interaction makes sense within the limitations of the technology.

  • Supplement each other’s risk tolerance. Tech should invest in more creative story talent that can help stretch the boundaries and impact of immersive experiences. Storytellers can imagine their world in the context of various platforms and levels of viewer interaction and influence.

  • Tech in particular will need to step up its human-centered design processes and work harder to remove the barriers of usability, unfamiliarity, and user discomfort that still plague most advanced technology products. This includes addressing critical barriers of trust around the use of user data, and embracing inclusive design processes and accessibility. More than ever, storytellers will depend on platforms that enable them to reach as many audiences as possible.

  • Consider Audiences, Viewers, Consumers, Players, and Users as the same person at various moments of engagement. Once we adopt a shared perspective that considers all of these roles as points on a single journey through a product experience, we will have made major headway toward charting a map for a more collaborative development process across entertainment and tech.

Look to the Legacy of Video Games For Tools

We must carve a path together sharing expertise, process, and techniques, and ideally build new tools together to create truly engaging, cohesive experiences.

Fortunately, one example we can look to is the development of video games over the past 50 years, which offers valuable clues about how to achieve a productive synergy. In my next blog post, I’ll outline tools that the video game industry has developed to create cohesive and compelling experiences across advances in content and technology.

 

Virtual Reality's Persistent Human Factors Challenges

Getting into the action of 'Super Hot' on the Oculus Rift + Touch

Virtual Reality is arguably the hottest and fastest-growing category in consumer electronics and entertainment today. Since the 1990s the platform has promised an unprecedented opportunity to experience environments and ideas outside of our physical limitations. Technology, investment, and computing power are just now starting to catch up to deliver on this promise, and recent offerings in devices, games, and cinema are becoming more widely available to the public - not to mention thrilling to experience (I especially like First Contact, Super Hot, and Henry the Hedgehog on Oculus Rift).

Yet the industry is still working to deliver a fully consumer-ready VR experience. VR continues to face multiple, sizable usability and human factors challenges before it can be accessible to and enjoyable by mass audiences. Below are a few of its biggest and most persistent challenges.

 

1. An approachable-looking device

‘It just looks so tech-y and scary...'

Let's face it: the state-of-the-art VR rigs, with their big, black, wiry head-mounted displays tethered to large, powerful gaming computers and multiple room sensors, can look a little scary. The system covers the face with a dark, blinding mask, and the wires tethered to the head and hands can evoke images of an EEG machine by way of basement-gamer captivity. Not exactly an appealing image for the average consumer.

Current HMD design suggests submission, not control

From a human factors perspective, the affordances of the typical head-mounted display (HMD) - a dark, blinding mask covering the eyes - suggest submission rather than control of the body and senses, the inverse of the established heuristic in human-centered computing interfaces. Users are supposed to feel a sense of comfort and empowerment with their devices - not disablement and dependency.

A variety of head mounted display designs

From top left: ViewMaster (1963), Oculus Rift (2016), Google Daydream (2016), Snap Spectacles (2016)

The industry is certainly working on this obstacle, but we're not quite there yet. The Google Daydream (2016) has offered arguably the most consumer-friendly HMD design to date. With its soft colors, t-shirt materials, and rounded edges, it's the right step towards on-the-surface approachability while we wait for the sensor technology and processing power to get slimmer. In the meantime, the current HMD industrial designs by Oculus, HTC, and Sony still suggest an experience that is dark, closeted, isolating, and unknown.

Inspiration from novelty industrial designs

Looking to history, the classic ViewMaster from the 1960s suggests an alternative design by showing users a preview of the experience by way of the media disc peeking out, allowing users some understanding of the experience they will submit to. Towards the augmented / mixed reality end of the spectrum, Snap's Spectacles cleverly integrated its camera tech into a form factor that is at once comfortable, familiar, and flexible for users to wear. This design feels more like the future of HMDs that consumers will be ready for and feel comfortable trying out on their own.

 

2. Helping users know what to do in VR

‘Okay, what do I do now?’

When observing VR use, you will likely notice that users frequently need assistance with getting started in the world, using the controllers and/or virtual hands, discovering menus and options, and knowing what to do to stay engaged in a VR game experience.

A lack of script or mental model

One of the hallmarks of a uniquely VR experience is that the user is the primary agent of action and therefore free from any linear or predetermined script. This is both VR's strength and weakness in terms of offering an engaging, usable user experience.

In VR's innovative non-linear context, a user is forced to rely on a mental model of next steps and of what they believe they can and should be able to do in the environment, such as looking around, moving a certain distance, or picking up an object. A consumer who is totally new to VR, with no understanding of the technology's current limits and no established mental model of steps, is left to figure out completely on their own what they should do in the environment (their goal), what skills they will need to use, what tasks to complete, and how.

Explicit direction is necessary but rare

This is why clear and explicit direction is so critical in the initial moments of trying VR. Instead of free exploration, explicit direction is required very early so that users can take the first steps to learn what is possible in the environment, know what to do, and feel confident enough to continue to explore and expand their skills and achievements on their own. Unfortunately, these critical and explicit directional prompts rarely occur.

The tutorial in Oculus Rift + Touch teaches controller use & functions

Meeting the adorable host character in First Contact, an introductory experience in Oculus Rift + Touch

Oculus Touch has an impressive interactive 'first use experience' demo called First Contact, presented immediately after an initial controller tutorial, which lands the user in a densely packed graphical environment and encourages them to look around at all the fantastic objects and details. The demo quickly introduces a host character who guides the user to start interacting with objects he hands them directly, and prompts the user to complete small first tasks - all extremely helpful in getting the user engaged in the environment, completing small tasks, and using the controls as quickly as possible.

But while the demo achieves the goal of getting the user quickly immersed and building a relationship with the host character, the lack of ongoing explicit direction can sometimes be puzzling, leaving users to guess exactly what the character wants them to do next - and how - making it easy to lose interest. For example, an interaction prompt may be located off screen or behind the user, with the character looking in the general direction but never telling the user about the prompt directly, as in this screenshot.

2D game play heuristics still apply

In Game Usability Heuristics (PLAY) For Evaluating and Designing Better Games: The Next Iteration, Desurvire and Wiberg established a comprehensive and helpful list of guidelines for developing enjoyable and usable game play. Their work is widely cited in the Human-Computer Interaction (HCI) literature and is used throughout the games community. I've noticed that at least four of their heuristics speak directly to what is currently lacking in much of VR game play today, given its novel user experience and rapidly evolving capabilities in control and immersion:

  • The player does not need to read the manual or documentation to play.

  • The player does not need to access the tutorial in order to play.

  • The first ten minutes of play and player actions are painfully obvious and should result in immediate and positive feedback for all types of players.

  • The game goals are clear. The game provides clear goals, presents overriding goals early as well as short term goals throughout game play.

Even more than 2D gaming, interacting with VR platforms and games requires a dead-simple, explicit, direct, and scaffolded learning framework, so the user can first get oriented not just to the physical area ('playpen') or virtual hands - as in a typical VR tutorial - but to the overall technology and the total game play possibilities and activities. Only then can users quickly get engaged and stay engaged in the game - through understanding game goals, facing ongoing challenges, expanding their skills and abilities, and experiencing accomplishment and reward on their own.
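To make the idea of a scaffolded framework concrete, here is a minimal, hypothetical sketch in Python (rather than an actual game engine) of how an onboarding flow might gate free exploration behind one explicit instruction at a time. The step names and telemetry fields are my own illustrative assumptions, not taken from any shipping VR title or from the PLAY heuristics themselves.

```python
# Illustrative sketch only: a scaffolded VR onboarding flow where the system,
# not the player, decides which single explicit instruction is visible next.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class OnboardingStep:
    prompt: str                          # the one explicit instruction shown to the player
    is_complete: Callable[[Dict], bool]  # checks player telemetry for the demonstrated skill

# Hypothetical steps and telemetry keys, for illustration only.
STEPS: List[OnboardingStep] = [
    OnboardingStep("Look around the room.",
                   lambda t: t.get("head_yaw_range_deg", 0) > 90),
    OnboardingStep("Squeeze the grip trigger to close your hand.",
                   lambda t: t.get("grip_presses", 0) >= 1),
    OnboardingStep("Pick up the glowing cube in front of you.",
                   lambda t: t.get("objects_held", 0) >= 1),
    OnboardingStep("Walk or teleport to the marked circle.",
                   lambda t: t.get("reached_marker", False)),
]

def next_prompt(telemetry: Dict) -> str:
    """Return the single instruction the player should see right now."""
    for step in STEPS:
        if not step.is_complete(telemetry):
            return step.prompt
    return "You're ready - explore freely!"

if __name__ == "__main__":
    # A player who has looked around and squeezed the grip, but held nothing yet:
    print(next_prompt({"head_yaw_range_deg": 120, "grip_presses": 2}))
    # -> "Pick up the glowing cube in front of you."
```

The point of the sketch is simply that free exploration unlocks only after each skill is demonstrated - the kind of explicit, progressive direction the heuristics above call for.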

Facilitators are making up for the lack of an approachable experience

A VR lab facilitator helps a user adjust her head-mounted display

At any VR exhibit you'll see a facilitator, or equipment guide, on site to help users put on the gear comfortably and interact with the equipment safely, helping them avoid punching walls or tripping over rig cords. But once the user has the equipment on and is in the game, you'll notice the facilitator's role quickly expands to answering questions about game play, directing users to interactive areas or tasks in the game, and generally helping players stay engaged and have a rewarding experience. This facilitation essentially makes up for the lack of explicit in-game direction, and the facilitator acts as a stand-in for a consumer-ready, jump-in-and-play VR gaming experience.

 

3. Managing the physical body, space, and real-world objects while in VR

‘I don't feel comfortable sitting down.’

The immersion IS real

Creating a sense of immersion and 'presence' - the extent to which a user feels as though he or she is inside, or a part of, the virtual realm - continues to be a primary goal for VR engineers and designers. This is especially important when representing the user through an in-game avatar or human-like hands that aren't quite human (which can lead a user right out of a sense of presence and straight into the unpleasant uncanny valley). Balancing the mitigation of disembodiment with increasing a user's feeling of actually 'being there' is a very current and juicy challenge for VR designers.

Fortunately, the VR industry can leverage a vast number of ways to observe, feel, and measure virtual presence, as well as established heuristics for increasing a sense of in-game immersion, thanks to decades of history developing sims (simulation) games. And it's working. Some of the latest interactive VR experiences work hard to lend the user a sense of visiting the realm rather than just seeing it as an observer, even if there remains the occasional sense of disembodiment from the control 'hands' or in-game avatar.

It is the design and development of this spectacular and singular, intense user focus that enables a realistic, embodied sense of presence in VR.  The more the user is engaged in the virtual world, the more they should be able to forget and leave the physical one behind. 

Yet while VR is designed to produce a sublime level of immersion, we must recognize that the real effect of this singular focus - physically blindfolding a user’s eyes from their surroundings - presents several unique challenges. Not the least of which is the potential to induce feelings of isolation, dependency, and fear, especially for women and people who may feel physically vulnerable, particularly in social and public situations.

 

Users are navigating two worlds, not one 

 Sit on these comfy pillows in Oculus Home, and you'll fall down. They're not really there!

Sit on these comfy pillows in Oculus Home, and you'll fall down. They're not really there!

While the user's brain is working hard to experience a singular virtual world, in reality the user’s physical body - including hands, limbs, and nervous system - is still navigating the physical world. When physical objects and furniture appear in the virtual world with no clear indicator of which objects are interactive, tricks and illusions are plentiful and can easily disorient users. Empirically and practically, the user is forced to reconcile two worlds at once. This results in a strange reverse culture shock, where users must manage the real physical world - the one they know how to navigate best - blindly as they learn the virtual one.

In order to understand this split-reality experience, researchers from the field of Human-Computer Interaction (HCI) must evolve quickly. Lucy Suchman's foundational work on plans and situated actions has been a guidepost for how to understand and design for user intent. This framework seeks to avoid reducing user actions to either pure behavior or pure mental planning, and instead investigates actions in situ.

But how can we leverage the in situ user framework when users' behaviors and mental activities are actually situated in two different realities?

The development of Augmented Reality - currently accessible in the form of hand-held devices, with consumer-ready HMDs still a few years away - suggests a future where the two realities can be combined and navigated as one. But until then, VR is being sold and shipped to consumers at their local Best Buy and exhibited in local movie theaters. Fortunately, lab engineers, researchers, and creatives are working hard on ways to reconcile this jarring and sometimes painful dichotomy.

In their 2015 paper A Dose of Reality: Overcoming Usability Challenges in VR Head-Mounted Displays, McGill et al. from the University of Glasgow draw on a survey of 108 VR users to propose a mixed reality solution to the usability issues of navigating and manipulating physical objects while in VR. Their study proposes an 'engagement-dependent virtual reality' concept that allows objects and people from the physical world to enter the virtual realm on an as-needed basis, as determined by the user - a novel missing-link concept that speaks to user agency and the reconciliation of the two worlds.

Top: Minimal blending (reality around user’s hands). Middle: Partial blending (all interactive objects). Bottom: Full blending (all of reality). From A Dose of Reality: Overcoming Usability Challenges in VR Head-Mounted Displays, McGill et al. 2015
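As a rough illustration of how an 'engagement-dependent' approach could work in practice, here is a minimal Python sketch. The blend levels mirror the minimal/partial/full categories captioned above, but the state fields, thresholds, and compositing are my own simplified assumptions, not McGill et al.'s implementation.

```python
# Illustrative sketch only: blend the physical world into the headset view
# on an as-needed basis, loosely inspired by McGill et al. (2015).
from dataclasses import dataclass
from enum import Enum

class BlendLevel(Enum):
    NONE = 0.0      # fully virtual
    MINIMAL = 0.3   # reality shown around the user's hands
    PARTIAL = 0.6   # all interactive physical objects shown
    FULL = 1.0      # full passthrough of reality

@dataclass
class WorldState:
    hand_near_physical_object: bool   # e.g. reaching for a real cup or keyboard (assumed signal)
    person_entered_play_area: bool    # e.g. someone walks into the room (assumed signal)
    user_requested_passthrough: bool  # explicit, user-initiated control

def choose_blend(state: WorldState) -> BlendLevel:
    """Decide how much of the physical world to blend into the headset view."""
    if state.user_requested_passthrough or state.person_entered_play_area:
        return BlendLevel.FULL
    if state.hand_near_physical_object:
        return BlendLevel.MINIMAL
    return BlendLevel.NONE

def composite(virtual_pixel: float, camera_pixel: float, level: BlendLevel) -> float:
    """Alpha-blend a passthrough camera pixel over the rendered virtual pixel."""
    alpha = level.value
    return (1 - alpha) * virtual_pixel + alpha * camera_pixel

if __name__ == "__main__":
    state = WorldState(hand_near_physical_object=True,
                       person_entered_play_area=False,
                       user_requested_passthrough=False)
    level = choose_blend(state)
    print(level.name, composite(virtual_pixel=0.2, camera_pixel=0.8, level=level))
```

The key design choice the study points to is that reality enters the virtual scene on an as-needed basis and under the user's (or the situation's) control, rather than all at once.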

 

4. Mitigating the effect of motion sickness (still!)

'Okay I'm starting to get sick.'

This is probably VR's best known issue when it comes to human factors. Virtual reality sickness is a special kind of motion sickness caused by a disparity between the user's sense and perception of movement and actual physical movement. For those who experience it (including the author of this post) it is still a very real obstacle to VR and will stop a susceptible user right in her tracks.

VR designers and engineers are using perceptual science to improve motion design in games, attempting to solve this very difficult and inherent problem.

Because VR promises a future that enhances one's ability to travel to new locations and experience new environments when one would otherwise be physically unable to do so, it is imperative that the VR community quickly address this issue so that consumers of all ages, abilities, and genders can benefit equally from this future.

In the meantime, if you are prone to motion sickness and still want to experience VR you can check out these homespun tips and tricks for how to combat VR nausea, including taking over-the-counter Dramamine.

None of these are easy problems to solve. But as commercial VR technology, development, and design accelerate it is critical that we keep at the top of mind the uniquely human and social impact of the technology, and make sure the devices are as consumer-ready as possible and accessible to everyone.

 

I tried Snap's Spectacles and here's what I learned

Upside: They look great and are tons of fun!

Downside: You still have to use your phone to take a selfie

I recently got the chance to try out Snap's hot new Spectacles for a weekend - a wearable device that is essentially a camera resting on your face - and I had a blast! I also learned a ton about a promising future of lightweight wearables and Snap, Inc., the company. Here's what I learned.

 

1. Snap's Spectacles look like fashion, not tech

The second I put the Spectacles on and looked in the mirror, I instantly felt a sense of surprise. I was wearing fashion, not tech! The design and look of these glasses is impressive. Not only do they look great on, but they also feel very solid and maybe even a little bit cool, as though you have a little piece of sunny LA resting on your face.

The little yellow rings over each lens indicating where the cameras are located can look like a sporty style or branding detail, which is to be expected with most designer sunglasses these days and didn't come off to me as particularly shocking or remarkable. 

Once I put the Spectacles on I stopped thinking of them as a techie device and more as a pair of sunglasses with 'extra'. They look and feel exactly like sunglasses. I looked all over the packaging but didn't see it listed anywhere whether they include actual UVA protection, so it's not clear if you can replace your current sunglasses with the Spectacles, which would be ideal. And as with regular sunglasses, glasses wearers will have to use contacts to use the Spectacles. 

Overall, a good look. My initial impression was that Silicon Beach is finally showing Silicon Valley how to make consumer electronics people will want to wear.

 

2. They're quick and easy to use, but no selfies

Getting started with the Spectacles was quick and relatively painless. 

First, you pair the glasses to your Snapchat app by following the OOBE instructions listed in the booklet inside the charging case (which doubles as a hard glasses case - nifty). The instructions in my booklet weren't exactly in line with the app, but after poking around in the app I eventually found the Snapcode ghost to stare at with the glasses on in order to pair the Spectacles.

Next I saw the new user tutorial screens, which showed me that there is only one interaction with the glasses - taking a 10-second video Snap - and one hardware button on the glasses used to do it. This simplified interaction was a relief. The Spectacles were feeling more like a high-end toy than a high-tech gadget.

I instantly tried capturing my first Spectacles video Snap and noticed the 10-second blinking 'recording lights' from behind the glasses. Once the lights were done flashing I opened Snapchat to confirm that the video automatically uploaded to the app, and I was ready to go. 

Immediately I wondered how people would use the glasses to take selfies. Then I realized the only way to do this using the Spectacles would be to take the video Snap in front of a mirror, or have your friend wear the Spectacles and take your video Snap selfie. Naturally I tried both, and the camera angle is too wide for good selfies - with no ability to zoom.

 

3. They didn't appear to make anyone angry or uncomfortable (so far)

Next I encountered my first big challenge: to venture outside and walk around San Francisco with my Spectacles on.

Given the history and reputation of previous Google Glass wearers (a.k.a. 'Glassholes'), I was nervous at first to wear the Spectacles in public. Would people notice them, think I was potentially recording them, and feel violated? As a female urban pedestrian, the last thing I want to do is put myself at risk by advertising something valuable or create unnecessary social tension!

But I was brave and went for it. Fortunately, I received only two comments all weekend, expressing general interest and asking me how the glasses work. Each time I made sure to share right away that I was not recording them (when Spectacles are recording they display a noticeable flashing light, but folks new to the glasses would not know that).

 

4. They allow you to enjoy the moment

TOTAL UNINTERRUPTED PRESENCE. This was the biggest revelation of all.

This is what a video Snap made with Spectacles looks like. When you share it in Snapchat, the image takes a rectangular shape, not circular.

Because you only tap the tactile button once on the side of the glasses to take the video Snap, I never had to look down or away, find or read a menu, open my phone, swipe around, or get otherwise distracted in order to capture a Snap. All I did was tap on my glasses while walking - the same level of interaction as if I were simply adjusting them.

After a day of using the Spectacles, their real magic became clear. They enabled me to capture a moment while never moving my eyes away from what I was looking at.

I could capture any moment while maintaining total presence and immersion in that moment and with full attention to the people around me.

The Spectacles product team consciously chose not to weigh down the glasses experience with editing, stickering, annotating, and sharing of the Snaps - all of which you can easily do later when you want to get back on your phone app. Instead you can just go about your life wearing the Spectacles and, with one button tap, capture lots of quick, spontaneous video that will look a whole lot more like real life - dare I say, reality - and not a bunch of posed and perfected images (unless of course you want to add that in later).

 

5. Snap Inc. is suggesting a future in augmented reality

As I used the glasses more and more, it became clear that Spectacles aren't just a lightweight way of capturing video, hidden in a form factor that is light-years more fashionable and accessible than we've seen before.

Suddenly, Snap, Inc. became a camera company - one that has its eyes on AR. The Spectacles demonstrate how users can leverage the benefits of technology in a way that is safely and fully integrated into their reality, and actually makes sense on their bodies and in a social context.

These glasses are just starting to break down the technology wall separating our physically lived lives from our documented, curated digital lives. Snap's Spectacles clearly suggest a future at the intersection of good user experience and enhanced, even augmented, reality - so far, in a much cooler-looking head-mounted display.

 

This is NOT a sponsored post. These words reflect the researcher's own experiences and humble opinion.