Virtual Reality

UI/UX, Systems

This is a selection of experiments into virtual reality exploring how design may function in a world without screens.

Questions

  • What are the limitations of current UI practices when they are translated into VR?
  • What is a good UI in VR?
  • How can we create the feeling of presence across different VR systems?

Outcome

A Google Cardboard VR experience: a slideshow of experiments that demonstrate the limitations of current VR technologies.
What does it look like to translate current UI practices into VR?

FuckVR

Summary

Group project with Atty Eleti, Cecilia Boden, and Nate Parrot. My role covered ideation, asset creation, and presentation.

See the demo
Visit the website

We recommend viewing the demo through a Google Cardboard where possible. Please note that we do not recommend viewing it if you are sensitive to visual stimuli.

What is a good user interface in virtual reality?

First we had to define what virtual reality is good at:

However, these strengths also had their drawbacks:

Design Trends

What are current design trends that we could see being adopted into virtual reality? What kinds of experiences would these trends create?

Virtual reality as it exists now is in the stone age of interaction design. Because the medium is so new, designers haven't yet worked out the best ways to design for it. More importantly, since the general public does not have access to tools that make it quick and easy to build a virtual reality experience, we don't yet have insight into how the medium might transform once it becomes ubiquitous, much as no one could have predicted internet culture when the internet was first created.

To get at this, we looked at common UX patterns or research methods that we see in screen-based interfaces. We then tried to translate these into a VR environment:




Intrusive advertisements. To close one, look at the cross.
In UX there has been a trend of tracking eye movements: they reveal when a person is paying attention to a certain part of a design, often signaling the user's intent and concentration. Why not create an object that requires you to physically look at its interactive element in order to manipulate it?
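
As a rough illustration (not the project's actual code), here is how such a gaze-to-close interaction might be sketched in A-Frame, the library used later in this project; the entity IDs, markup, and fuse timing are assumptions made for the example.

```js
// Sketch only. Assumes A-Frame is loaded and the scene contains a gaze cursor and an ad panel, e.g.:
//   <a-entity camera>
//     <a-cursor fuse="true" fuse-timeout="1500" raycaster="objects: .gazeable"></a-cursor>
//   </a-entity>
//   <a-plane id="ad-panel" position="0 1.6 -2">
//     <a-image id="ad-close" class="gazeable" gaze-to-close src="#cross"></a-image>
//   </a-plane>
// IDs, markup, and the 1.5 s dwell time are illustrative assumptions.
AFRAME.registerComponent('gaze-to-close', {
  init: function () {
    // Once the viewer has dwelled on the cross long enough, the fuse cursor emits 'click'.
    this.el.addEventListener('click', function () {
      var panel = document.querySelector('#ad-panel');
      if (panel && panel.parentNode) {
        panel.parentNode.removeChild(panel);   // dismiss the intrusive ad
      }
    });
  }
});
```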

The future of social media newsfeeds?
What does the never-ending stream of statuses look like in virtual reality? Now that you aren't restricted by a screen, you have more room to view all of these status updates.
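
As a loose sketch of that extra space (entirely illustrative, not project code), status cards could be laid out in a ring around the viewer rather than in a vertical scroll; the statuses, radius, and card sizes below are placeholder assumptions.

```js
// Sketch only. Assumes A-Frame is loaded and this script runs after the <a-scene> element exists.
// The status texts, radius, and card dimensions are placeholder assumptions.
var statuses = ['Just had lunch!', 'Look at my cat', 'New job!', 'Throwback photo', 'Hot take'];
var scene = document.querySelector('a-scene');
statuses.forEach(function (text, i) {
  var angle = (i / statuses.length) * 2 * Math.PI;          // spread the cards evenly around the viewer
  var card = document.createElement('a-plane');
  card.setAttribute('width', 1.2);
  card.setAttribute('height', 0.6);
  card.setAttribute('color', '#ffffff');
  card.setAttribute('text', { value: text, color: '#000000', align: 'center' });
  card.setAttribute('position', { x: 3 * Math.sin(angle), y: 1.6, z: -3 * Math.cos(angle) });
  card.setAttribute('rotation', { x: 0, y: -angle * 180 / Math.PI, z: 0 });   // turn each card to face the center
  scene.appendChild(card);
});
```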

Putting things in space, just because you can
With virtual reality, we are no longer restricted by the edges of the screen. Why not play with space and have multiple screens for media that work best on a tabular surface?

"Natural" or "intuitive" gestures
We also looked at common body language. Something we learn from a very young age is that nodding your head means ‘yes’, while shaking it from side to side means ‘no’. Why not take advantage of these common movements and make them intuitive commands for virtual reality?

Content right where you want it—right where you're looking
Technology is often marked by impractical and humor-inducing interactions. The prevalence of companies like Giphy, which advance the lexicon of the internet by making GIFs easier to search, led us to question the role of virtual reality in creating a new kind of online culture. How could this content physically surround the user?

Visually disorienting—a little like weird viral sites
With the prevalence of viral sites like Staggering Beauty, what kinds of weird adventures could you have in VR?
How can you quickly build an accessible VR experience?

WebVR

Identity

A blue male figure and a pink female figure rotating and merging into each other. Screens appear against a blue background and faces come flying into view.

My first VR scene for the Cardboard, built to learn A-Frame, a JavaScript library for making VR scenes for the web. I coded and created everything apart from the 3D models.

The content was pieced together from pre-made 3D forms and footage I already had, but I played with texture, overlapping shapes, and movement in order to create a cohesive scene.
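
As an illustrative sketch (not the scene's actual source), movement in an A-Frame scene like this can be driven by a small custom component; the component name, speed, and markup below are assumptions.

```js
// Sketch only. Assumes A-Frame is loaded via its <script> tag and the component is attached to
// an entity, e.g. <a-entity gltf-model="#figure" slow-spin="speed: 10"></a-entity>.
// The model reference and default speed are illustrative.
AFRAME.registerComponent('slow-spin', {
  schema: { speed: { type: 'number', default: 10 } },   // degrees per second
  tick: function (time, delta) {
    var rotation = this.el.getAttribute('rotation');
    rotation.y += this.data.speed * (delta / 1000);      // rotate smoothly, frame-rate independent
    this.el.setAttribute('rotation', rotation);
  }
});
```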

View Experience
This experience is best viewed through a Google Cardboard. Press the icon on the lower right hand side to change views.

Input Methods

Because a Google Cardboard lacks controllers, a recurring problem is how people direct their experience. Here are a few explorations of possible ways of creating input:

Timer

A timer can be used to select an item after it has been 'hovered' over for a set period of time. This method speeds up the pacing: a timer is attached to almost everything, and it discourages users from lingering on details in case they accidentally select something they don't intend to. A minimal sketch of this follows the lists below.

Pros:
  • It's intuitive: the timer starts simply from users doing what feels natural, looking around the space.
Cons:
  • Easy to select something unintended
  • A system will be needed to cue the viewer that items can be interacted with
Use cases:
  • A scene where you want to encourage the user to look through things quickly—a mystery story, a horror story
  • Where impulse decisions lead to fun outcomes
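
A minimal sketch of timer-based selection in A-Frame, assuming a Cardboard-style gaze cursor; the two-second fuse, the class name, and the highlight cue are illustrative choices rather than project code.

```js
// Sketch only. Assumes A-Frame is loaded and the camera carries a fuse (timer) cursor, e.g.:
//   <a-entity camera>
//     <a-cursor fuse="true" fuse-timeout="2000" raycaster="objects: .selectable"></a-cursor>
//   </a-entity>
// Selectable items get class="selectable" plus this component. Timings and cues are illustrative.
AFRAME.registerComponent('timer-select', {
  init: function () {
    var el = this.el;
    // Cue the viewer that the timer has started by dimming the item while the gaze rests on it.
    el.addEventListener('mouseenter', function () { el.setAttribute('material', 'opacity', 0.6); });
    el.addEventListener('mouseleave', function () { el.setAttribute('material', 'opacity', 1.0); });
    // After the fuse timeout of continuous gaze, the cursor emits 'click' and the item is selected.
    el.addEventListener('click', function () {
      el.setAttribute('material', 'color', '#4caf50');
      el.emit('selected');                               // scene logic can listen for this event
    });
  }
});
```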

Gestures

A specific gesture may be required in order to select something in the scene; a sketch of one such gesture follows the lists below.

Pros:
  • Depending on the specific movement, it's hard to select something you don't intend to
Cons:
  • Excessive movement of the head can be painful after a while
  • Ease of learning is reduced: it's an unnatural system for a user to learn, and heavy onboarding will be needed for it to be useful
Use cases:
  • Where you want the user to be very intentional and focused
  • Sparingly, in order to prevent injuries
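
A rough sketch of one such gesture, a head nod, detected from the camera's pitch in A-Frame; the thresholds and timing window are guesses for illustration, not tuned values from the project.

```js
// Sketch only. Assumes A-Frame is loaded and the component sits on the camera entity:
//   <a-entity camera look-controls nod-select></a-entity>
// A nod counts when the head pitches down past a threshold and returns within a short window.
// The threshold and window values are illustrative assumptions.
AFRAME.registerComponent('nod-select', {
  schema: {
    threshold: { type: 'number', default: 15 },   // degrees of downward pitch that starts a nod
    window: { type: 'number', default: 600 }      // ms allowed for the down-and-up motion
  },
  init: function () {
    this.downAt = null;
  },
  tick: function (time) {
    var pitch = this.el.getAttribute('rotation').x;      // negative when looking down
    if (this.downAt === null && pitch < -this.data.threshold) {
      this.downAt = time;                                // head dipped: start timing the nod
    } else if (this.downAt !== null && pitch > -5) {
      if (time - this.downAt < this.data.window) {
        this.el.sceneEl.emit('nod');                     // head came back up in time: treat as a selection
      }
      this.downAt = null;
    }
  }
});
```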

Natural Movement

Movement can be used to progress naturally through a story or to interact with it. A slight tilt of the head can change the entire scene; a sketch follows the lists below.

Pros:
  • More intuitive for the user—easy to learn
  • Very efficient and quick
Cons:
  • Easy to select things unintentionally
  • More attention must go into designing these scenes to make sure the interactions are actually feasible for a user to perform
  • You cannot look at something without interacting with it if it is selectable
Use cases:
  • A scene that never stops moving—a moving vehicle, a painting scene
  • A scene where the user looks around a lot—a guided tour of a space, an exploration into a vast natural scene
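
A minimal sketch of movement as input in A-Frame, where the sky darkens as the head tilts to the side; the color mapping and 45-degree range are placeholder assumptions.

```js
// Sketch only. Assumes A-Frame is loaded, the component sits on the camera, and the scene has a sky:
//   <a-sky id="sky" color="#8888ff"></a-sky>
//   <a-entity camera look-controls tilt-scene></a-entity>
// The color mapping and tilt range are illustrative assumptions.
AFRAME.registerComponent('tilt-scene', {
  tick: function () {
    var roll = this.el.getAttribute('rotation').z;        // side-to-side head tilt, in degrees
    var t = Math.min(Math.abs(roll) / 45, 1);             // 0 when level, 1 at a 45-degree tilt
    var shade = Math.round(255 * (1 - t));                // darken the sky as the tilt grows
    var sky = document.querySelector('#sky');
    if (sky) {
      sky.setAttribute('color', 'rgb(' + shade + ', ' + shade + ', 255)');
    }
  }
});
```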

Findings:

  • There is an inverse relationship between how easy an input makes selection and how few mistakes it produces: the easier it is to select, the more unintended selections occur
  • Each of these input methods may be appropriate depending on the situation
  • Input methods shape how the user perceives the pace of the experience
What are the possibilities of higher-end VR systems?

Presence in an Oculus

Presence

A gif of a sea of glass faces that move away from the viewer as the viewer approaches.

A VR scene in Unity, made with SteamVR and VRTK for the Oculus. It explores the idea of presence in VR: how a virtual world can make you feel more present by reacting to you. The scene is composed of 3D scans, captured either with 123DCatch or with the Artec Spider™ 3D Scanner. I worked in Unity, alongside a developer, to create this scene.
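
The scene itself was built in Unity with SteamVR and VRTK; as a rough web-based analogue of the "reacting to you" behavior (not the project's code), here is an A-Frame component that makes an object retreat when the viewer approaches. The trigger radius and speed are invented for illustration.

```js
// Sketch only: an A-Frame analogue of the Unity behavior described above, not the project's code.
// Assumes A-Frame is loaded and the component is attached to each face entity, e.g.:
//   <a-entity gltf-model="#face" retreat-from-viewer></a-entity>
// The trigger radius and speed are illustrative assumptions.
AFRAME.registerComponent('retreat-from-viewer', {
  schema: {
    radius: { type: 'number', default: 2 },    // start retreating when the viewer is this close (meters)
    speed: { type: 'number', default: 1.5 }    // meters per second
  },
  tick: function (time, delta) {
    var camera = this.el.sceneEl.camera;
    if (!camera) { return; }
    var camPos = new THREE.Vector3();
    var objPos = new THREE.Vector3();
    camera.getWorldPosition(camPos);
    this.el.object3D.getWorldPosition(objPos);
    var away = objPos.sub(camPos);                        // direction from the viewer to this object
    if (away.length() < this.data.radius) {
      away.y = 0;                                         // keep the retreat horizontal
      away.normalize().multiplyScalar(this.data.speed * (delta / 1000));
      this.el.object3D.position.add(away);                // step away from the approaching viewer
    }
  }
});
```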

An image of me wearing an Oculus headset.
One problem was access to the technology: I had significant trouble finding time on an Oculus or HTC Vive. I finally got access to an Oculus for an extended period while hacking at Hack@Brown 2017.
An image of two merged skulls created by manipulating 3D scans.
I was drawn to 3D scanning as a quick way of creating prototypes without spending long stretches of time 3D modelling. I also found that I genuinely enjoyed the quirks of 3D scanning: the errors and stray scanned geometry became happy accidents that were visually exciting.
What are the possibilities of lower-end VR systems?

Presence in Cardboard

Painting

A gif of a grassy canvas against a dirt background. A remote is moving in front of it and pineapples explode from it when pressed.

This project explores technology that allows more immersive VR experiences while remaining accessible to the masses. You need two smartphones and a Google Cardboard, a much lower cost than buying a headset and a computer solely for VR. The project uses the controller phone's accelerometer to determine the location and tilt of the remote on the screen.

Built on Dayframe by Ryan Betts, which uses WebSockets to turn another phone into a controller. Pineapple model by Chloe Karayiannis.
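
Dayframe handles this wiring in the actual project; purely as a generic sketch of the idea (not Dayframe's API), a controller phone could stream its orientation to the headset phone over a plain WebSocket, with the relay URL and message shape invented for illustration.

```js
// Sketch only: a generic version of the phone-as-controller idea, not Dayframe's actual API.
// The relay server URL, message format, and entity ID are invented for illustration.

// --- On the controller phone ---
var controllerSocket = new WebSocket('wss://example.com/relay');   // hypothetical relay server
window.addEventListener('deviceorientation', function (event) {
  if (controllerSocket.readyState === WebSocket.OPEN) {
    controllerSocket.send(JSON.stringify({
      alpha: event.alpha,   // compass heading
      beta: event.beta,     // front-to-back tilt
      gamma: event.gamma    // side-to-side tilt
    }));
  }
});

// --- On the headset phone ---
// Assumes an A-Frame scene with <a-entity id="remote"></a-entity> representing the virtual remote.
var headsetSocket = new WebSocket('wss://example.com/relay');      // same hypothetical relay server
headsetSocket.addEventListener('message', function (event) {
  var tilt = JSON.parse(event.data);
  var remote = document.querySelector('#remote');
  if (remote) {
    // Map the controller phone's tilt onto the virtual remote's rotation.
    remote.setAttribute('rotation', { x: tilt.beta, y: tilt.alpha, z: -tilt.gamma });
  }
});
```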

Dayframe II (WIP)

Using the technology from the Dayframe project, a friend and I are currently working on a project that aims to create more presence within the Cardboard environment. A version of the project (probably not up to date) can be found on GitHub.

Current explorations: