Wonderscope

Wonderscope is an iOS app for kids that uses Augmented Reality to transform ordinary spaces into extraordinary stories.

I worked extensively on experience design and visual design, taking charge of the 2D menu systems and participating in design workshops to determine the best onboarding experiences for our AR components. I am currently in charge of all UX and visual design for new features launching on Wonderscope in the future.


Key Interactions

Wonderscope is a storytelling app: it has kids read out loud and explore their environment by interacting with characters in augmented reality stories.

Reading Out Loud

The core interaction is reading: kids must read lines out loud in order to continue the story they are experiencing. These lines are written to place the kid within the story as a character.

Exploring the Environment

Some interactions in the stories require kids to look around their environment in order to continue the story. These are often find-the-object interactions: objects are scattered around the environment, and kids have to find and tap on them in order to proceed.


Onboarding Experience

Story as Driver

Unlike many onboarding flows, we expanded rather than limited the number of steps, turning the flow into a narrative sequence.

This was due to a few reasons:

  • Our users are young kids who often have to ask their parents before accepting any permission requests. Since these permissions are essential to using the app, we needed to empower kids with the knowledge to explain to their parents why granting them matters.
  • A narrative experience makes the process enjoyable for kids, as opposed to another tedious step they need to do in order to access the content.
  • A narrative experience allowed us to provide a character that could be a guiding figure to the kid. This guiding figure is important because it allowed our instructions, both in onboarding and in later AR set up, to feel more friendly rather than authoritative. We named him Blob.

Response and Results



AR Set Up

This is the process of getting AR set up within the space. It involves getting the users to scan their environment so we can detect feature points around them in order to place content within their space.
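
The scanning step described above maps onto standard ARKit session set up. The sketch below shows the general shape, assuming an `ARSCNView` named `sceneView`; it is an illustration of the technique, not Wonderscope's actual code.

```swift
import ARKit

// Minimal sketch of starting an AR session that scans the room for
// feature points and detects horizontal planes to place content on.
func startARSetUp(on sceneView: ARSCNView) {
    let configuration = ARWorldTrackingConfiguration()
    // Ask ARKit to detect horizontal planes (floors, tables) as the
    // user scans; detected planes arrive as ARPlaneAnchor objects.
    configuration.planeDetection = .horizontal
    // Show the raw feature points while scanning, useful during
    // development to see what the camera is actually picking up.
    sceneView.debugOptions = [.showFeaturePoints]
    sceneView.session.run(configuration)
}
```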

Follow the Storycopter

The first time they ever try to set up AR, kids are prompted to follow the Storycopter, a small hovercraft that gets them to wave their phone in a specific motion and at a specific speed to map their environment. The step also shows that they can explore the space around them, as opposed to staying still while viewing the stories.

Finding a Plane and Placing the Content

At this point, the user is told to find a flat surface to put their story on. This is very forgiving—users can easily find planes that may not be the ideal size or shape—but we found that users did not mind visual bugs (e.g. weird clipping issues) as much as they minded not being able to find the surface to get the story started.

Once a plane is found, usually by the user moving the device around in a similar manner to following the Storycopter, the user taps on the surface. A portal opens on the plane as the Storycopter comes in to land. The Storycopter flies into the portal, the portal closes behind it, and the story starts.
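
The tap-to-place step can be sketched with ARKit's plane hit testing. The function and anchor name below are illustrative assumptions; only the ARKit calls themselves are real API.

```swift
import UIKit
import ARKit

// Sketch of handling the tap that places the story: only accept taps
// that land on a detected plane's extent, so the portal opens on a
// real surface rather than floating in space.
func handleTap(_ gesture: UITapGestureRecognizer, in sceneView: ARSCNView) {
    let tapLocation = gesture.location(in: sceneView)
    let results = sceneView.hitTest(tapLocation, types: .existingPlaneUsingExtent)
    guard let hit = results.first else { return }
    // Anchor the story at the tapped point; the renderer can then
    // attach the portal and Storycopter landing animation to it.
    let storyAnchor = ARAnchor(name: "storyPortal", transform: hit.worldTransform)
    sceneView.session.add(anchor: storyAnchor)
}
```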

Error States

These are the problems that can arise which will prevent a user from finding a plane to place the content on:

  • The user is moving the phone too slowly or too quickly for the camera to scan the room. We fix this by showing an animation of Blob moving a phone side to side at a speed the user can match.
  • There are not enough feature points to figure out where the plane is. This is usually due to a lack of flat surfaces or a lack of pattern on the surface itself (e.g. shiny, single-colored surfaces).
  • There isn't enough light. Low light reduces the contrast in the scene, making it harder for the camera to pick up feature points.
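
These error states correspond closely to ARKit's tracking-state callback. The sketch below shows how they might be surfaced in kid-friendly terms; the message strings and the `showBlobHint` helper are hypothetical, while the enum cases are ARKit's own.

```swift
import ARKit

// Hypothetical helper: in the app this would drive Blob's speech UI.
func showBlobHint(_ message: String) {
    print(message)
}

// ARSessionObserver callback: map limited-tracking reasons to hints.
func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) {
    switch camera.trackingState {
    case .limited(.excessiveMotion):
        // Moving too quickly: cue Blob's side-to-side animation so the
        // kid can match an appropriate speed.
        showBlobHint("Whoa, slow down! Move your phone like Blob does.")
    case .limited(.insufficientFeatures):
        // Too few feature points: plain or shiny surfaces, or low light.
        showBlobHint("Let's find a brighter spot with a patterned surface.")
    case .limited(.initializing), .limited(.relocalizing):
        showBlobHint("Keep looking around! We're still getting set up.")
    default:
        break
    }
}
```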

Possible Improvements

We've found that AR set up causes the largest drop-off in users; many don't make it past the follow-the-Storycopter step. Even among kids who passed the Storycopter stage, user testing showed that many still needed a parent's help finding a plane. Because of this, we came up with a few solutions:

  • The solution we implemented quickly was to reduce the number of feature points necessary to place a plane. Because it required less space and responded more immediately, kids found it easy to start the set up. It does result in more occlusion errors, but we found that to be a good trade-off. This solution is also only a band-aid: the user flow is still not easily understood by kids, which makes it harder for them to recover from errors, and it addresses neither the problems they actually have with set up nor the Storycopter problems.
  • Removing the Storycopter. This trades away a push for kids to explore their space in exchange for ease of use, so the stories themselves would need to pick up the slack and push kids to fully explore their space.
  • Rethinking the entire AR set up. This is ultimately the solution that we will be going with—there are a few ideas already proposed—but it requires many resources and may not be released in the near future.
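
The quickly-implemented fix can be approximated at the app level, since ARKit exposes the raw feature points it has detected in each frame. The threshold below is illustrative, not the shipped value, and the function name is an assumption.

```swift
import ARKit

// Illustrative threshold, not Wonderscope's actual value. Lowering it
// makes placement feel more responsive at the cost of more occlusion
// errors, as described above.
let requiredFeaturePoints = 20

// Returns true once the current frame has enough detected feature
// points to let the kid try placing the story.
func readyToPlace(in session: ARSession) -> Bool {
    guard let frame = session.currentFrame,
          let cloud = frame.rawFeaturePoints else { return false }
    return cloud.points.count >= requiredFeaturePoints
}
```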

More Images


Colophon

Typefaces used are Caudex and Anaheim.
Designed and coded by me, with extensive help from Stack Overflow.


elen sila lumenn' omentielvo