By: Adam Debreczeni
There’s a reason the AR experiences you’ve used are siloed. Having multiple users interact and collaborate in real time is a big technical and design challenge. Current devices are unaware of their surroundings other than the basic planes they detect, and don’t know the position of other devices. Every time you start an AR session, everything resets. These are big problems.
I’ve always felt that the tools with the most impact allow people to collaborate and share ideas. After joining Mapbox, I wanted to tackle this. Our team has created the first multi-user AR experience, and it’s built using the Maps SDK for Unity.
Starting with physical tools
Years ago, if you wanted to plan a trip with a friend, you would roll out a paper map. When you highlighted a route or pinned places, it was immersive and tactile. Our entire lives we’ve been training our hands to engage with the physical world, and we didn’t want to abandon those innate interactions as we designed a UI for AR. We wanted the experience to feel immediately familiar, like picking up a highlighter and annotating a map.
This led us to some challenging design explorations:
- What does it look like to annotate a digital map with a friend in real time?
- If you’re together, how do you have more than one input device?
- If you’re in a different city, how do you share both of your screens?
- How do you manage the limitations of 2D annotation tools on a flat surface like a phone? It’s difficult to be precise with your thumbs.
Multiple people in AR
Apple’s ARKit and Google’s ARCore make it possible to build AR experiences with everyday devices and distribute them to billions of people. However, their tracking was a big obstacle for the collaborative part of our demo: each device tracks itself in its own coordinate system and has no idea where the other devices are.
Our solution was to have each device report back which plane it detected. Using the distance, angle, and position of the selected plane, we can build a 3D model of the device’s position relative to that plane. When we share this data between devices, each one knows where it is relative to the others.
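The post doesn’t include the implementation, but the idea can be sketched in Unity C#. Assuming both devices anchor to the same physical plane, each one expresses its camera pose in that plane’s coordinate frame before sharing it; the names here (planeTransform, SharedPlaneAnchor) are illustrative, not taken from the actual demo.

```
using UnityEngine;

// Hypothetical sketch: express the device (camera) pose in the coordinate
// frame of the plane both users selected, so it can be shared and replayed
// on another device that anchored to the same physical plane.
public static class SharedPlaneAnchor
{
    // Pose of this device relative to the detected plane.
    public static Pose DevicePoseInPlaneSpace(Transform planeTransform, Transform deviceCamera)
    {
        Vector3 localPos = planeTransform.InverseTransformPoint(deviceCamera.position);
        Quaternion localRot = Quaternion.Inverse(planeTransform.rotation) * deviceCamera.rotation;
        return new Pose(localPos, localRot);
    }

    // Reconstruct a remote device's world pose from its shared plane-relative pose.
    public static Pose RemotePoseInWorldSpace(Transform planeTransform, Pose remotePlanePose)
    {
        Vector3 worldPos = planeTransform.TransformPoint(remotePlanePose.position);
        Quaternion worldRot = planeTransform.rotation * remotePlanePose.rotation;
        return new Pose(worldPos, worldRot);
    }
}
```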
We used the Maps SDK for Unity to anchor the 3D model of the devices to a map. Using the SDK, we projected the map and displayed points of interest that the players (represented by astronauts) can explore.
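As a rough sketch of what that anchoring can look like with the Maps SDK for Unity, an avatar can be pinned to a latitude/longitude and converted into the map’s world space every frame. The GeoToWorldPosition call is written from memory of the SDK, so treat the exact signature as an assumption, and the AvatarOnMap component is purely illustrative.

```
using Mapbox.Unity.Map;   // Maps SDK for Unity (namespace as assumed here)
using Mapbox.Utils;
using UnityEngine;

// Hypothetical sketch: pin a player avatar (astronaut) to a geographic
// coordinate so it stays anchored to the map projected into AR.
public class AvatarOnMap : MonoBehaviour
{
    public AbstractMap map;             // the projected map in the scene
    public Transform astronaut;         // the player's avatar
    public Vector2d latitudeLongitude;  // where the player is on the map

    void Update()
    {
        // GeoToWorldPosition converts lat/lon into the map's Unity world space.
        astronaut.position = map.GeoToWorldPosition(latitudeLongitude, true);
    }
}
```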
When we build the demo on a phone, we can experience this in AR.
Because the communication is handled by a server, this demo also works when the players are in different places.
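The post doesn’t describe the wire format, but a minimal sketch of what the server might relay is a small serializable pose message, encoded with Unity’s built-in JsonUtility; the PoseMessage shape and the relay itself are assumptions, not the demo’s actual protocol.

```
using UnityEngine;

// Hypothetical sketch: the plane-relative pose from above, flattened into a
// small message a relay server can forward to the other player.
[System.Serializable]
public struct PoseMessage
{
    public string playerId;
    public Vector3 position;     // position in shared-plane space
    public Quaternion rotation;  // rotation in shared-plane space
}

// JsonUtility ships with Unity; the transport (WebSocket, UDP relay, etc.)
// is left to the project and isn't shown here.
public static class PoseWire
{
    public static string Encode(PoseMessage msg) => JsonUtility.ToJson(msg);
    public static PoseMessage Decode(string json) => JsonUtility.FromJson<PoseMessage>(json);
}
```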
3D interactions
With ARKit and ARCore, the device is both the screen and the controller. To select a point-of-interest on the map, it seems intuitive to tap where it is on the screen. Translating a tap on that flat surface into a laser-pointer-like projection into the 3D environment is called ray-casting. Traditionally, that method works well with a precise controller, but fingers on a phone screen don’t offer the same level of precision.
We expanded the concept of ray-casting by projecting a cone whose area increases with distance instead of a laser of fixed size.
This allows for precise targeting when a virtual object is close and forgiving targeting when it is far away.
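This isn’t the demo’s actual code, but a minimal sketch of the cone-cast idea in Unity C# looks like this: instead of requiring the ray from the tap to hit a target exactly, accept any target whose direction is within a fixed angle of the ray, which makes the acceptance radius grow with distance. The ConeCast name and the 5° default are illustrative assumptions.

```
using UnityEngine;

// Hypothetical sketch: pick the point of interest closest to the axis of a
// cone cast from the tap, rather than requiring an exact ray hit.
public static class ConeCast
{
    public static Transform PickTarget(Ray tapRay, Transform[] targets, float coneAngleDegrees = 5f)
    {
        Transform best = null;
        float bestAngle = coneAngleDegrees;

        foreach (var target in targets)
        {
            Vector3 toTarget = target.position - tapRay.origin;
            float angle = Vector3.Angle(tapRay.direction, toTarget);
            if (angle < bestAngle)   // inside the cone and closer to its axis than previous candidates
            {
                bestAngle = angle;
                best = target;
            }
        }
        return best;
    }
}

// Usage: Ray ray = Camera.main.ScreenPointToRay(touchPosition);
//        Transform poi = ConeCast.PickTarget(ray, pointsOfInterest);
```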
In the coming weeks, we will expand this demo even further, and plan to release it as an open-source project. If you want to incorporate maps and location into your AR projects, download the Maps SDK for Unity and explore our tutorials to get started.
If you want to work on projects like this one, we’re hiring.
Special thanks to Lauri Rustanius and Jim Martin for the hard work on this.
Asset sources: Astronaut, Pin, Salt & Pepper, Cocktail, Coin, Rooster