
Uber Advanced Technologies Group engineer Natalie Afonina joins as HD Maps PM


By: Dan McSwain

Autonomous vehicles require a level of precision in maps without which a fully autonomous future isn’t realistic. This is the basic concept behind the HD map, and it’s the reason Natalie Afonina joined Mapbox.

Previously, Natalie has taken on building bolting mechanisms for spacecraft, ice climbing where falling is not an option, and image processing for SAE Level 4 and Level 5 autonomous vehicle research.

Natalie’s ethos is summarized elegantly here:

“I’m drawn to challenges. I’m drawn to big ideas. I’m drawn to things where people say, ‘This is nearly impossible.’ or ‘How are you going to get around that constraint?’ And I think that everything’s possible. We’re going to solve the problem, we just need to find the right combination of creative approaches. What’s feasible, cost-effective, gets us to the end goal, and actually contributes to the world in some way? That’s one of the main reasons I chose to come to Mapbox, because of the opportunity to think big and have the support to implement. It’s not just drafting white papers and saying, ‘Oh this is a theoretical concept of how you could do this,’ but actually building things one detailed small step at a time to achieve the larger vision.”

At the core of Natalie’s passion lies an insistence that the map is the fundamental component of autonomous movement capabilities, and that a linear approach to building that map is unlikely to succeed. This perspective led her to Mapbox, where she says a leveraged approach will allow the HD maps team to take on challenges that are both immediate and tangible while building the foundation for a map moonshot to come.

Natalie is confident and opinionated, and her perspective on challenges is inspirational. Enjoy this Q&A, then sign up to hear more from Natalie in an upcoming episode of On Location, “Robots need maps, just like humans do” on October 22nd.

Your work and your personal pursuits seem to require intense focus. So how do you mentally prepare to tune in? Do you have a ritual?

My physical and mental activity of choice is ice climbing, which, compared to traditional rock climbing, requires another level of focus. The only rule is, “do not fall.” When you climb rock, it’s okay to fall occasionally. Not so when scaling frozen waterfalls with sharp tools in your hands and crampon claws on your feet. If you fall in that terrain, you’re lucky to escape without life-altering injuries or worse. People assume that I’m an adrenaline junkie, but friends close to me know I’m the exact opposite of that. I’m a planner, risk-mitigator, and control freak who plans every minute detail, from my playlist jams to how many calories I can get away with bringing. I don’t feel fear or waves of adrenaline when I climb, because if I do, that means something is wrong. So both in my climbing and at work, I focus on zooming in to pay attention to the micro-details and zooming out to keep an eye on the bigger picture. You need both to be successful.

You’re obviously drawn to the challenge. What’s a problem that you looked at, or maybe even still look at, and say, “This is really hard?”

One of the hardest engineering challenges for me was building a specialized device for a multi-year polar ice research project.

I was working with a polar ice researcher at my university and designed a device that she was going to use in her research up in the Arctic. She was going to collect and study ice cores to make more accurate climate models. It’s a topic near and dear to my heart.

We were up in Barrow, Alaska (the northernmost point in the US) for two months. It’s very dark and very cold. It got down to -55 degrees F, and we were doing field work every single day. I was the sole engineer on the team, so the research scientists were relying on me for the success of their project and their multi-year grant. The research was hinging on my devices being able to function in these extreme polar environments. I had taken those requirements into account when I was designing the systems, but it’s hard to design electrical systems to withstand such cold. The plastic sheath around the wiring was so fragile at -50F that it would crack and crumble in my hands, and I had to be careful not to handle metal without gloves or else I’d get frostbite. You had to be very delicate with all the electronics, which is not an easy task while bouncing on a snowmobile over ice floes.

Polar bears were also a legitimate threat, to the point where we hired a local polar bear guard to accompany us. We had an Inuit hunter who went out with us every day onto the Arctic sea ice. His one job was to stand on an ice floe with a gun and watch out for polar bears while we were drilling our ice cores.

Tell us about some of the hard challenges you faced in the world of autonomous robotics during your last role at Uber Advanced Technologies Group.

I’ve spent the last two years in the Level 4 / Level 5 autonomous robotics world. I came to Mapbox looking for new and different approaches to some hard problems, because what I noticed was that everyone’s mapping strategy was the same and short-term focused. No one was doing anything different, and they all thought they had some top-secret sauce, but I haven’t seen any evidence of that. It’s mostly the same people going round and round among the same ten companies, so the approaches felt stale, and I saw an opportunity to think through the challenges through a different lens: a mapping lens rather than a standard robotics one.

I believe the classic approach of creating highly precise HD maps for Level 4–5 autonomous vehicles (AVs) works for small geofenced geographies, but not when we’re talking about driving on every road in North America. The current approach is not going to scale linearly, because the mapping complexity and challenges balloon at scale.

Let me walk you through “HD map creation 101” to illustrate my point: HD maps are currently created by turning an expensive self-driving car into a mapping vehicle equipped with fancy sensors. That car then drives an area of interest and maps it out to sub-10-centimeter precision (meaning you know where every static object, every garbage can, stop sign, lamp post, pothole, and bike rack is located to within the width of your hand). Then there’s a multi-week, largely manual processing and QA step to stitch this data together. If you’re lucky, and there wasn’t some new construction on the road you were driving (or else you have to start over), a few weeks later you finally have an HD map you can put onboard a test vehicle.
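To make “sub-10-centimeter precision” concrete, here’s a toy check of what that tolerance means for a single surveyed object. This is a minimal sketch; the `StaticObject` class and `within_hd_tolerance` function are illustrative names of mine, not any vendor’s actual mapping API.

```python
from dataclasses import dataclass

@dataclass
class StaticObject:
    kind: str    # e.g. "stop_sign", "lamp_post"
    x_m: float   # position in a local map frame, in meters
    y_m: float

def within_hd_tolerance(measured: StaticObject, mapped: StaticObject,
                        tolerance_m: float = 0.10) -> bool:
    """Sub-10 cm precision means a re-survey of a mapped object should
    land within roughly a hand's width of the stored position."""
    dx = measured.x_m - mapped.x_m
    dy = measured.y_m - mapped.y_m
    return (dx * dx + dy * dy) ** 0.5 <= tolerance_m

# A stop sign surveyed on two passes, a few centimeters apart:
first_pass = StaticObject("stop_sign", 12.50, 3.40)
second_pass = StaticObject("stop_sign", 12.56, 3.44)
print(within_hd_tolerance(second_pass, first_pass))  # True
```

Holding every garbage can and lamp post in a city to that tolerance is what makes the QA step so manual and so slow.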

This approach scales to five miles, ten miles, 50 miles…which is the scale these AV companies are operating at right now. To keep these maps updated, throwing people at the problem is a viable option today because the scale we’re talking about is small. This is why I don’t think HD mapping startups have found product-market fit: the ten or so Level 4–5 AV companies out there a) want custom control of their map, b) don’t want a pre-packaged HD map with a bow on top, and c) can throw humans at the problem for the near future.

But when you start thinking about how this process will scale if you want to drive every road in the United States, you quickly run into issues of coverage, accuracy, and recency. If your map is both expensive and takes several weeks to create, the world will have changed enough in that time to render the map largely unusable, especially in urban environments where, as we know, construction and road networks change day in and day out. There’s a great video of how Japan’s ground shifted in all three dimensions by several meters during the 2011 earthquake. If you had self-driving cars using HD maps the way AV systems currently use them, your entire fleet would be grounded until you remapped all of Japan.
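The recency problem can be seen with a back-of-the-envelope staleness model. The change rate and pipeline delay below are made-up assumptions purely for illustration, not measured figures.

```python
def stale_fraction(weekly_change_rate: float, pipeline_weeks: int) -> float:
    """Fraction of road segments expected to have changed on the ground
    by the time the freshly processed map ships."""
    return 1.0 - (1.0 - weekly_change_rate) ** pipeline_weeks

# If, say, 1% of urban road segments change per week and processing takes
# four weeks, a noticeable slice of the map is already wrong on arrival:
print(round(stale_fraction(0.01, 4), 4))  # 0.0394
```

And that is before any tectonic surprises: a single large earthquake invalidates the whole region at once, not a rolling few percent per week.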

Example of how Japan’s ground shifted in all 3 dimensions after the 2011 earthquake.

I don’t think many people understand that the HD map is as core to the functionality of Level 4–5 autonomous cars as the LIDAR sensor is. As humans, we use maps to give us general guiding directions. Autonomous vehicles use maps very differently. We even call it the ‘map sensor’, because the data it encodes is relied upon by the AVs as ground truth. They offload a lot of computation onto the map so that the AV doesn’t have as much stuff to process in real time. It greatly simplifies hard autonomy challenges. If you pulled the map as a resource from any of these Level 4–5 vehicles today, they would not function.
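The “map sensor” idea above can be pictured as a lookup: the planner pulls static context (signs, speed limits) from a precomputed map keyed by road segment, and live perception only has to supply the dynamic world. All the names and data here are hypothetical, not any production AV stack’s interface.

```python
# segment_id -> static features encoded ahead of time (hypothetical data)
HD_MAP = {
    "seg_42": {"stop_signs": [(12.5, 3.4)], "speed_limit_mph": 25},
    "seg_43": {"stop_signs": [], "speed_limit_mph": 35},
}

def plan_inputs(segment_id: str, detected_dynamic: list) -> dict:
    """The planner gets static context from the 'map sensor' (treated as
    ground truth) and only the dynamic world from live perception."""
    static = HD_MAP[segment_id]
    return {"static": static, "dynamic": detected_dynamic}

inputs = plan_inputs("seg_42", ["pedestrian", "cyclist"])
print(inputs["static"]["speed_limit_mph"])  # 25
```

Pulling `HD_MAP` out of that sketch leaves the planner with no static world at all, which is the point: these systems do not degrade gracefully without the map.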

You know the debate Elon Musk is known for: “Do you need LIDAR for self-driving or not?” I think a better question is, “Do you need sub-10 cm HD maps for full self-driving or not?”

Was it in your work on Level 4 and Level 5 autonomy that you became so interested in the map, or was the map what led you to the work?

It was the work that led me to the shortcomings of current HD mapping strategies. I spent the last year building out a simulation platform, which is basically a synthetic Grand Theft Auto world where the self-driving cars could practice for their DMV driving test. I loved my work because there was always something new to learn, and it was my job to know how the system worked in its entirety. Ingesting the HD map into our simulator was a core input because the map is a fundamental component of the autonomy system. This work integrating the map into the simulator and learning more about the end-to-end autonomy system gave me a good look into how maps are used in Level 4–5 autonomy systems. The map is used everywhere, and a bunch of really hard autonomy challenges can be offloaded onto the map so the car’s perception system doesn’t have to deal with them in real time.

This is a controversial opinion, but I think the way HD maps are currently used in Level 4–5 AV development is a crutch, and not sustainable if we’re talking about being able to drive anywhere, everywhere. You can make it look like your AV system is capable of handling highly sophisticated scenarios and performing advanced computer vision tasks in real time (such as complex intersections or night driving), when in fact a lot of the difficulty of navigating such a scenario is precomputed and you’re confirming what’s already statically encoded in the map. I’m not discounting the complexity of dealing with the dynamic world (tracking humans and other cars is hard, especially when there are a lot of them and you’re moving fast), but I think we need to understand that this approach will not scale.

More of the same thing is not the answer. The current approaches seem hacky and fragile to me. I’m a systems thinker (the book Antifragile is a great overview of how I think about this topic) and when I look at how HD maps are being used in AV systems today, it yells “fragility and non-robustness.” Similar to how there is no room for error when ice climbing, there is no room for error when designing autonomous systems for transporting people. A functioning AV system shouldn’t have to depend on every minute detail of the static world being precisely mapped out with 100% accuracy.

My view is that these highly precise HD maps are a step along the way to full self-driving. However, the future of HD maps will move in the direction of crowdsourced real-time updates, a conflation of multiple data sources to achieve coverage, and autonomy systems that are robust enough to gracefully handle map imperfections without erroring out.
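One simple way to picture conflating crowdsourced observations is a robust merge: many noisy reports of the same static object are combined with a per-axis median, which shrugs off a few bad reports. This is a toy sketch of the idea, not Mapbox’s actual conflation pipeline.

```python
from statistics import median

def conflate(observations):
    """Merge noisy (x, y) reports of one object with a per-axis median,
    which is robust to a handful of outliers."""
    xs, ys = zip(*observations)
    return (median(xs), median(ys))

# Four crowdsourced reports of one stop sign; the last one is garbage:
reports = [(12.48, 3.41), (12.51, 3.39), (12.53, 3.42), (19.90, 3.40)]
print(conflate(reports))  # close to (12.52, 3.405); the outlier barely matters
```

A mean would have been dragged meters off by the bad report; the median stays within centimeters of the true position, which is the kind of graceful degradation the quote argues for.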

What drew me to Mapbox is that it has all of the building blocks and expertise necessary to build the future of HD mapping. When I saw the Vision SDK in action, I was blown away. There are only a handful of companies I’ve seen match its level of sophistication and capabilities. That’s when I knew we had something unique. We have the talent and proven tech chops to build systems that deal with the hard questions of: “How do you stitch together 2D and 3D views of the world? How do you conflate data? How do you merge satellite, aerial, street views, telemetry traces with real-time information all into one data source that you then serve to a bunch of endpoints?” The team understands the difficulties of mapping, stitching different data sources, rendering, serving the data, performing live updates, creating new standards and schemas and all of the other nuances and complexities of dealing with live crowdsourced location data. This is what Mapbox is known for! This area is a blindspot and vastly underestimated by folks in the autonomy world. These are really hard things to do, and even more so to do reliably, efficiently and at scale.

Are we living in a simulation?

I’m torn on this one… there’s a part of me that says yes, we are. Working on simulators, you get to glimpse the future of truly immersive VR, and I don’t think it’s too hard to imagine it becoming difficult to distinguish simulation from reality in some weird Matrix-like dystopia. I also have this habit of reading philosophy books (Gödel, Escher, Bach: an Eternal Golden Braid is a personal favorite) that make you think long and hard about systems and free will and the mathematical underpinnings of the universe. Breakthroughs in quantum physics and their applications in quantum computing also show us the world is a bizarre multi-dimensional space with weird properties that we are just beginning to understand. Whether or not we live in a simulation is a fun question to ponder, but it doesn’t impact how I live my day-to-day life.



Uber Advanced Technologies Group engineer Natalie Afonina joins as HD Maps PM was originally published in Points of interest on Medium, where people are continuing the conversation by highlighting and responding to this story.

