
Chris Anderson starts DIY Robocars: A space for self-driving developers


By: Eric Gundersen

Last week I ran into Chris Anderson, CEO of 3D Robotics, and spent hours talking about his latest project: self-driving DIY Robocars. TL;DR: Chris is creating a new surface area for developers to start working on self-driving car tech, once again pairing open source projects and community events at the foundation of a new industry. I can’t stop thinking about how profound this movement is, so I asked Chris to share why he’s so excited about self-driving cars, robotics, working in China, and where he sees the future going.

Ok, WTF is a “Donkeycar”?

Donkey is one of the two standard autonomous car platforms that we use in the DIY Robocars races. Both are designed to cost less than $200 and can be assembled in a day. They use off-the-shelf components, such as RC car chassis and RaspberryPi processors, and are designed to be as simple as possible because “it’s not about the car!”. Instead, it’s all about the software. The two platforms represent the two main technical paths to autonomous cars: machine learning and computer vision.

Donkey is the machine-learning one: it uses convolutional neural networks (CNNs) to learn how to drive by observing how a human driver completes a course and then “behaviorally cloning” that driving by correlating what the camera sees with the steering inputs of the human. It does this using Google’s TensorFlow CNN software (with a Keras front end), with the training done in the cloud (AWS) to generate a neural network model, which is then run in real time on the car on a RaspberryPi 3.
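To make the behavioral-cloning idea concrete, here is a minimal Keras sketch of that kind of network: camera frames in, steering and throttle out. The layer sizes are illustrative, not Donkey’s actual architecture.

```python
# A minimal behavioral-cloning network: camera frames in,
# steering and throttle out. Layer sizes are illustrative only.
from tensorflow.keras import layers, models

def build_model(height=120, width=160):
    model = models.Sequential([
        layers.Conv2D(24, (5, 5), strides=2, activation="relu",
                      input_shape=(height, width, 3)),
        layers.Conv2D(32, (5, 5), strides=2, activation="relu"),
        layers.Conv2D(64, (3, 3), strides=2, activation="relu"),
        layers.Flatten(),
        layers.Dense(100, activation="relu"),
        layers.Dropout(0.1),
        layers.Dense(50, activation="relu"),
        layers.Dense(2, activation="linear"),  # [steering, throttle]
    ])
    # Mean squared error against the human driver's recorded inputs.
    model.compile(optimizer="adam", loss="mse")
    return model
```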

Our other standard platform uses the same chassis, but instead of a RaspberryPi and machine learning, it uses an OpenMV computer vision board and is focused entirely on the “seeing” part. This is based on the OpenCV software, which looks for lines on the road, certain colors, and other shapes to determine where the track is.
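As a sketch of what “looking for lines” means in practice, here is the classic OpenCV recipe of edge detection plus a Hough transform; the thresholds are placeholders, not values from the actual platform.

```python
# A rough sketch of the computer-vision approach: detect candidate
# track lines in a camera frame. Threshold values are placeholders.
import cv2
import numpy as np

def find_track_lines(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)
    # Probabilistic Hough transform: returns line segments, or None.
    return cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                           threshold=20, minLineLength=20, maxLineGap=10)
```

A controller can then steer toward the average heading of the returned segments.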

Right now the two approaches are neck and neck, and both are likely to beat the fastest human within a few months.

Front facing cameras + sonar + LIDAR + radar + GPS meets TensorFlow and ROS — what does the toolchain look like for DIY Robocars?

We try to keep it simple, so the platforms are just a camera and a processor on board (either a RaspberryPi camera and a RaspberryPi 3, or an OpenMV camera/processor combo board).

For the RPi (ML) version, the software toolchain is as follows:

  • On the car: standard RaspberryPi Linux, with OpenCV, TensorFlow, and some Donkeycar Python libraries
  • On your laptop: much the same: TensorFlow, Keras, and the Donkeycar app
  • In the cloud: TensorFlow and Keras

The flow is as follows (a condensed code sketch appears after the list):

  1. Drive the car manually, controlling it with a PlayStation controller or mobile web app while the software records the images and your inputs.
  2. Transfer those paired image/control datasets to the AWS cloud.
  3. Train TensorFlow on that dataset, which generates a CNN model.
  4. Download that model to the RaspberryPi.
  5. Have the car run autonomously, with the CNN taking camera input in and generating steering and throttle commands out.
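Assuming the recorded laps end up as paired arrays of images and control inputs (the file and variable names below are hypothetical), steps 3 through 5 condense to something like this:

```python
# A condensed, hypothetical sketch of steps 3-5. File names are made up.
import numpy as np
from tensorflow.keras import models

# In the cloud (step 3): fit the CNN on the recorded laps.
data = np.load("laps.npz")                # paired images and human inputs
model = build_model()                      # the CNN sketched earlier
model.fit(data["images"], data["controls"], epochs=20, batch_size=64)
model.save("pilot.h5")                     # download this to the Pi (step 4)

# On the RaspberryPi (step 5): drive from camera input.
pilot = models.load_model("pilot.h5")

def autopilot(camera_frame):
    # One frame in, [steering, throttle] out.
    steering, throttle = pilot.predict(camera_frame[np.newaxis])[0]
    return steering, throttle
```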

For the OpenMV (CV) version, the software is much more straightforward: just a Python script running on the OpenMV board.
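For flavor, here is a sketch of what that kind of script looks like in OpenMV’s MicroPython; the grayscale threshold and the steering conversion are placeholders.

```python
# A sketch of a single-script OpenMV line follower (MicroPython).
import sensor

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)
sensor.skip_frames(time=2000)             # let the camera settle

while True:
    img = sensor.snapshot()
    # Fit a line of best fit through bright (track-colored) pixels.
    line = img.get_regression([(200, 255)], robust=True)
    if line:
        steer = line.theta()              # line angle, 0 to 179 degrees
        # ...map the angle to a steering servo command here...
```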

If someone wants to build their own Donkeycar, where should they start reading and buying parts from?

For the machine learning version, you can start here. If you want the CV version, head here (or go with the super-simple Minimal Viable Racer, which costs around $80, here).

What is the tech overlap between your work on self-driving cars and your day job building drone software?

The answer is “not as much as I expected”. Superficially, flying robots and rolling robots seem similar: both use sensors, processors, and code to sense the world and navigate within it. But drones, which typically operate in a wide-open 3D space (the air) outdoors, are primarily based on classic robotics: inertial sensing to gauge orientation, standard control theory to maintain a given attitude, and GPS for navigation.
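“Standard control theory” here mostly means feedback loops like PID. A minimal sketch, with illustrative gains, of the kind of loop that holds a drone’s attitude:

```python
# A minimal PID loop: hold a target attitude given an inertial estimate.
# Gains and the 100 Hz update rate are illustrative, not from any autopilot.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target, measured, dt):
        error = target - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pitch_pid = PID(kp=1.2, ki=0.05, kd=0.3)
# Each cycle: pitch_cmd = pitch_pid.update(0.0, imu_pitch_estimate, 0.01)
```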

Cars, on the other hand, operate in a crowded 2D environment (roads) and cannot count on GPS alone to know how to navigate. So they need to use cameras, LIDAR, radar, and other ways to probe the environment around them, much as our own eyes do. This requires a totally different branch of robotics, one that is just maturing now: computer vision and AI/ML.

A decade ago, when the components inside an iPhone (MEMS sensors, ARM processors, GPS, wireless) made classic robotics cheap and easy, we were able to credibly put the letters “DIY” in front of “drones” and build a community that reinvented that industry from the bottom up. Rather than “planes without pilots,” they were “smartphones with propellers.” Today, there are millions of advanced consumer and commercial drones in the air that came from that Silicon Valley/Shenzhen re-imagining of the future of aerospace.

Today, a new generation of enabling technologies (RaspberryPi 3, TensorFlow, AWS) allows us to do the same thing: put the letters “DIY” in front of self-driving cars. Thus DIY Robocars. Sadly, for the reasons above, we can’t reuse much of our drone software (which is now thriving in Dronecode, the open source industry consortium that I started and which now runs as part of the Linux Foundation). So we’re starting over with these more modern AI/CV-based approaches.

Although the technical foundations are different, we hope the effect will be the same. Adding “DIY” to autonomous cars brings with it some key differences from the rest of the industry:

  • Cheap ($200)
  • Easy (no special skills)
  • Safe (at 1/10th scale, nobody gets hurt and there are no people on board)
  • Legal (inside, not on street)
  • Fun (racing!)

But what’s the point of DIY-ing autonomous cars if some of the smartest, biggest companies in the world are already working on this? The answer is that we try things they can’t. Because we’re not carrying people, we can “move fast and break things” without much risk, and ideally innovate faster.

For a century, the car industry has innovated through racing; most of today’s car tech got its start on the tracks of Formula 1 or Monte Carlo. But with autonomous cars, it’s been mostly the opposite: driving slowly and cautiously. That’s why semi-autonomous cars on the streets today drive like little old ladies. We hope that our sub-scale, no-passenger approach to autonomy may reveal a different path to performance and safety, one more about nimbleness and aggressive avoidance of danger.

In short, just as the Homebrew Computer Club gave rise to Apple, which started out making the worst computer you could buy and ended up making the best, perhaps a “homebrew” approach to self-driving cars could inspire the same.

Do you think the automotive industry will start embracing open source tech more as they move into semi-autonomous driving?

It already is, and dozens more examples are emerging.

Race your own Donkeycar at Locate. We just opened signups last week, and you can claim a spot for you and your team now. This week, we’ll launch our Donkeycar Maps SDK, giving each team 6 weeks to add HD Vector Maps, encoded with the track geometry, plus libraries to decode the coordinates for the race track. More technical details + judging criteria are on the blog.
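The SDK itself isn’t shown here, but for a feel of what decoding track geometry from a vector map can look like, here is a hypothetical sketch using the open source mapbox_vector_tile Python package; the file name and the “track” layer name are assumptions.

```python
# Hypothetical sketch: decode track geometry from a Mapbox vector tile.
# The tile file and the "track" layer name are made up for illustration.
import mapbox_vector_tile

with open("track.mvt", "rb") as f:
    tile = mapbox_vector_tile.decode(f.read())

# Each layer holds features whose geometries carry the coordinates.
for feature in tile["track"]["features"]:
    print(feature["geometry"]["type"], feature["geometry"]["coordinates"])
```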

Eric Gundersen


Chris Anderson starts DIY Robocars: A space for self-driving developers was originally published in Points of interest on Medium.

