Bringing faster machine learning to billions of Arm-enabled edge devices
By: Eric Gundersen
To power live location, we’re partnering with Arm and delivering the Vision SDK to Arm’s galaxy of components and hardware — including CPUs, GPUs, and Machine Learning and Object Detection processors.
The Vision SDK puts developers in control of the driving experience. Our partnership with Arm will extend the Vision SDK’s reach to their hundreds of millions of devices. As new data is detected, the SDK classifies road boundaries, lane markings, curbs, crosswalks, traffic signs, and more. All of this data is then used to update the map live, ensuring the most up-to-date information.
Arm is one of two partners announced today at Locate and is a great complement to Microsoft Azure IoT Edge and Azure Cognitive Services in the cloud.
“Millions of developers use Mapbox products that will continue to impact many markets. We look forward to working with them to bring the power of machine learning to the edge.” — Jem Davies, Vice President, Fellow and General Manager, Machine Learning, Arm
With the Vision SDK and the Arm-enabled devices already in the hands of billions of people, we’re able to segment and extract features from the environment more than ten times a second. We can now take inputs from sensors that ship on every smartphone and process them entirely on the device.
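The Vision SDK’s public API isn’t shown in this post, but the on-device pipeline described above — segmenting each camera frame more than ten times a second — can be sketched in miniature. Everything below is hypothetical: the class list, the `segment_frame` helper, and the stand-in “model” are illustrations, not the SDK’s actual interface.

```python
import time
import numpy as np

# Hypothetical label set mirroring the road features the post mentions.
CLASSES = ["road_boundary", "lane_marking", "curb", "crosswalk", "traffic_sign"]

def segment_frame(frame, model):
    """Run semantic segmentation on a single HxWx3 camera frame.

    `model` stands in for a compiled neural network (e.g. one targeting
    an Arm NPU); here it is any callable returning per-pixel class indices.
    """
    return model(frame)

def toy_model(frame):
    # Placeholder "network": labels every pixel as class 0 (road_boundary).
    return np.zeros(frame.shape[:2], dtype=np.int64)

def process_stream(frames, model, budget_s=0.1):
    """Segment each frame and flag whether it met the ~10 Hz budget.

    Returns a list of (mask, within_budget) pairs. A real pipeline would
    drop or skip frames instead of merely flagging slow ones.
    """
    results = []
    for frame in frames:
        t0 = time.perf_counter()
        mask = segment_frame(frame, model)
        elapsed = time.perf_counter() - t0
        results.append((mask, elapsed <= budget_s))
    return results
```

Running ten frames per second leaves roughly a 100 ms budget per frame, which is why the post emphasizes efficient inference hardware at the edge rather than a round trip to the cloud.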
Last year, over 20 billion Arm-based silicon chips were shipped, powering everything from sensors and smartwatches to smartphones and cars. They are the near-ubiquitous processing platform on the edge of the internet, responsible for a 100x increase in device processing performance over the last 10 years.
Arm offers not only access to billions of chips on the market; its Machine Learning platform, Project Trillium, also enables advanced, ultra-efficient inference at the edge with the highest performance across the widest range of devices today. Specifically designed for ML and neural network capabilities, the architecture is versatile enough to scale to any device, from IoT to connected cars and servers. The recently announced ML processor delivers more than 4.6 TOPS (trillion operations per second) for ML workloads.
Their Object Detection processor can detect objects as small as 50x60 pixels and process Full HD video at 60 frames per second in real time. It can also detect an almost unlimited number of objects per frame, so if you’re processing road data, for example, even the busiest intersection is no problem.
Our Vision SDK is currently available to select partners and will enter public beta in September. Visit our page to learn more and sign up for updates.
Arm and the Mapbox Vision SDK was originally published in Points of interest on Medium.