Channel: maps for developers - Medium

Mapbox.js v1.6.3: better geocoding integration, improved attribution, and fixes


Mapbox.js v1.6.3 is now available, with better support for geocoding, an updated attribution UI, and code changes that will make coders happy.

Check out the new API documentation and update your sites:

<script src='https://api.tiles.mapbox.com/mapbox.js/v1.6.3/mapbox.js'></script>
<link href='https://api.tiles.mapbox.com/mapbox.js/v1.6.3/mapbox.css' rel='stylesheet' />

Geocoding

Our geocoding web service has been improving fast, so we’re tuning the UI to match.

Mapbox.js’s low-level L.mapbox.geocoder interface now supports multiple placenames as an array and automatically uses the batch geocoding interface to process them in a single request.

var g = L.mapbox.geocoder('http://api.tiles.mapbox.com/v3/username.mapid/geocode/{query}.json');
g.query(['austin', 'houston'], function(err, res) {
    console.log(res);
});

Querying ['austin', 'houston'] returns an array of results to use in your application.

[{"query":["austin"],"attribution":{"mapbox-places":"<a href='https:\/\/www.mapbox.com\/about\/maps\/' target='_blank'>&copy; Mapbox &copy; OpenStreetMap<\/a> <a class='mapbox-improve-map' href='https:\/\/www.mapbox.com\/map-feedback\/' target='_blank'>Improve this map<\/a>"},"results":[[{"id":"mapbox-places.78701","bounds":[-98.026183951405,30.067858231996,-97.541547050194,30.489398740398],"lon":-97.804206,"lat":30.278855,"name":"Austin","type":"city"},{"id":"province.1000418602","name":"Texas","type":"province"},{"id":"country.4150104525","name":"United States","type":"country"}],[{"id":"mapbox-places.89310","bounds":[-117.80679434924,38.622466995102,-116.59003701732,39.998716879109],"lon":-117.227194,"lat":39.313976,"name":"Austin","type":"city"},{"id":"province.2975076950","name":"Nevada","type":"province"},{"id":"country.4150104525","name":"United States","type":"country"}],[{"id":"mapbox-places.16720","bounds":[-78.368774898268,41.39832726178,-77.826147016892,41.736582702219],"lon":-77.988041,"lat":41.567676,"name":"Austin","type":"city"},{"id":"province.2184819983","name":"Pennsylvania","type":"province"},{"id":"country.4150104525","name":"United States","type":"country"}],[{"id":"mapbox-places.55912","bounds":[-93.169104983099,43.526714505771,-92.768108016847,43.82098238182],"lon":-92.929212,"lat":43.674029,"name":"Austin","type":"city"},{"id":"province.4222030107","name":"Minnesota","type":"province"},{"id":"country.4150104525","name":"United States","type":"country"}],[{"id":"mapbox-places.72007","bounds":[-92.120394979369,34.907955633164,-91.838942016876,35.076324303227],"lon":-92.004189,"lat":35.036604,"name":"Austin","type":"city"},{"id":"province.3855330187","name":"Arkansas","type":"province"},{"id":"country.4150104525","name":"United States","type":"country"}]]},{"query":["houston"],"attribution":{"mapbox-places":"<a href='https:\/\/www.mapbox.com\/about\/maps\/' target='_blank'>&copy; Mapbox &copy; OpenStreetMap<\/a> <a class='mapbox-improve-map' 
href='https:\/\/www.mapbox.com\/map-feedback\/' target='_blank'>Improve this map<\/a>"},"results":[[{"id":"mapbox-places.77002","bounds":[-95.720458982945,29.528915261206,-95.061201018565,30.040369645345],"lon":-95.436742,"lat":29.784969,"name":"Houston","type":"city"},{"id":"province.1000418602","name":"Texas","type":"province"},{"id":"country.4150104525","name":"United States","type":"country"}],[{"id":"mapbox-places.38851","bounds":[-89.138252983117,33.79659653948,-88.791614114357,34.045110442751],"lon":-88.976638,"lat":33.920944,"name":"Houston","type":"city"},{"id":"province.788686416","name":"Mississippi","type":"province"},{"id":"country.4150104525","name":"United States","type":"country"}],[{"id":"mapbox-places.55943","bounds":[-91.73075598294,43.630618500142,-91.392928018125,43.933747469141],"lon":-91.553669,"lat":43.782375,"name":"Houston","type":"city"},{"id":"province.4222030107","name":"Minnesota","type":"province"},{"id":"country.4150104525","name":"United States","type":"country"}],[{"id":"mapbox-places.65483","bounds":[-92.088466979783,37.205003976875,-91.841178018183,37.429956007523],"lon":-91.92426,"lat":37.317564,"name":"Houston","type":"city"},{"id":"province.3294535744","name":"Missouri","type":"province"},{"id":"country.4150104525","name":"United States","type":"country"}],[{"id":"mapbox-places.35572","bounds":[-87.399124897841,34.06685573423,-87.204290047982,34.300837420778],"lon":-87.264654,"lat":34.183928,"name":"Houston","type":"city"},{"id":"province.2667756795","name":"Alabama","type":"province"},{"id":"country.4150104525","name":"United States","type":"country"}]]}]

The control interface to geocoding is improved too: alternative positions, like topright or bottomleft, now rearrange the UI to fit properly.

map.addControl(L.mapbox.geocoderControl('username.mapid', { position: 'topright' }));

Attribution

To feature our map sources better, Mapbox.js now displays attribution information expanded by default instead of collapsed behind a button icon. This is the intended attribution for any map at regular to large screen sizes, and it gives proper prominence to one of our most important map sources, OpenStreetMap. The default content of the attribution control continues to feature an Improve This Map link designed to help map users quickly update map data where needed.

The L.mapbox.infoControl API is still available as an option for maps with long attribution or smaller viewports:

var map = L.mapbox.map('map', 'examples.map-8ced9urs', {
    attributionControl: false,
    infoControl: true
});

API Improvements and Bugfixes

This release also includes numerous documentation improvements and bugfixes: chief amongst them a fix for vector layers in IE8.

We’re now exposing traditional constructors from the Mapbox.js interface. Leaflet’s pattern is that each object can be initialized with new or without:

// traditional constructor
var a = new L.LatLng(0, 0);
// 'magic' constructor
var b = L.latLng(0, 0);

Previously, Mapbox.js only exposed the latter form: for instance, L.mapbox.tileLayer never requires new. In 1.6.3, we’re including both: L.mapbox.TileLayer is exposed in the API as well. Why would you want this change? With traditional constructors, you can use Leaflet’s L.Class function to add new functionality into existing code.


To read the full list of changes, check out the changelog on GitHub.



Developing Open Source Software for Processing Mosaics


We have been extending our satellite mosaicking production infrastructure while working on the next version of Cloudless Atlas. The new version of Cloudless Atlas is using Landsat imagery, giving us a half-trillion new cloud-free pixels down to zoom level 12. We have focused on open sourcing key parts of our imagery processing pipeline so that we can work together with other players in the satellite space. A big part of this new processing work is Rasterio, which I wrote for pushing and pulling raster data — all in Python.

Preview of the New Cloudless Atlas, South Africa

Imagining the Cloudless pipeline

When building the Cloudless Atlas pipeline, we knew we would be using the GDAL library and its utilities extensively. GDAL is fast and dependable and open source. Open source was going to be important for the Cloudless Atlas effort. For the kind of extremely flexible pipeline we needed (in terms of rapid development, retargeting, and so forth), licensed software wasn’t even an option.

It was also clear that while we would be using GDAL workhorses like gdal_translate and gdalwarp in the pipeline, the uniqueness of our final product and the need to scale in production would require some new software development. And finally, we were certain that we would be using Python quite a bit, if only to build prototypes and experiment with algorithms, and would be using Numpy in particular. (Python and Numpy are part of the satellite team’s special sauce.)

To get satellite imagery into Numpy arrays for use with Python, we were going to need our own bridge between GDAL and Python. GDAL has useful Python bindings, but we took the opportunity to design a more fun, more productive, and more forward-looking alternative.

First steps for Rasterio

Rasterio began as a small module of functions to read Numpy arrays from GeoTIFF files and write arrays back to other files. It had a rasterio.open() function modeled after Python’s built-in open(), with familiar file modes like ‘r’ and ‘w’. We gave the dataset object returned by rasterio.open() a matching close() method to make resource handling more deterministic when working with enormous datasets. The read_band() and write_band() methods of dataset objects use Numpy arrays for instances of band data, replacing Band objects entirely and eliminating one of GDAL’s biggest Python gotchas: band objects that reference deleted dataset objects.
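The read/write pattern described above can be illustrated with a small, hypothetical stand-in class. This is not Rasterio's implementation; the method names mirror the rasterio 0.x API described here, and the class exists only to show how band data round-trips through plain Numpy arrays with deterministic teardown:

```python
import numpy as np

class Dataset:
    """Hypothetical stand-in for the dataset object returned by
    rasterio.open(): bands are plain Numpy arrays, and close() makes
    resource teardown deterministic."""

    def __init__(self, count, shape, dtype=np.uint16):
        self._bands = {i: np.zeros(shape, dtype) for i in range(1, count + 1)}
        self.closed = False

    def read_band(self, i):
        # Returns an array copy, so no band object can outlive the dataset.
        return self._bands[i].copy()

    def write_band(self, i, data):
        self._bands[i] = np.asarray(data, self._bands[i].dtype)

    def close(self):
        self._bands.clear()
        self.closed = True

src = Dataset(count=3, shape=(4, 4))
src.write_band(1, np.full((4, 4), 7))
band = src.read_band(1)
src.close()
# band is still usable after close() because it is an independent array
print(band.max())  # prints 7
```

Because bands are ordinary arrays rather than objects holding a reference to the dataset, closing the dataset cannot invalidate data you have already read.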

Rasterio did not wrap GDAL’s Python bindings, but used Cython, a Python-like language and generator of Python C extension code, with the GDAL C API. Cython gave Rasterio strong performance from the start and let us write less code. Other projects like Pandas use Cython for the same reason.

Early on, Rasterio moved to a public Mapbox repository and became one of Mapbox’s many open source projects. This let it catch on quickly with other raster data analysts who program in Python. Asger Petersen contributed the ability to read and write raster data subsets. Early prototypes of Cloudless Atlas pipeline components soon ran quickly and smoothly enough to convince the satellite team that Rasterio could replace the default GDAL Python bindings entirely.

Cloudless-driven features

As the new Cloudless Atlas pipeline was built out, it was not long before Rasterio needed to read and write raster band masks, color tables, and metadata, and to configure GDAL’s drivers within programs. A few exploratory side-projects resulted in fun features like feature extraction and array reprojection.

To make diagnosis of file problems easier, I wrote a program to let the satellite team lift the hood of any GDAL-supported file: rio_insp.

(rasters)MapBox-FC:rasterio sean$ rio_insp tiff-error.tif
Rasterio 0.8 Interactive Inspector (Python 2.7.5)
Type "src.meta", "src.read_band(1)", or "help(src)" for more information.
>>> src.meta
{'count': 7, 'crs': {u'a': 6378137, u'lon_0': 0, u'k': 1, u'y_0': 0, u'b': 6378137, u'proj': u'merc', u'x_0': 0, u'units': u'm', u'no_defs': True}, 'dtype': <type 'numpy.uint16'>, 'driver': u'GTiff', 'transform': [-7670726.02033, 30.0, 0.0, -2504574.35408, 0.0, -30.0], 'height': 2616, 'width': 2616, 'nodata': 0.0}
>>> src.read_band(5)
array([[42501, 47716, 51599, ...,     0,     0,     0],
       [41325, 46362, 53272, ...,     0,     0,     0],
       [40202, 49743, 53713, ...,     0,     0,     0],
       ...,
       [33249, 29334, 28306, ..., 43227, 42597, 41707],
       [28848, 30354, 29512, ..., 41077, 41248, 40642],
       [28109, 27293, 26299, ..., 44889, 41707, 38990]], dtype=uint16)
>>> band5 = _
>>> band5.min(), band5.max(), band5.mean()
(0, 65535, 26459.872859268766)
>>> show(band5, cmap='pink')

Every pixel in the final Cloudless Atlas product, and every one of the many more pixels that did not make the final cut, passed through Rasterio’s read_band() and write_band() methods. Rasterio has proven as fast and dependable as the GDAL it’s built on.

Investing more in Rasterio

Rasterio continues to grow from outside Mapbox as well as inside. Brendan Ward contributed the inverse of feature extraction: rasterization. His pull request validated for me the choice of Cython. And now not only did Rasterio have another developer writing a bunch of Cython code, but one who was doing the development and testing on Windows. Yes, Rasterio works on Windows!

Rasterio is designed to be worth Python programmers' time. Its adherence to Python patterns and idioms makes it easy to learn and use. It is well tested, cross-platform, ready for Python 3, and proven on a large scale image processing project. Please try it out, watch its commits, file bugs, and contribute code!

At the onset of the Cloudless Atlas effort, the satellite team had a few broad objectives. We wanted an excellent product, we wanted a killer processing pipeline, and we wanted to share both the tools and tradecraft that we developed along the way. Enjoy making great maps with Mapbox’s Cloudless Atlas, and join us in making Rasterio the best possible raster data library for Python.

Feel free to get in touch with me about Rasterio or anything related – email me! or just ping me on twitter @sgillies

Alternative Terrain Visualization


We have been working on creating striking visualizations of terrain, including most recently, Mapbox Outdoors. These maps utilize contour lines and hill shades to show topography, and are as beautiful as they are useful. Sometimes it is fun to head in a more abstract direction, and revel in a creative expression of Earth surface patterns.

Golden Gate Park and its Panhandle

Here I’ve used an alternative technique, depicting the hills and buildings of San Francisco in an exaggerated isometric view. This visualization utilizes data from DataSF and the USGS, and shows heights as a series of horizontal lines, an aesthetic inspired by the album cover of Joy Division’s Unknown Pleasures.

The Financial District

San Bruno Mountain


Mapbox Android SDK


Today Mapbox joins the other 80% of mobile devices. Mapbox Android SDK is ready for your app.

Our new SDK is an open source, flexible, and fast platform for maps on Android. We’re making it easy for Android developers to add custom maps and data to their apps. It’s more than just a library for Mapbox: you can use other tile providers, datasources, and build your own plugins on its solid base.

The source is open at GitHub, and if you want to kick the tires, we’ve updated the example app in the Play Store.

This is a massive undertaking. It started as a fork of osmdroid, another open source project for maps on Android. Since then, we’ve removed tens of thousands of lines of unused code, switched to make use of stable modular parts like OkHttp, and expanded tests and documentation. Amongst the many user-facing changes:

  • Flexible interfaces for tooltips, layers, and overlays
  • TileJSON support for tile layers
  • Markers API for custom markers
  • HTTPS security for tiles and data
  • Compositing for multiple tile layers
  • Dragging & panning gestures tuned for performance & user experience

You can include the SDK in your existing project via the Sonatype Central Repository, or include it as source — see the project’s README for the full details.

This wouldn’t be possible without many contributors: Brad Leege, Francisco Dans, Martin Guillon, and many others put in hard work testing and building this project.

Debanding the world


In 2003, the Scan Line Corrector (SLC) on Landsat 7 failed. The malfunction now causes stark black diagonals across all images. Despite the malfunction, NASA kept the satellite in operation; it’s still collecting imagery, but at 80% of its intended throughput.

This imagery is incredibly valuable. It is the primary source used by the Satellite team for the next release of our Cloudless Atlas. Part of our job is to remove artifacts stemming from this Landsat 7 hardware malfunction.

After the malfunction Landsat 7 only collects 80% of its intended throughput

Unfortunately, the malfunction propagates through our cloudless pipeline. Certain regions of the world have far fewer pixels due to the diagonals, resulting in banding artifacts that coincide with the Landsat 7 null regions.

Background

A Fourier transform decomposes a signal, such as an image, into its constituent periodic components. Since the banding artifacts are periodic, their energy accumulates over the entire image and appears in the transform as a coalesced point of large amplitude.

Left: Example of an image affected by bands. Right: Corresponding FFT power spectrum.

Left: Stripes. Right: Corresponding Fourier transform. Stripes are seen as dots on each side of the center.

Left: Inclined Stripes. Right: Corresponding Fourier transform. Dots rotate to match the angle in the Fourier space.
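These spectral peaks are easy to reproduce with numpy on a synthetic striped image (a toy example, not Landsat data): a sinusoidal stripe pattern concentrates its energy into a mirrored pair of points in the shifted power spectrum.

```python
import numpy as np

n = 128
y = np.arange(n).reshape(-1, 1)
# horizontal stripes with a period of 8 pixels
stripes = np.sin(2 * np.pi * y / 8.0) * np.ones((1, n))

spectrum = np.abs(np.fft.fftshift(np.fft.fft2(stripes)))
center = n // 2
spectrum[center, center] = 0  # ignore the DC component

# the strongest bins sit at +/- the stripe frequency (n/8 = 16 rows
# from the center), mirrored about it, on the vertical axis
peak = np.unravel_index(np.argmax(spectrum), spectrum.shape)
print(peak)
```

Rotating the stripes rotates the peak pair by the same angle, which is why the search region below can be predicted from the scene's inclination.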

The approach

Transform the image

Every declouded tile from across the world is transformed using a Fast Fourier Transform (FFT). The peaks that correspond to the bands can be subtle to the eye, and vary in inclination and strength.

Banding artifacts are marked by red circles

An algorithm to deband these images is becoming clear:

  • Transform the image using a Fast Fourier Transform
  • Locate the artifact peaks
  • Remove the artifacts
  • Transform the image back using an inverse transform
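The four steps above can be sketched with numpy alone. This synthetic example assumes the banding frequency is known exactly; in the real pipeline the peaks are located via metadata and the search described below:

```python
import numpy as np

# 1. a synthetic image: smooth gradient plus periodic banding
n = 128
y, x = np.mgrid[0:n, 0:n]
clean = (x + y).astype(float)
banded = clean + 20 * np.sin(2 * np.pi * y / 8.0)

# 2. forward FFT; the band peaks are known here by construction
F = np.fft.fftshift(np.fft.fft2(banded))
c = n // 2
peaks = [(c - 16, c), (c + 16, c)]  # +/- the stripe frequency

# 3. zero out the artifact peaks
for r, col in peaks:
    F[r, col] = 0

# 4. inverse transform back to the image domain
debanded = np.fft.ifft2(np.fft.ifftshift(F)).real

# the residual error should be far smaller than the original banding
print(np.abs(debanded - clean).max() < np.abs(banded - clean).max())
```

Zeroing the two bins removes nearly all of the stripe energy while leaving the rest of the image essentially untouched.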

Metadata is important

Before searching the image for artifacts, we use metadata from EarthExplorer to narrow the search region. The inclination of the artifacts is calculated using the location of the image on Earth. We then calculate the diagonal region on the power spectrum where the bands will be located. This step is crucial since it greatly reduces the number of pixels to search, and filters other false positives.

The black region designates the calculated search area

Search for artifacts

Though using a Fourier representation helps manage this problem, it also introduces complexity. The Fourier image is volatile. It has a large dynamic range, often causing sharp changes between neighboring pixels. Before searching for artifacts, the image needs to be smoothed. Using an image processing technique called dilation, the overall image noise is reduced and the artifacts are accentuated. The metadata-guided region then goes through an off-the-shelf peak finding algorithm from the scikit-image Python library.

Each marker is a potential artifact
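The smoothing and peak search might look like the following pure-numpy sketch. dilate3x3 and local_maxima here are simplified stand-ins for the dilation and peak-finding routines in scikit-image; the toy spectrum below is clean enough to search directly, whereas the real pipeline searches after smoothing:

```python
import numpy as np

def dilate3x3(img):
    """Grey dilation with a 3x3 square: each pixel becomes the max of
    its neighborhood (a pure-numpy stand-in for skimage dilation)."""
    padded = np.pad(img, 1, mode='edge')
    stacked = [padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
               for dy in range(3) for dx in range(3)]
    return np.max(stacked, axis=0)

def local_maxima(img, threshold):
    """Pixels that equal their 3x3 neighborhood max and exceed threshold."""
    return np.argwhere((img == dilate3x3(img)) & (img > threshold))

spectrum = np.zeros((64, 64))
spectrum[20, 32] = 5.0   # an artifact-like peak
spectrum[44, 32] = 5.0   # its symmetric counterpart
smoothed = dilate3x3(spectrum)  # dilation damps noise, widens peaks
peaks = local_maxima(spectrum, threshold=1.0)
print(peaks.tolist())  # prints [[20, 32], [44, 32]]
```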

There are many potential artifacts to investigate. From a visual inspection we know that not all of these are artifacts; the false positives need to be filtered out.

Which are the real artifacts?

From looking at many images, a few points are known about the banding artifacts:

  • they are nearly symmetric between the upper right and lower left quadrants
  • the maxima nearest the image center are often related to banding
  • they lie along a diagonal line

Using this information we can start filtering out maxima that are not related to banding. Each maximum in the first quadrant should have a counterpart in the third quadrant; the first step is to filter all maxima by symmetry.

Next up is to locate the two maxima nearest the image center. These are often (but not always) related to most of the banding. The line that passes through these maxima should be near the same inclination as the one derived from metadata. If the inclination of this line is not sufficiently close to the expected angle, then the next two nearest maxima are selected until there is a match. Once the two angles are sufficiently close, we use the line as another filter criterion. The distance to the line is checked for every maximum. Those far from the line are filtered out, while those near the line are kept.

Maxima filtered by distance to artifact line
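A sketch of the symmetry and line-distance filters described above. The tolerance and coordinates are hypothetical; the real pipeline derives the expected angle from scene metadata:

```python
import numpy as np

def filter_maxima(maxima, center, angle_deg, line_tol=3.0):
    """Keep candidate maxima that (a) have a counterpart reflected
    through the image center and (b) lie near the expected artifact
    line through the center at angle_deg."""
    pts = {tuple(p) for p in maxima}
    theta = np.radians(angle_deg)
    direction = np.array([np.sin(theta), np.cos(theta)])  # (row, col) step
    kept = []
    for r, c0 in pts:
        mirror = (2 * center[0] - r, 2 * center[1] - c0)
        if mirror not in pts:
            continue  # no symmetric counterpart: likely a false positive
        v = np.array([r - center[0], c0 - center[1]])
        # perpendicular distance to the line = magnitude of the 2D cross product
        dist = abs(v[0] * direction[1] - v[1] * direction[0])
        if dist <= line_tol:
            kept.append((r, c0))
    return sorted(kept)

center = (32, 32)
maxima = [(20, 20), (44, 44),   # symmetric pair on the 45-degree line
          (10, 50),             # symmetric counterpart missing
          (20, 40), (44, 24)]   # symmetric pair, but off the line
print(filter_maxima(maxima, center, angle_deg=45))  # prints [(20, 20), (44, 44)]
```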

One last step is made to ensure that the remaining maxima correspond to artifacts. The maxima and their reflected values (reflected across the x axis) are compared. We expect that false positives will have similar values, while true artifacts differ by a large amount. Only maxima with a large difference from their reflected values are kept.

Correction mask

At this point we have an array of points, each corresponding to an artifact peak.

[(1312, 1673), (1322, 1594), (1333, 1517), (1347, 1438), (1369, 1278), (1383, 1199), (1394, 1122), (1404, 1043)]

Peaks near the image center are stronger and wider than those far from the center. The radii used in the mask must shrink as we move away from the image center. The right size for these radii has been an important topic for the entire debanding process. Using radii that are too large can introduce other artifacts including

  • saturated pixels or streaks
  • image blur
  • additional banding

To minimize the introduction of these artifacts we examine a patch around the nearest peak.

The patch is smoothed using a Gaussian filter (remember the data is volatile!), and a profile through the image center is taken.

For each pixel, the percentile over the patch is computed.

Patch Percentiles Over The Profile

The x axis represents the pixel number along the profile, while the y axis is the percentile. The drop off on the right side shows the fall off once we leave the area around the artifact peak. The distance from the image center to this fall off gives us the radius for the two nearest peaks. In this case the fall off happens around x = 14 or 6 pixels from the patch center, so we use a starting radius of 6 pixels.

All subsequent radii are gradually reduced to create the mask below.


Final power spectrum mask to remove the bands.
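Mask construction might be sketched as follows, using a few of the peak coordinates listed above. The linear radius interpolation and the radii themselves are simplified illustrations, not the production code (the text derives the starting radius from patch percentiles):

```python
import numpy as np

def correction_mask(shape, peaks, start_radius=6, min_radius=2):
    """Boolean mask of circles around each artifact peak; radii shrink
    as peaks move away from the spectrum center."""
    rows, cols = np.ogrid[0:shape[0], 0:shape[1]]
    center = (shape[0] / 2.0, shape[1] / 2.0)
    dists = [np.hypot(p[0] - center[0], p[1] - center[1]) for p in peaks]
    dmin, dmax = min(dists), max(dists)
    mask = np.zeros(shape, dtype=bool)
    for (pr, pc), d in zip(peaks, dists):
        # nearest peak gets start_radius, farthest gets min_radius
        t = 0.0 if dmax == dmin else (d - dmin) / (dmax - dmin)
        radius = start_radius + t * (min_radius - start_radius)
        mask |= (rows - pr) ** 2 + (cols - pc) ** 2 <= radius ** 2
    return mask

peaks = [(1312, 1673), (1322, 1594), (1394, 1122), (1404, 1043)]
mask = correction_mask((2616, 2616), peaks)
print(mask.sum())  # number of masked spectrum bins
```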

Check our work!

Check our work like in the good ole math class days

Now that we have the correction mask, it’s applied to all bands (spectral bands) of the image. Each band is transformed to its Fourier representation, the correction mask is applied, then it’s transformed back to its original form using an inverse Fourier transform. We compare some simple statistics (mean, median, standard deviation) between the original pixels and the corrected pixels, and if there’s a large deviation, we opt to bypass debanding. In many cases the change in statistics is subtle, and only related to artifact removal. This is the desired result, and produces higher quality images.


Example image after being corrected for SLC-off bands.
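Putting the per-band correction and the statistics guard together, a minimal numpy sketch on a synthetic band (the tolerance is an illustrative made-up value, not the production threshold):

```python
import numpy as np

def deband(band, mask, max_mean_shift=0.02):
    """Apply a Fourier-domain correction mask to one spectral band,
    bypassing the correction if the statistics shift too much."""
    F = np.fft.fftshift(np.fft.fft2(band.astype(float)))
    F[mask] = 0
    corrected = np.fft.ifft2(np.fft.ifftshift(F)).real
    # guard: a large change in mean suggests the mask removed real signal
    shift = abs(corrected.mean() - band.mean()) / (abs(band.mean()) + 1e-9)
    return corrected if shift <= max_mean_shift else band

n = 64
y = np.arange(n).reshape(-1, 1)
# synthetic band: flat signal of 100 plus stripes with an 8-pixel period
band = 100 + 10 * np.sin(2 * np.pi * y / 8.0) * np.ones((1, n))
mask = np.zeros((n, n), dtype=bool)
mask[n // 2 - 8, n // 2] = mask[n // 2 + 8, n // 2] = True  # stripe bins
out = deband(band, mask)
print(out.std() < band.std())  # prints True: the stripes are gone
```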

Temperature and terrain cartography


Our new vector tile compositing technology opens possibilities for streaming data into maps. This week I added gridded temperature data from our friends at Weather Decision Technologies (WDT) into our live data pipeline. My goal was to integrate WDT's new real-time temperature data into a custom version of Mapbox Outdoors - the bigger picture idea is for us to open up live data streams from partners to anyone designing maps on Mapbox.com.

These images are a snapshot from Tuesday morning - capturing temperatures across the continental United States as the morning warmed up.


The sun was just coming up in the Northwest, and for the highest elevations, temperatures were still below freezing. Vector tile compositing allows map makers to draw labels above updating weather data.
A close up of the cool morning in Seattle.
Far to the south, temperatures were already surpassing one hundred degrees in Death Valley.
A wider view of the Southwest's alternating mountains and valleys - known as Basin and Range - shows hot valleys but cooler temperatures at higher elevations. Vector tile compositing not only makes correlations between temperature and elevation apparent, but also allows for the use of Photoshop-like compositing operations that highlight such correlations.
A cold front moved through the Houston metro area, pushing cooler air into the Gulf of Mexico. Blurring functions in CartoCSS make gradations in temperature data appear organic and natural.
The cold front had not yet arrived in New Orleans, where humid air meant warm temperatures over land and water.
In the Northeast, colder air masses over Lake Ontario and the North Atlantic surround the warm air over land - bringing cooler weather to Maine and Boston, but not farther south.
All of South Florida was already heating up - even over the water.

Sign up to know when weather data goes live.


Turning diagrams from GPS


GPS logs can help ensure that Smart Directions match what someone with local knowledge would do in the same circumstances. One-way streets and turn restrictions are often obscured when you map raw GPS points, but this animation shows a series of steps that we take to get a clearer view of what is happening on each street and at each corner:

  • Start with the raw GPS points. The ones shown here come from a collection released by Uber.
  • Tighten up the GPS noise by snapping together points from different vehicles traveling in the same direction.
  • Offset the tracks by 90 degrees from the direction of travel to highlight the directional split on each street.
  • Connect lines around the corners to make right turns visible.
  • Average together a series of successive points from each vehicle to pull left turns back between the carriageways and make them visible through the middle of the intersection.

The stages of cleaning up GPS logs in San Francisco
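The perpendicular-offset step can be sketched in a few lines. This is an illustrative planar version, not the production pipeline; coordinates are arbitrary units and direction is estimated from neighboring points:

```python
import math

def offset_track(points, distance):
    """Offset each GPS point 90 degrees to the right of the direction
    of travel, so the two directions of a street separate visually."""
    out = []
    for i, (x, y) in enumerate(points):
        # estimate direction of travel from the next point
        # (or the previous one for the last point in the track)
        if i + 1 < len(points):
            dx = points[i + 1][0] - x
            dy = points[i + 1][1] - y
        else:
            dx = x - points[i - 1][0]
            dy = y - points[i - 1][1]
        length = math.hypot(dx, dy) or 1.0
        # the right-hand perpendicular of (dx, dy) is (dy, -dx)
        out.append((x + distance * dy / length, y - distance * dx / length))
    return out

# an eastbound and a westbound track on the same two-way street
east = offset_track([(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)], 0.1)
west = offset_track([(2.0, 1.0), (1.0, 1.0), (0.0, 1.0)], 0.1)
print(east[1], west[1])  # the tracks shift to opposite sides
```

Because the offset is always to the right of travel, opposing flows on the same street move apart instead of overlapping, which is what makes the directional split visible.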

Here’s another image showing the many possible looping turns between the one-way streets in a larger area of downtown San Francisco.

Looping turns on downtown San Francisco’s one-way streets

Camilla Mahon joins Mapbox


Image Analyst Camilla Mahon joins Mapbox in San Francisco! Camilla is joining as an intern this summer on our growing satellite team, ramping up our work to support environmental remote sensing analysis and documenting fully open-sourced raster pipelines.

Camilla is a student at Clark University, where she studies GIS & remote sensing for environmental applications. She has applied her skills to neighborhood development in New Orleans, land use management in New England, and more recently to issues of global climate change. She is interested in earth systems science, land stewardship, and community development in a rapidly changing world. She also plays drums and bass: “Drum sticks are cheap, and then you can hit everything you want.”

Shrinking Lake Titicaca; via satellite


Lake Titicaca, the largest body of fresh water in South America, is rapidly losing water. As you can see from the imagery below, the lake is experiencing a significant decrease in water levels, due in part to human-induced climate change and increased consumption. Look at the imagery on the left taken a few days ago by NASA’s new Landsat 8 overlaid on Landsat 5 from May 1986. Swiping between these images, we see a distinctly different shoreline, with much more land exposed in 2014.

This reduction in water is critical because Lake Titicaca serves an estimated 2.6 million people a day with water for drinking and agricultural irrigation in both Peru and Bolivia.

The drying of the lake creates livelihood stability issues in the region, forcing local residents to walk farther for drinking water and necessitating a re-engineering of irrigation systems. If lake recession continues, the communities served by this massive body of water will need to invest in reservoirs and other rain catchment systems to fill the service gap left by the drying natural resource.

To see in more detail, try the full screen version!

Images were taken from the USGS repository for Landsat data, processed using all open source tools, styled in TileMill, hosted on Mapbox and embedded using Mapbox.js.

Introducing Mapbox GL


We just released Mapbox GL — a new framework for live, responsive maps in every iOS app. Now developers can have the most detailed maps sourced from ever-updating OpenStreetMap data, as well as the ability to fully control the style and brand to design maps that perfectly match their app. This is all done using our new on-device vector renderer, which uses OpenGL ES 2.0 technology for pixel-perfect map design, from antialiased fonts to polygon blurring, all hardware-accelerated and optimized for mobile devices — and all on the fly.



Maps render at a super high framerate — it's this speed that opens up a whole new class of apps with highly detailed maps featuring terrain and hillshades, seamlessly fading between different designs such as day and night views. The data is super lightweight for offline caching and totally dynamic — interacting with phone sensors like pedometers, heartrate monitors, iBeacon proximity, Apple's new HealthKit, and others — and changing the design on the fly. Our new toolkit is fully open source - all the code is on GitHub and we are actively writing documentation for developers to easily add maps to every app.

Better maps for iOS

We're not only making better maps for iOS, our platform is designed to let developers and designers customize everything - from the data to the look and the feel.

Fully vector

Mapbox GL renders the map on the fly and on the user's device, so you can rotate the map and zoom in and out fluidly, all with the text staying upright and the labels flowing smoothly between zoom scales.

Design control

Mapbox GL gives you the power to customize every aspect of your map, from tweaking the colors, to hiding or showing specific layers, to deciding what information you want to present on your map by rearranging our worldwide starter data or bringing your own. And the style can be manipulated live at runtime.

Seamless scrolling necessitates new ways of styling maps. While previously you could just define discrete values for all zoom levels in CartoCSS and TileMill, now you can define the road width or building opacity as a function that changes smoothly as you zoom in or out.
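Such a zoom-dependent style function can be sketched as a simple interpolation between stops. The stops and property values below are made-up illustrative numbers, and this Python sketch is not Mapbox GL's actual stylesheet syntax:

```python
def zoom_interpolate(stops, zoom):
    """Smoothly interpolate a style property (e.g. road width in pixels)
    between discrete zoom stops instead of jumping between zoom levels."""
    stops = sorted(stops)
    if zoom <= stops[0][0]:
        return stops[0][1]
    if zoom >= stops[-1][0]:
        return stops[-1][1]
    # find the surrounding pair of stops and interpolate linearly
    for (z0, v0), (z1, v1) in zip(stops, stops[1:]):
        if z0 <= zoom <= z1:
            t = (zoom - z0) / (z1 - z0)
            return v0 + t * (v1 - v0)

road_width = [(10, 1.0), (14, 4.0), (18, 16.0)]  # hypothetical stops
print(zoom_interpolate(road_width, 12))  # halfway between stops: 2.5
```

Evaluating the function every frame is what lets road widths and building opacity change continuously as the user zooms, rather than snapping at integer zoom levels.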

Since everything is rendered on the fly, you can instantly change the stylesheet — plus animated transitions between two different stylesheets are just as easy as adding and removing classes.

Design control extends to data layers, allowing smooth animation effects.

Open format

Mapbox GL is based on the same vector tile format that powers Mapbox Streets. This means that you can use our global basemap, fully or in part, as well as create your own vector tiles to interleave data from different sources. Much like in TileMill, our open source map design studio, you can create a custom map by composing the various roads, parks, water areas, buildings, and more from lines, points, and polygons, then style them flexibly.

Technology

We built Mapbox GL in C++11 using OpenGL ES 2.0, a subset of OpenGL that is available on mobile devices and that can also run on desktop hardware with very minor changes. We use protocol buffers via pbf.hpp to implement a lazy vector tile parser, plus we've implemented custom code for text display and layout.

Mapbox GL is open source under a permissive BSD license, so you can check out all of the code right now. It currently runs on iOS, OS X, and Linux.

First-class iOS citizen

For our first preview release, we're focusing on the iOS platform. On top of Mapbox GL we've built Mapbox GL Cocoa, a layer of Objective-C Cocoa bindings for native iPhone and iPad development. Changing the color of buildings is as easy as:

NSDictionary *buildingStyle = @{ @"stroke" : @{ @"type"  : MGLStyleValueTypeColor,
                                                @"value" : [UIColor purpleColor] } };

[self setStyleDescription:buildingStyle forLayer:@"buildings" inClass:@"default"];

Developing in the open

All of the development of Mapbox GL, from the current code to architecture discussions by our team and our ecosystem of contributors, will happen in the open on GitHub.

Here are a few ways we're looking to improve in the immediate future — direct links to GitHub where the development will happen:

A starting point

We've been hard at work on Mapbox GL and the technologies involved throughout our systems, but this is only the beginning. And we would love to have you help us decide where the renderer should go and what sorts of new applications should be made possible by it. Hit us up on GitHub or on Twitter @Mapbox and help us make the future of immersive, interactive maps.

World Cup Stadiums from Space

We are updating our satellite imagery with the 12 newly built soccer stadiums across Brazil just in time for the start of the FIFA World Cup next week. Below is a side-by-side comparison showing the recent imagery captured by DigitalGlobe's GeoEye-1 and WorldView-2 satellites. We are currently selecting and processing the most recent, cloudless, and highest-resolution imagery available. This is a preview of the update.

All images are open for tracing in OpenStreetMap, so go ahead and use the custom layer URL on OpenStreetMap’s editor to start tracing the new maps.

1. Estádio do Maracanã, Rio de Janeiro

Date: March 12, 2014

Trace this layer on OSM: http://{switch:a,b,c}.tiles.mapbox.com/v3/brunosan.Rio/{z}/{x}/{y}.png

2. Arena Fonte Nova, Salvador (BA)

Date: April 22, 2014

Trace this layer on OSM: http://{switch:a,b,c}.tiles.mapbox.com/v3/brunosan.salvador/{z}/{x}/{y}.png

3. Arena Pernambuco, Recife (PE)

Date: February 16, 2014

Trace this layer on OSM: http://{switch:a,b,c}.tiles.mapbox.com/v3/brunosan.recife/{z}/{x}/{y}.png

4. Estádio Beira-Rio, Porto Alegre (RS)

Date: May 29, 2014

Trace this layer on OSM: http://{switch:a,b,c}.tiles.mapbox.com/v3/brunosan.porto/{z}/{x}/{y}.png

5. Arena das Dunas, Natal (RN)

Date: April 19, 2014

Trace this layer on OSM: http://{switch:a,b,c}.tiles.mapbox.com/v3/brunosan.natal/{z}/{x}/{y}.png

6. Estádio Nacional, Brasilia (DF)

Date: Jan 5, 2014

Trace this layer on OSM: http://{switch:a,b,c}.tiles.mapbox.com/v3/brunosan.brasilia/{z}/{x}/{y}.png

7. Estádio Mineirão, Belo Horizonte (MG)

Date: April 27, 2013

Trace this layer on OSM: http://{switch:a,b,c}.tiles.mapbox.com/v3/brunosan.belo/{z}/{x}/{y}.png

8. Arena Pantanal, Cuiabá (MT)

Date: April 4, 2014

Trace this layer on OSM: http://{switch:a,b,c}.tiles.mapbox.com/v3/brunosan.cuiaba/{z}/{x}/{y}.png

9. Arena de São Paulo, São Paulo

Date: May 5, 2014

Trace this layer on OSM: http://{switch:a,b,c}.tiles.mapbox.com/v3/brunosan.saopablo/{z}/{x}/{y}.png

10. Estádio Castelão, Fortaleza (CE)

Date: March 27, 2014

Trace this layer on OSM: http://{switch:a,b,c}.tiles.mapbox.com/v3/brunosan.fortaleza/{z}/{x}/{y}.png

11. Arena da Amazônia, Manaus, (AM)

Date: April 14, 2014

Trace this layer on OSM: http://{switch:a,b,c}.tiles.mapbox.com/v3/brunosan.manaus/{z}/{x}/{y}.png

12. Arena da Baixada, Curitiba, (PR)

Date: May 2, 2014

Trace this layer on OSM: http://{switch:a,b,c}.tiles.mapbox.com/v3/brunosan.curitiba/{z}/{x}/{y}.png

Amy Lee Walton Joins Mapbox

Designer Amy Lee Walton joins the team in DC! She brings an experimental art+maker background alongside seasoned web chops ranging from designing interactive experiences, to metrics-based advertising campaigns, to UX-focused landing pages and microsites.

She recently graduated from the Maryland Institute College of Art (MICA) with a Masters in Fine Art in graphic design. There Amy Lee mixed graphic design fundamentals with physical computing, creative generative coding, and digital fabrication. She’s interested in designing responsive, dynamic, and social methods of making and sharing.

While interning at architecture firm Gensler, Amy Lee designed a community mapping system in TileMill with web, mobile, and print options. At Mapbox she’ll be working with TileMill 2 to design custom maps and push the limits of map design.

Simple Editing for Turn Restrictions in OpenStreetMap

The new version of iD, the web editor for OpenStreetMap, makes it even simpler to add turn restrictions to OpenStreetMap.

Click an intersection where you want to add a turn restriction, select the portion of the road entering the turn, and click the icons to toggle between unrestricted and restricted turns.

Turn restrictions are traffic rules imposed by signage such as no-right-turn signs. Good turn restriction data is vital for solid directions services.

We’ve used this feature to add nearly 100 turn restrictions in San Francisco to OpenStreetMap, making our Smart Directions more accurate and improving the map for everyone.

Picture: Amanda Halprin


Drawing Antialiased Lines with OpenGL

Maps are mostly made up of lines, with the occasional polygon thrown in. Unfortunately, drawing lines is a weak point of OpenGL. The GL_LINES drawing mode is limited: it does not support line joins, line caps, non-integer line widths, widths greater than 10px, or varying widths in a single pass. Given these limitations, it's unsuitable for the line work necessary for high-quality maps. Here's an example of GL_LINES:

native-miters

Additionally, OpenGL’s antialiasing (multisample antialiasing) is not reliably present on all devices, and generally is of poor quality anyway.

As an alternative to native lines, we can tessellate the line to polygons and draw it as a shape. A few months ago, I investigated various approaches to line rendering and experimented with one that draws six triangles per line:

six-triangles

Two pairs of triangles form a quadrilateral gradient on each side, and a quadrilateral in the middle makes up the actual line. The gradients provide antialiasing, so that the line fades out at the edges. When scaled down, this produces high-quality lines:

six-triangles-antialiasing

Unfortunately, generating six triangles per line segment means generating eight vertices per line segment, which requires a lot of memory. I worked on an experiment that uses only two vertices per line segment, but this way of drawing lines requires three draw calls per line. To maintain a good framerate we need to minimize the number of draw calls per frame.

Attribute interpolation to the rescue

OpenGL’s drawing works in two stages. First, a list of vertices is passed to the vertex shader. The vertex shader is basically a small function that transforms every vertex (in the model coordinate system) to a new position (the screen coordinate system), so that you can reuse the same array of vertices for every frame, but still do things like rotate, translate, or scale the objects.

Three consecutive vertices form a triangle. All pixels in that area are then processed by the fragment shader, also called the pixel shader. While the vertex shader is run once for every vertex in the source array, the fragment shader is run once for every pixel in a triangle to decide what color to assign to that pixel. In the simplest case, it might assign a constant color, like this:

void main() {
    gl_FragColor = vec4(0, 0, 0, 1);
}

The color order is RGBA, so this example renders all fragments as opaque black. If we rendered lines by creating polygons from those lines, and assign a constant color to all pixels in that polygon, we’d still have horribly aliased lines. We need a way to decrease the alpha value from 1 to 0 as the pixels approach the polygon’s border. When transforming vertices in the vertex shader, OpenGL allows us to assign attributes to every vertex, for example:

attributes

These attributes are then passed on to the pixel shader. The interesting part is this: since a pixel can’t be directly associated with a single vertex, the attributes are interpolated between three discrete values according to the pixel’s distance to the three vertices that make up the triangle:

attributes-interpolated

This interpolation produces gradients between the vertices. This is the basis for the line drawing method I’m going to describe.
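The interpolation the GPU performs is barycentric: each pixel's attribute value is a weighted mix of the three vertex values, weighted by the pixel's position inside the triangle. A minimal sketch (names are illustrative):

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { float x, y; };

// Cross product (z component) of two 2D vectors.
static float cross2(Vec2 u, Vec2 v) { return u.x * v.y - u.y * v.x; }

// Interpolate a per-vertex attribute at point p inside triangle (a, b, c)
// using barycentric weights, the same scheme the GPU applies to vertex
// attributes before handing them to the fragment shader.
float interpolateAttribute(Vec2 a, Vec2 b, Vec2 c,
                           float va, float vb, float vc, Vec2 p) {
    float area = cross2({b.x - a.x, b.y - a.y}, {c.x - a.x, c.y - a.y});
    // Each weight is the ratio of the sub-triangle opposite a vertex
    // to the whole triangle's area.
    float wa = cross2({b.x - p.x, b.y - p.y}, {c.x - p.x, c.y - p.y}) / area;
    float wb = cross2({c.x - p.x, c.y - p.y}, {a.x - p.x, a.y - p.y}) / area;
    float wc = 1.0f - wa - wb;
    return wa * va + wb * vb + wc * vc;
}
```

A pixel halfway between two vertices receives exactly the average of their attribute values, which is what produces the gradients pictured above.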

Requirements

When drawing lines, we have a couple of requirements:

  • Variable line width: We want to change the line width in every frame we draw so that when the user zooms in/out, we don’t have to tessellate the line to triangles over and over again. This means that the final vertex position must be calculated in the vertex shader at render time and not when we set up the scene.
  • End caps (butt, round, square): This describes how the ends of lines are drawn.
  • Line joins (miter, round, bevel): This describes how joints between two lines are drawn.
  • Multiple lines: For performance reasons, we want lines with varying widths and colors in one draw call.

Line Tessellation

Since we want to change the line width dynamically, we cannot perform the complete tessellation at setup time. Instead, we repeat the same vertex twice, so that for a line segment, we end up with four vertices (marked 1-4) in our array:

extrusion-source

In addition, we calculate the normal unit vector for the line segment and assign it to every vertex, with the first vertex getting a positive unit vector and the second a negative unit vector. The unit vectors are the small arrows you see in this picture:

extrusion-target

In our vertex shader, we can now adjust the line width at render time by multiplying the vertex’s unit vector with the line width set for that draw call, and end up with two triangles, visualized in this picture by the red dotted line.

The vertex shader looks something like this:

attribute vec2 a_pos;
attribute vec2 a_normal;

uniform float u_linewidth;
uniform mat4 u_mv_matrix;
uniform mat4 u_p_matrix;

void main() {
    vec4 delta = vec4(a_normal * u_linewidth, 0, 0);
    vec4 pos = u_mv_matrix * vec4(a_pos, 0, 1);
    gl_Position = u_p_matrix * (pos + delta);
}

In the main function, we multiply the normal unit vector with the line width to scale it to the actual line width. The correct vertex position (in screen space) is determined by multiplying it with the model/view matrix. Afterward, we add the extrusion vector so that the line width is independent of any model/view scaling. Finally, we multiply by the projection matrix to get the vertex position in projection space (in our case, we use a parallel projection so there is not much going on, except for scaling the screen space coordinates to the range of 0..1).
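The CPU-side setup that feeds this shader can be sketched as follows; the struct and function names are illustrative, not Mapbox GL's actual code:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vertex { float x, y; float nx, ny; };

// Build the four vertices for one line segment: each endpoint is emitted
// twice, once with the positive segment normal and once with the negative
// normal. The vertex shader later extrudes each vertex along its normal
// by the current line width, so this buffer never has to be rebuilt when
// the width changes.
std::vector<Vertex> tessellateSegment(float ax, float ay, float bx, float by) {
    float dx = bx - ax, dy = by - ay;
    float len = std::sqrt(dx * dx + dy * dy);
    float nx = -dy / len, ny = dx / len;  // unit normal, perpendicular to the segment
    return {
        {ax, ay,  nx,  ny}, {ax, ay, -nx, -ny},
        {bx, by,  nx,  ny}, {bx, by, -nx, -ny},
    };
}
```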

Antialiasing

We now have line segments of arbitrary width, but we still don't have antialiased lines. To achieve the antialiasing effect, we're going to use the normal unit vectors, but this time in the pixel shader. In the vertex shader, we just pass through the normal unit vectors to the pixel shader. Now, OpenGL interpolates between both normals so that the calculated vector we receive in the pixel shader is a gradient between the two unit vectors. This means they are no longer unit vectors, since their length is less than one. When we calculate the length of the vector, we get the perpendicular distance of that pixel from the original line segment, in the range of 0..1.

We can use this distance to calculate the pixel's alpha value. If we factor in the line width, we just assign the opaque color to all distances that are within the line width, minus a “feather” distance (see image below). Between linewidth - feather and linewidth + feather, we assign alpha values between one and zero, and to all fragments that are further than the unit vector away from the line, we assign an alpha value of zero (right now, there are no pixels that fulfill that property, but we’ll encounter them soon).

feather

Apart from the line width, we can also vary the feather distance to get blurred lines, or shadows. We can reduce it to zero to have aliased lines. A feather value of 0.5 produces regular antialiasing that looks very similar to what Agg produces. A feather value between 0 and 0.5 produces results mimicking Mapnik’s gamma value.
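The alpha ramp described above can be sketched as a plain function (a linear ramp here for simplicity; the shader uses a smoothstep):

```cpp
#include <algorithm>
#include <cassert>

// Alpha for a fragment whose interpolated normal has length `dist`, the
// perpendicular distance from the line center. Fully opaque up to
// linewidth - feather, fully transparent beyond linewidth + feather,
// with a ramp in between. feather = 0 gives aliased lines; larger
// values give blur or shadow effects.
float lineAlpha(float dist, float linewidth, float feather) {
    if (feather <= 0.0f) return dist <= linewidth ? 1.0f : 0.0f;
    float t = (linewidth + feather - dist) / (2.0f * feather);
    return std::clamp(t, 0.0f, 1.0f);
}
```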

Line Joins

The technique above works for singular line segments, but in most cases we’re drawing lines composed of several segments joined together. When joining line segments, we have to choose a line join style and move the vertices accordingly:

overlong-unit-vectors

Earlier we calculated the normal of the line segment and assigned that to the vertex. This no longer works in the case of line joins because we actually need to calculate a per-vertex normal, rather than a per line segment normal. The per-vertex normal is the angle bisector normal of the two line segment normals.

Unit vectors for line joins also don’t work because the distance of the vertex from the line at join locations is actually further away than one. So rather than using the angle bisector unit vector, we just add the line segment unit vectors, which results in a vector that is neither a unit vector nor a normal. I call it an extrusion vector.
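Concretely, one way to build an extrusion vector that still projects to the correct half-width on both segments is to normalize the summed normals and scale by the miter length. This is a sketch; the exact scaling and encoding in Mapbox GL differ in detail:

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { float x, y; };

// Unit normal of the segment from a to b.
Vec2 segmentNormal(Vec2 a, Vec2 b) {
    float dx = b.x - a.x, dy = b.y - a.y;
    float len = std::sqrt(dx * dx + dy * dy);
    return {-dy / len, dx / len};
}

// Extrusion vector at the join between (prev, join) and (join, next):
// the angle-bisector direction of the two segment normals, scaled so
// that projecting it onto either normal gives the half line width.
// Its length (the "miter length") grows toward infinity as the turn
// gets sharper, which is why a miter limit is needed.
Vec2 extrusionVector(Vec2 prev, Vec2 join, Vec2 next) {
    Vec2 n1 = segmentNormal(prev, join);
    Vec2 n2 = segmentNormal(join, next);
    float sx = n1.x + n2.x, sy = n1.y + n2.y;
    float len = std::sqrt(sx * sx + sy * sy);
    Vec2 unit = {sx / len, sy / len};
    float miter = 1.0f / (unit.x * n1.x + unit.y * n1.y);  // 1 / cos(half turn angle)
    return {unit.x * miter, unit.y * miter};
}
```

For a straight continuation the extrusion is just the unit normal; for a 90° turn its length is √2.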

Unfortunately, we now have another problem: extrusion vectors are no longer normal to the line segment, so interpolation between two of them will not yield a perpendicular distance. Instead, we introduce another per-vertex attribute, the texture normal. This is a value of either 1 or -1, depending on whether the normal points up or down. This is of course not an actual normal because it has no orientation in 2D space, but it’s sufficient for achieving interpolation between 1 and -1 to get our line distance values for the antialiasing.

Since we don’t want to introduce yet another byte, of which we’d effectively use only a single sign bit, we encode the texture normal into the actual vertex attribute. The vertex attributes use 16-bit integers (-32768..32767) that are big enough to hold our typical vector tile coordinates of 0..4095. We double each coordinate (0..8190) and then use the least significant bit to store the texture normal. In the vertex shader, we extract that bit and use the model/view matrix to scale our coordinates down to the actual size.
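The bit-packing might look like this; which bit value means "up" is an arbitrary choice here:

```cpp
#include <cassert>
#include <cstdint>

// Pack a vector tile coordinate (0..4095) and a texture normal (+1 or -1)
// into one 16-bit vertex attribute: the coordinate is doubled (0..8190)
// and the least significant bit stores the normal's direction.
int16_t packCoord(int16_t coord, int normalSign) {
    return static_cast<int16_t>(coord * 2 + (normalSign > 0 ? 1 : 0));
}

int16_t unpackCoord(int16_t packed) { return static_cast<int16_t>(packed >> 1); }
int unpackNormal(int16_t packed) { return (packed & 1) ? 1 : -1; }
```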

To save memory, we encode the extrusion vectors with one byte per axis, so we have an (integer) range of -128..127 for every axis. Unfortunately, extrusion vectors can grow arbitrarily long for line joins because the extrusion vector length grows to infinity as the angle gets more acute. This is a common problem when drawing line joins, and the solution is to introduce a “miter limit”. If the extrusion vector gets longer than the miter limit, the line join is switched to a bevel join. This allows us to scale up the floating point normal dramatically so that we retain enough angle precision for the extrusion vector.

Mapbox GL

Look for future blog posts as we talk about more of the design and engineering work that has gone into Mapbox GL. Lines are just a small but necessary part of the bigger picture of what goes into high-quality custom maps rendered in realtime on the device.

40 cm satellite imagery starts today

Starting this morning, DigitalGlobe has new permission from the government to sell satellite imagery at 40 cm (16 inch) resolution, up from 50 cm (20 inch). The limit will drop further to 25 cm (10 inches) later this summer, once they’ve launched WorldView-3, which will be the first private satellite technically capable of that resolution. The numbers don’t tell the story, though—let’s look at some pictures.

If going from 50 cm resolution to 40 cm resolution sounds like a small change at first, remember that we’re talking about square pixels. When square A is only ¼ longer on a side than square B, it contains more than 150% as much area. Therefore, a slightly smaller linear size means a lot more clarity. I’ve taken some aerial imagery of San Francisco’s Golden Gate Park and resampled it to demonstrate:

50

This is 50 cm imagery, the standard as of yesterday. We’re looking at the California Academy of Sciences building at right, with its distinctive green roof.

40

And here’s 40 cm imagery. With more than half again as much information, we’ve gone from seeing the crowd in front of the museum to seeing their shirt colors. Individual shrubs start to appear, and we can read more road markings—a classic index of sharpness.

This is great news for everyone who uses satellite imagery, but I’d like to highlight two particular points. One is that this shows the US government is shifting out of its post–Cold War mindset of strictly controlling access to commercial imagery. If tight resolution limits made sense two decades ago, they don’t anymore, and regulators are changing with the times. The second is that this isn’t just about San Francisco, NYC, Paris, and other metropolises. Many of them already have satisfactory aerial imagery. This is about having ultra-high-res imagery, and especially series of ultra-high-res imagery over time, of anywhere in the world.

Want to talk imagery? Say hi on Twitter!

Map Label Placement in Mapbox GL

Well-placed labels can be the difference between a sloppy map and a beautiful one. Labels need to clearly identify features without obscuring the map.

The normal requirements for map labelling are to place labels as clearly as possible without any overlap. Regular maps just need to avoid label overlap for a single, fixed zoom level and rotation.

Good label placement is a hard problem that we solve for Mapbox GL. We need our label placements to work at any zoom and any rotation. We need label placements to be continuous, so that labels don’t jump around when zooming or rotating. We need labels to be seamless across tiles. We need to support changing font sizes as you zoom. And it all needs to be fast.

Placement needs to support both horizontal labels and curved labels that follow a line. Both types of labels need to behave smoothly when zooming and rotating. Labels can never overlap, even when rotating. Horizontal labels stay horizontal and curved labels rotate with the map. Labels are flipped to avoid being drawn upside down, and curved labels smoothly slide along roads.

There is plenty of academic research on label placement, but most of this applies to maps with fixed rotation and with separate zoom levels. Dynamic maps with continuous zooming, panning, and rotation need a completely different approach. Our implementation expands on a paper by Been and Yap that establishes four ideal requirements for continuous and interactive labelling:

  1. Labels should not disappear when zooming in or appear when zooming out.
  2. Labels should not disappear or appear when panning except when sliding out of view.
  3. Labels should not jump around, but instead should be anchored.
  4. Label placement should be deterministic, no matter how you got to the current view.

The paper provides guidance on implementing this for horizontal labels, but we go further by supporting rotation and curved labels.

Our implementation has two steps:

  1. Preprocessing
  2. Rendering

The rendering step needs to be fast so that Mapbox GL can rerender the entire map every frame for smooth interaction. Most of the placement work happens in the preprocessing step:

  1. Generate anchor points for each label.
  2. Calculate the positions of individual glyphs relative to the anchors.
  3. Calculate the zoom levels at which the labels and glyphs can be shown without overlap.
  4. Calculate the rotation range in which the label can be shown.

Generating anchor points

Each label has an anchor. An anchor is the point at which a label is positioned when zooming or rotating.

Labels for point features have a single anchor, the point.

For lines, we want to show multiple labels so we interpolate along the line adding an anchor every x pixels. Distance between labels changes when zooming, so we add a minimum zoom level for each anchor to maintain appropriate spacing. Fewer labels are shown at lower zoom levels and more appear as you zoom in.
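A sketch of this anchor placement, under an assumed halving scheme (each zoom level below the maximum keeps every second remaining anchor; the real spacing logic in Mapbox GL differs):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Anchor { float x, y; float minZoom; };

// Drop anchors every `spacing` pixels (measured at maxZoom) along a
// straight line. Zooming out halves on-screen distances per level, so
// each anchor is tagged with the lowest zoom at which it keeps adequate
// spacing: at maxZoom - k, only every 2^k-th anchor remains visible.
std::vector<Anchor> placeAnchors(float ax, float ay, float bx, float by,
                                 float spacing, float maxZoom) {
    float dx = bx - ax, dy = by - ay;
    float len = std::sqrt(dx * dx + dy * dy);
    int count = static_cast<int>(len / spacing);
    std::vector<Anchor> anchors;
    for (int i = 1; i <= count; ++i) {
        float t = (i * spacing) / len;
        // The number of trailing zero bits of i decides how many zoom
        // levels this anchor survives when zooming out.
        int k = 0;
        for (int n = i; (n & 1) == 0; n >>= 1) ++k;
        anchors.push_back({ax + t * dx, ay + t * dy, maxZoom - k});
    }
    return anchors;
}
```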

Generating positioned glyphs for each anchor

For each piece of text we already have a list of glyphs and their positions, but these positions need to be adjusted for curved labels.

During the render step we can only shift glyphs along a straight line. To draw curved text we need to add multiple copies of glyphs — one for each line segment a glyph appears on. Each of these glyphs has minimum and maximum zoom levels that hide the glyph when it slides off the end of a segment, so that only one instance of each original glyph is shown at a time.

Usually these glyphs are completely hidden when out of range, but here they are shown with a reduced opacity:

Restricting the zoom range

To avoid label collisions, we need to restrict the zoom level at which a label is first shown. As you zoom in, labels get spaced further apart, opening room for new labels. Once a label is shown, it will not be hidden as you zoom in.

We use an R-tree that contains already-placed labels to narrow down which labels might collide. We then calculate the zoom level at which the two labels will fit side-by-side. It is safe to show the label for any zoom level higher than this one.
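Once a potential collider is found, the first safe zoom follows from how distances scale with zoom: on-screen distances double with every level while text sizes stay fixed. A sketch under that model (names are illustrative):

```cpp
#include <cassert>
#include <cmath>

// Given two label anchors `dist` apart in screen pixels at `placementZoom`,
// and each label's half-extent in pixels, find the lowest zoom at which
// the labels fit side by side: solve dist * 2^(z - placementZoom) >=
// half1 + half2 for z.
float firstSafeZoom(float dist, float placementZoom, float half1, float half2) {
    return placementZoom + std::log2((half1 + half2) / dist);
}
```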

Restricting the rotation range

The next step is calculating how far the label can be rotated before it collides with other labels. There are two types of collisions: a curved label colliding with a horizontal label, and a horizontal label colliding with a horizontal label.

Horizontal-horizontal collision

There are eight possible angles at which a pair of horizontal labels could collide. Each of these possible collisions is checked with some trigonometry.

Curved-horizontal rotational collisions

A curved-horizontal collision occurs when a corner of one label’s bounding box intersects an edge of the other label’s bounding box. For each of the eight bounding box corners, we calculate the angles at which a circle (formed by that point being rotated around the label’s anchor) intersects the edges of the other box. These are the angles at which a collision would begin and end.
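For a single horizontal edge, the collision angles come from basic trigonometry. A sketch, assuming anchor-relative coordinates and a horizontal edge at height `e` (vertical edges work the same way with cosine):

```cpp
#include <cassert>
#include <cmath>
#include <optional>
#include <utility>

constexpr float kPi = 3.14159265358979f;

// As a label rotates about its anchor, a bounding-box corner at radius r
// traces a circle. That circle crosses a horizontal edge at height e
// wherever r * sin(theta) = e, giving the two angles at which a
// collision with that edge can begin and end.
std::optional<std::pair<float, float>> edgeCrossingAngles(float r, float e) {
    if (std::fabs(e) > r) return std::nullopt;  // the circle never reaches this edge
    float a = std::asin(e / r);
    return std::make_pair(a, kPi - a);
}
```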

Seamlessness

Mapbox GL downloads vector tiles with data for the area and zoom level it is currently displaying. When new tiles are downloaded and their labels have been placed, an old tile’s label may need to be hidden to make way for a more important label. This will be handled in a resolution step that has not yet been implemented.

Mapbox GL

Look for future blog posts as we talk about more of the design and engineering work that has gone into Mapbox GL. Labels are just a small but necessary part of the bigger picture of what goes into high-quality custom maps rendered in realtime on the device.

Adding San Francisco Buildings on OpenStreetMap

Our data team has been busy tracing building footprints in San Francisco. The effort is not finished yet, but so far we have spent many hours tracing, improving, and validating over 114,000 building footprints. Here is an animation of the progress we have made.

animated gif showing building progress

Mapping buildings — progress in San Francisco.

All buildings are traced by hand from a combination of satellite imagery and the official San Francisco building footprint layer. While the building footprint layer is too inaccurate for an import, it serves as a great reference layer to guide tracing. Our topmost priority is quality. We have taken particular care to respect existing edits, reaching out to individual mappers on the ground where possible and improving existing geometries where we found clear opportunities.

This is an ongoing effort. Join us in tracing, or post feedback on GitHub; we’d love to hear from you.

Drawing Text with Signed Distance Fields in Mapbox GL

Last week, Ansis explained how labels are placed in Mapbox GL, but once we know where to place labels, we still have to figure out how to draw them.

Even in 2014 after over two decades of OpenGL, rendering text is not easy since OpenGL can only draw triangles and lines. Rendering text for maps is even harder because we need the same glyphs in many different sizes, and the text placement changes every single frame when the user is rotating the map. In addition, we need to paint text halos for better contrast.

properties

Using a glyph atlas

The go-to open source library for rasterizing text is FreeType. Rendering text typically works by having FreeType generate a monochrome bitmap of the glyph in a temporary buffer, then blending the buffer pixel-by-pixel to the correct position on the destination bitmap. This means that we could pre-render all of the glyphs we need into a shared texture, called a texture atlas, then create two triangles (a quad) per glyph that map to the texture. This approach is also implemented by Nicolas Rougier’s freetype-gl.

glyph-drawing

This works nicely until you start to rotate the text. While OpenGL’s linear interpolation is decent, it still looks rather blurry, so just rotating the glyph quads doesn’t work for us. We could regenerate the glyphs whenever a user rotates the map, but that slows down map rendering because we need to make a lot of CPU calculations in every frame and upload new texture and vertex data. This all means that we need to look for a different approach.

Signed Distance Fields

Distance fields (or distance transforms) have been around for ages and have lots of useful properties. In a distance field, every pixel indicates the distance to the closest “element”. Valve introduced the approach of using distance fields for rendering sharp decals in computer games a couple of years ago. And we decided to do just that when rendering glyphs as well.

To render text with signed distance fields, we create a glyph texture at font size 24 that stores in every pixel the distance to the nearest glyph outline, rather than the rasterized glyph itself:

opensans-regular

Inside of a glyph, the distance is negative; outside it’s positive. As an additional optimization, to fit into a one-byte unsigned integer, we’re shifting everything so that values between 192 and 255 indicate “inside” a glyph and values from 0 to 191 indicate outside, plus we clamp the overflowing values. This gives the appearance above of a range of values from black (0) to white (255). In essence, we are using the pixel color values in the texture as a measure of distance from glyph edges.
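The byte encoding can be sketched as follows; the scale of 8 texture levels per pixel of distance is an assumption for illustration:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// Map a signed distance in pixels (negative inside the glyph, positive
// outside) to the one-byte encoding described above: 192 marks the glyph
// edge, larger values are inside, smaller values are outside, and
// anything beyond the representable range is clamped.
uint8_t encodeDistance(float signedDist) {
    const float scale = 8.0f;
    float v = 192.0f - signedDist * scale;
    return static_cast<uint8_t>(std::clamp(v, 0.0f, 255.0f));
}
```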

Like in the previous technique, we create two triangles to form a quad and assign the corresponding texture coordinates so that the distance map of that glyph gets mapped onto that rectangle.

We enable OpenGL’s linear interpolation so that we get a smoothly scaled image. Then, the important part is the alpha test. Depending on how far we want to buffer the glyph, we choose a cutoff value and assign 1 as the alpha value to all pixels that are within the glyph outline and 0 to the ones outside. To get an antialiased look, we’re creating a small alpha gradient around the cutoff value with the smoothstep function. The entire pixel shader looks like this:

precision mediump float;

uniform sampler2D u_texture;
uniform vec4 u_color;
uniform float u_buffer;
uniform float u_gamma;

varying vec2 v_texcoord;

void main() {
    float dist = texture2D(u_texture, v_texcoord).r;
    float alpha = smoothstep(u_buffer - u_gamma, u_buffer + u_gamma, dist);
    gl_FragColor = vec4(u_color.rgb, alpha * u_color.a);
}

You can try text rendering at this demo.

Using signed distance fields for font rendering has a few advantages:

  • Free accurate halos by simply changing the alpha testing threshold.
  • Arbitrary text rotation.
  • Arbitrary text size, though it starts looking a bit off at very large text sizes.
  • A bitmap of a 24px glyph is about 20% smaller than the vector representation of that glyph.

There are a few minor drawbacks too:

  • Text appears a little more rounded.
  • No support for font hinting.

Font hinting changes the glyph outlines so that they fit better in a pixel grid, which is especially useful when rendering small text. However, FreeType disables hinting anyway as soon as you rotate a glyph with a transformation matrix. Additionally, many of our maps are being displayed on very high density (high-DPI or “retina”) screens built into smartphones or tablets, so hinting is much less important on these screens.
