Our team, inside the garage, reads today's Washington Post feature: The New Cartographers.
Washington Post: The New Cartographers
Girl Power @MapBox
I've been on the leadership team at MapBox since day one. I literally filed our incorporation papers, found and built out our office, and opened our first bank account. Now we grow. Today I'm running new Q4 projections, designing partnership contracts, and doing my weekly review of our systems and processes to make them smarter so we can do our best work. The last thing I'll do today is review applicants for our Business Development Lead in San Francisco.
We've grown fast at MapBox, and we've done that by hiring the hungriest people out there who believe in changing everything - including what it means to "make a map". The diversity of our team has made our team stronger, our tools better, and our approaches smarter and more innovative.
We want more women to be a part of this. We're going to continue to bring on the most passionate people out there to our team, and I believe that women make up half of that group. But they don't make up half of our responses to job posts yet. Let's change this. I'm @bonnie on Twitter - reach out to me. This is going to be an amazing ride, we're just getting started.
500px Adds MapBox Maps
500px, the popular photo sharing website, just added MapBox maps to its gorgeous layout. Now, when a user views a specific photo, they have the option of seeing where the photo was taken. 500px leveraged both the static API and MapBox.js. Take a look around the website and see how nicely the maps fit.
Bruno Sánchez-Andrade Nuño joins MapBox as Chief Scientist
Bruno Sánchez-Andrade Nuño is joining MapBox as Chief Scientist. Bruno has a PhD in Astrophysics from the Max Planck Research School and has worked with NASA satellites and space rockets at the U.S. Naval Research Laboratory, and on Science and Technology Policy at the National Academies. He's a perfect fit to work both on the satellite team with Charlie and me and to be a key player on several of our products.
Bruno is a long-time advocate of open data and open source. He recently created an open index to measure climate change adaptation. He brings a strong passion and commitment to expanding the impact of remote sensing and open mapping to benefit society at large. Welcome!
New Map Features on 500px
Since our post yesterday, 500px has rolled out new map features. If you now click on a photo's map you'll see everyone's photos taken in the vicinity. You'll find the maps by clicking on the "Location" link under photos like this one. We think the photo markers are really beautifully done and we're excited to see more mapping goodness coming from 500px soon.
A First Look at Improving OpenStreetMap With New TIGER data
Improving OpenStreetMap with the latest TIGER data
Much of the United States in OpenStreetMap is based on US Census Bureau TIGER 2006 data. While this data has been edited and improved in OpenStreetMap over the past several years, TIGER is also dramatically more accurate and complete now in many places than it was in 2006.
We are starting to incorporate these improvements to TIGER into OpenStreetMap by designing ways to identify where TIGER has been changed and OpenStreetMap hasn't and exposing those areas for editing in OpenStreetMap's iD editor.
This is a screenshot showing a development version of the iD map editor with a layer of TIGER 2012 changes that don't also appear in OpenStreetMap (yellow) and obsolete TIGER data that is still in OpenStreetMap (blue). Mappers can go in and focus on these areas, improving data in OpenStreetMap accordingly. The orange layer is tweet locations that we are using to guide priority areas to map. This layer is similar to our work with Gnip, but sourced from the public Twitter gardenhose instead. The tweets can't be traced into OpenStreetMap, but they are a good indicator of the most frequently visited places.
We're busy right now making the map update dynamically from OpenStreetMap. Stay tuned for updates.
Congress for iOS by Sunlight Foundation Q&A
Our friends at Sunlight Foundation recently released the iOS version of their popular Congress app. This free app lets you find out more about members of Congress, watch bills work their way through both chambers, and explore congressional districts using MapBox maps. It uses their open Congress API and the app itself is even open source on GitHub.
We got in touch with Jeremy Carbaugh and Daniel Cloud, two of the folks at Sunlight who helped develop the app, to find out more.
MapBox: Tell us about the app and why you made it.
Sunlight: Congress for iOS gives you access to the latest information from Washington like bills, votes, legislators and more. We launched a very successful version of Congress for Android four years ago and are finally catching up on iOS. We feel that the general public has to have easy access to information if they want to be involved in the governing of our country. It should be dead simple to find the legislators that represent you, know what they are working on, and get in touch with them to have your opinions heard.
Lobbyists and other influencers in Washington have access to expensive tools that let them know what's happening on Capitol Hill. This often puts citizens and smaller groups at a disadvantage when it comes to having an impact on policy. We try to make tools that anyone can use and level the playing field. Congress for iOS is one part of that mission.
MapBox: Maps are featured prominently in the app. What were your design goals with using a custom map instead of the iOS default?
Sunlight: Our goal for maps in Congress for iOS was to make it easy for users to examine legislator districts and geolocate their legislators. Maps, especially district maps, can convey a lot of information to a person that is familiar with an area. How large is the district? Is it a really weird shape that could indicate gerrymandering? Are people of various socio-economic statuses included in the district or is it homogenous?
"TileMill is an invaluable resource that lets us design maps without having to write code."
Sunlight has used MapBox in our web projects, so we were familiar with the features and customizability of MapBox maps. Since we were going to overlay district shapes, it was important that the map be clean and legible, and we knew we could control that with MapBox maps. TileMill is an invaluable resource that lets us design maps without having to write code. It's also great that we can use the same underlying map tiles across platforms, making our design consistent on iOS, Android, and the web.
MapBox: What were the good parts of using our SDK and what were the hard parts? Of our service? API?
Sunlight: It was easy to start using the SDK, especially since it is available with CocoaPods. Between the support forums and the example code on the SDK site, most of the questions we had were covered. There were a few instances where I had to dig through the source code to find the order in which methods were called or how certain objects were used under the hood, but it didn't take too much work to figure out. Fortunately, we had access to the code since the SDK is open source!
We did consider rendering the district Shapefiles into the tiles themselves, but we didn't have the stamina to manually export and upload over 500 individual tilesets. It would be great to have an API so that much of that work could be done programmatically. It would make for much faster and smoother maps, though we are still quite happy with the result using RMPolygonAnnotation.
The accessibility of Congress for iOS is very important to us, so we were really happy with how easy it was to make the MapBox map work with VoiceOver. It was just two simple lines of code:
[_mapView setIsAccessibilityElement:YES];
[_mapView setAccessibilityLabel:[NSString stringWithFormat:@"Map of %@, district %@", stateName, district]];
When the map is read by VoiceOver, it tells the user the state and district that are being displayed.
Thanks again to Jeremy and Daniel at Sunlight for taking the time to talk with us. You can find more info about Congress for iOS at congress.sunlightfoundation.com and you can follow the app on Twitter at @congress_app. You can also download the source code to use as a starting point for your own app.
Have a great app that you've built using MapBox? Get in touch with us or feel free to message me directly on Twitter or App Dot Net.
ArcGIS + MapBox Sitting in a Tree... Thanks to Arc2Earth
ArcGIS, and the larger Esri suite, can be used in powerful ways with MapBox. All that is needed is the Arc2Earth Sync plugin to bring your desktop GIS to the cloud with MapBox.
With Arc2Earth Sync you can connect ArcGIS to TileMill for high end cartography, direct one click ArcGIS to MapBox publishing to the cloud, and create MBTiles from within ArcGIS.
You can read more about MapBox for GIS professionals or just sign up with a free month of Arc2Earth Sync and get a Standard account on MapBox.com free for one month. In addition, everyone that signs up for Arc2Earth in the next week gets one of these temporary tattoos - just tweet @arc2earth, @esri + @mapbox.
Happy Hour Thursday @Bloodhound in San Francisco
We're kicking off August with beers this Thursday, August 1st, at 6:30pm @Bloodhound, our favorite bar by our office in San Francisco. Expecting to see a lot of good mapping friends from @CodeforAmerica, Esri San Francisco, and team #geopork. Come drink! + look for @jfire, @enf, and @ericg
Processing RapidEye Imagery in Minutes
RapidEye has high-res, extremely up-to-date satellite data. Their satellite constellation provides daily images of anywhere in the world with 5 meter resolution, making it especially useful for some of our agricultural and industrial subscribers. And this beautiful imagery is easy to work with: you can go from a data delivery to a rendered map in just a few steps, using all free software.
Here’s how to take data directly from a RapidEye download, through processing, and into a cloud-published map in under 10 minutes. To follow along, you’ll want a recent version of GDAL and a copy of ImageMagick with tiff support. Both are in most Unix package systems, or you can get them from their project pages.
RapidEye imagery comes already georeferenced and corrected for topography – at level 3A, in remote sensing jargon. The delivery will have assorted metadata files, a small preview (“browse”) image, and a large geotiff with the data payload. The geotiff contains 5 spectral bands, the first three of which are visible blue, green, and red. We can make an ordinary RGB image and reproject it right away (your input tiff’s name will vary, of course):
gdal_translate -b 3 -b 2 -b 1 \
1155010_2013-06-13_RE1_3A_166305.tif rgb.tif
gdalwarp -co photometric=RGB -co tfw=yes -t_srs EPSG:3857 \
rgb.tif rgb-proj.tif
The -b flags tell GDAL which bands to pull out of the source image, and -co photometric=RGB means they'll be interpreted as red, green, and blue respectively in the output. (We'll cover the -co tfw=yes shortly.) As you may have seen before if you've worked with other satellite data, the raw image is dark and muddy:
But that’s only because it has to represent a huge range of information. The bright, natural-looking image is in there, it’s just hiding. Add lightness and contrast:
convert -sigmoidal-contrast 30x15% -depth 8 \
rgb-proj.tif rgb-proj-bright.tif
You’ll see some warnings as convert skips the georeference tags, and you may find that a different brightness/contrast mix (say, 10x20%) works better for your scene. We also drop down to 8-bit color now that we’re done with processing. And presto! We’ve taken a RapidEye download to a true-color picture:
To import this into TileMill, take the .tfw file that the gdalwarp step created (that was the -co tfw=yes) and rename it to match the final image, then use GDAL one last time to bundle the adjusted image data and the geographical information into a geotiff:
cp rgb-proj.tfw rgb-proj-bright.tfw
gdal_translate rgb-proj-bright.tif RapidEye-ready-for-mapping.tif
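For reference, the .tfw world file being copied is just six plain-text lines: the x pixel size, two rotation terms, the negative y pixel size, and the map coordinates of the upper-left pixel's center. The values below are made-up but show the shape of a 5 m scene in Web Mercator:

```
5.0
0.0
0.0
-5.0
-13180000.0
4070000.0
```

Renaming it to match the image is enough, because GDAL looks for a sidecar .tfw with the same base name when the tiff itself carries no georeference.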
TileMill will now happily open it as SRS 900913. I assembled some sample images into a larger scene using this process:
If you head over to the southwest corner of this map, you’ll see turquoise specks in the residential areas. The color seemed so out of place that I was worried I’d misprocessed the imagery. I’d looked at this part of the world – the suburbs of Los Angeles – while working on Landsat 8 processing, and certainly hadn’t noticed any neighborhoods full of houses with light blue roofs. Then I put two and two together: those are swimming pools! You simply can’t see them at Landsat 8’s resolution:
Porter Ranch, a neighborhood of Los Angeles, in late June 2013. Left: Landsat 8 data (courtesy of USGS) at 15 m/px. Right: RapidEye data, 5 m/px.
For every Landsat 8 pixel, even after pansharpening, RapidEye delivers nine. But it’s not just spatial resolution that matters – there’s also temporal resolution, or the time between successive images. For Landsat 8, that’s 16 days, and usually the trade-off is slower revisits as resolution increases. But of course, with five identical satellites and the ability to aim, RapidEye can deliver daily.
A Powerful Tool for Agriculture
Among many other applications, this is a powerful tool for large-scale agriculture. A big farm operation might have crops planted further than the eye can see, and very small changes out in the field (such as how the plants are responding to irrigation, how fast they’re ripening, or whether they’re showing indications of disease) can be vital to discover as soon as possible. The RapidEye sensors offer another unusual service to agriculture: a red edge band, between red and conventional near-infrared (NIR). This slice of the spectrum is even more sensitive than NIR to differences in vegetation – between healthy and unhealthy, but also between different varieties, like trees v. ground crops.
To get a sense of what the red-edge band can show, let’s construct a false-color image using it. In the delivered geotiff, spectral bands are numbered in order from shortest wavelength (band 1 is blue) to longest (band 5 is near-infrared). To make an image with red-edge as red, red as green, and blue as blue:
gdal_translate -b 4 -b 3 -b 1 -co photometric=RGB \
1155010_2013-06-13_RE1_3A_166305.tif 431.tif
With -sigmoidal-contrast 40x14% for clarity, this shows urban areas and bare land in blues and grays, while agriculture and natural vegetation are in reds and yellows:
We can even dabble in band math, using ImageMagick’s -fx operator. It’s not the fastest tool, but its sheer flexibility is hard to beat. Let’s look at red-edge NDVI – an index that highlights leafy, healthy plants. The -fx operator can be a little finicky, and it works best on images that ImageMagick itself constructed, so first run the 4-3-1 image through convert:
convert 431.tif 431-prepared.tif
The formula for NDVI is (NIR − red) ÷ (NIR + red). -fx will find NIR (in this case red-edge, or very near infrared) in the red channel of the image, which it calls u.r, and actual red in the green channel, or u.g. I’m also throwing in a -monitor, which you can add to any convert command to track its progress:
convert -monitor 431-prepared.tif \
-fx '(u.r - u.g) / (u.r + u.g)' \
rededge-ndvi.tif
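As a sanity check on the band math, here is the same NDVI formula applied to a single hypothetical pixel (red-edge reflectance 0.6, red 0.1 – values I picked to represent healthy vegetation, not taken from the scene):

```shell
# NDVI = (NIR - red) / (NIR + red) for one made-up pixel
awk 'BEGIN { nir = 0.6; red = 0.1
             printf "%.2f\n", (nir - red) / (nir + red) }'   # prints 0.71
```

Healthy plants reflect strongly in the (near-)infrared and absorb red, so they push the ratio toward 1; bare soil and water sit near 0 or below.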
This gives us a grayscale image that, even without further processing, clearly shows where there are healthy plants. Zooming in on an area with crops, we can find very small variations in plant vitality within individual fields:
It only takes a couple minutes of processing to start finding this kind of insight. If that farm is your business, you know exactly where your water and other resources are best applied tomorrow morning, or even this afternoon. And RapidEye’s revisit capability can sustain fine-grained analysis over time, so you see not just points of data but curves and trends.
We’re building infrastructure for a future where this kind of frequent, high-quality imagery becomes a normal part of not only agriculture, but logistics, science, policy, journalism, and so on. If you’re interested, you can sign up for the MapBox Satellite Live beta, and as always you should say hi to me (@vruba), Chris (@hrwgc), or Bruno (@brunosan) on Twitter if you have questions or comments. We’re happy to put you in touch with our contacts at RapidEye if you’d like to start ordering imagery today and working with it in TileMill.
Connecting Communities: Edit OpenStreetMap Directly from Foursquare
Foursquare Superusers can now edit OpenStreetMap directly from their moderation interface.
Foursquare has always benefited by collecting location information from users who have a passion for accurate information on their check-ins. These users can now also improve OpenStreetMap simply by clicking an "edit" button on the map.
Foursquare Superusers will now find an edit button on their maps.
The "edit" button will lead directly to OpenStreetMap's web editor at the right location, ready to go. When clicked for the first time, it will lead to a page introducing the user to OpenStreetMap, explaining the basics and encouraging them to create an account and start mapping.
Foursquare's OpenStreetMap introduction page
As of today, this feature is available to all Foursquare Superusers in the UK, Australia, Germany, and Brazil - this is how you can become a Superuser.
Connecting Communities
This feature is a big step towards further connecting communities of map users to OpenStreetMap. This is particularly exciting in the case of Foursquare, where a thriving power user community is already taking on big responsibilities in keeping Foursquare's locations fresh. These are the same users that we saw roll up their sleeves and jump into OpenStreetMap soon after Foursquare switched over to the OpenStreetMap based MapBox Streets. It's great to tap further into this energy and better build out the integration between Foursquare and OpenStreetMap. We are planning on making connecting communities to OpenStreetMap even easier; follow our OpenStreetMap Development Blog for more details.
Super Sharp 50cm Pléiades Satellite Imagery on MapBox.com
Pléiades is a constellation of two identical high-res satellites, 1A and 1B, on the same sun-synchronous orbit on opposite sides. This basically means that Astrium, Europe’s leading space technology company, can capture any point on Earth in less than 24 hours, always at similar illumination angles. Their imagery has a 20 km wide ground footprint at nadir with 50cm resolution (2 meter blue, green, red, and near-infrared bands, and a 0.5 meter panchromatic band). Using open source tools you can publish this data on MapBox in minutes. This is all part of our larger goal of making MapBox the number one satellite imagery publishing platform.
Any point on Earth, every day, with 50cm resolution. With this guide you can go from image download to rendered maps in minutes, all with open software.
Astrium has set up an impressive service with Pléiades. Uplink stations can schedule acquisition up to 3 times per day. Satellites can rotate to maximize pointing opportunities with each pass. Downlink stations can collect up to 1 million km² per day, per satellite. The processing pipeline can then create calibrated and orthorectified images in just 30 minutes. With the instructions below, you can process those high-res images in minutes:
Zoom and pan around the map above, or follow a quick tour full screen. You are looking at Napoli (Italy) as it was captured on a sunny morning on February 14th, 2013 at 10:03am local time. The satellite is 700 km above ground, looking down and forward towards the city as it descends south on its orbit.
We're working with a brand new Pléiades product offered by Astrium: Pléiades Optimized Visual Rendering. This product is great for users who want a high-quality product that requires minimal further processing to turn into a beautiful map.
To import the image into TileMill we need to warp it into the Google Mercator projection. Pléiades offers TIF images ready for warping, but also JP2 images, which have smaller file sizes but are not as widely supported across open source image processing tools. In this guide we will use a JP2 image, but we will need to split the original image into tiles, as the 3GB JP2 image is close to 8GB as a GeoTIFF. We are currently further optimizing this process, backed by a fully open source stack.
These are the basic steps of our pipeline:
- Install software dependencies
- Convert downloaded image from JP2 to GeoTIFF
- Split and warp into smaller Web Mercator tiles
- Color correct
- Map rendering with TileMill and upload to MapBox.com
Software Install
You might need this step if you have never followed any of our processing guides. All the tools we need are quick and free to download and use: mostly GDAL, ImageMagick and TileMill.
This is what I used on Ubuntu (on a Mac you can either brew install or download binaries from the project pages):
sudo apt-get install gdal-bin unzip jasper wget qgis s3cmd imagemagick libjasper-runtime eog tilemill
For Pléiades we also need Orfeo Toolbox to convert JP2 to TIF. Check out Orfeo's Installation Guide for full install instructions. If you're installing on Linux, you can use:
sudo apt-get install -y libotb otb-bin python-otb
Or, on a Mac, it's easy to install via homebrew:
brew install orfeo
Convert downloaded image from JP2 to GeoTIFF.
The Pléiades image we are using is already georeferenced and corrected for topography. It also uses the 0.5 m panchromatic black & white image to pansharpen the 2 m resolution color image. There are a few utilities that support decoding JP2 files into TIFF images, but Orfeo Toolbox proved to be the best at preserving original geographic information and image quality.
Convert the image from JPEG2000 to GeoTIFF using Orfeo Toolbox's otbcli_ExtractROI utility.
IMAGE=$(ls *.JP2)
otbcli_ExtractROI \
-in ${IMAGE} \
-out ${IMAGE%.JP2}.tif uint8 \
-ram 4096;
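The ${IMAGE%.JP2} in the -out argument is plain bash parameter expansion: % strips the shortest matching suffix, so the output GeoTIFF keeps the same base name as the JP2. A standalone sketch:

```shell
IMAGE="IMG_PHR1A_PMS_201302141015025_ORT_644823101-001_R1C1.JP2"
OUT="${IMAGE%.JP2}.tif"   # %.JP2 removes the trailing .JP2, then we append .tif
echo "$OUT"               # IMG_PHR1A_PMS_201302141015025_ORT_644823101-001_R1C1.tif
```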
The produced image has four bands. Since we want RGB (the first three), we throw away the fourth band by creating a Virtual Raster.
IMAGE="IMG_PHR1A_PMS_201302141015025_ORT_644823101-001_R1C1.tif"
gdal_translate \
-b 1 \
-b 2 \
-b 3 \
-co PHOTOMETRIC=RGB \
-of VRT $IMAGE \
rgb.vrt
Split and warp into smaller Web Mercator tiles.
The Pléiades image dimensions are roughly 47000x41000 pixels. As a JP2 image, the image is just over 3 GB; as a GeoTIFF, however, the image is nearly 8 GB. Knowing the capabilities and limitations of GDAL's GeoTIFF support, we cut up the larger GeoTIFF into smaller tiles of more manageable sizes.
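A quick back-of-envelope check shows what grid those dimensions imply at the 8192 px tile size we use in the retiling step (ceiling division, since partial tiles still count):

```shell
W=47000; H=41000; PS=8192            # image dimensions and tile size, in pixels
NX=$(( (W + PS - 1) / PS ))          # ceiling division: columns of tiles
NY=$(( (H + PS - 1) / PS ))          # rows of tiles
echo "${NX}x${NY} tiles"             # 6x6 tiles
```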
gdal_retile.py is a convenient utility for dividing an image into smaller images without having to compute particular spatial extents: you use the -ps flag to specify the output dimensions of the tiles, and gdal_retile.py calculates the rest. Since we will still need to do further processing on the tiles, we make Virtual Rasters (VRT) rather than actual image tiles, which also saves time and disk space. However, gdal_retile.py does not fully support creating VRTs -- the VRTs it creates will show up as fully black images if you try to load them in QGIS. This is fine for us, though, since we only need the VRTs for their tile bounding boxes, so we append 2> /dev/null to suppress the error printing.
We want the tiles to be in our ultimate SRS, EPSG:3857, to ensure we retain a seamless image across reprojection and tiling. Since gdal_retile.py retiles based on an image's native SRS, our first step is to create a Virtual Raster of the converted image in Web Mercator. A VRT here saves a lot of processing time.
gdalwarp \
-of VRT \
-s_srs EPSG:32633 \
-t_srs EPSG:3857 \
rgb.vrt \
3857.vrt
Next we make VRT tiles at 8192x8192px dimensions of the VRT, which we use to create a tile index shapefile using gdaltindex. The shapefile will be used later to determine the tiles cut out from the original image.
mkdir -p tiles
gdal_retile.py \
-ps 8192 8192 \
-targetDir tiles \
-of VRT \
-s_srs EPSG:3857 \
3857.vrt 2> /dev/null;
gdaltindex \
-t_srs EPSG:4326 \
tiles.shp \
tiles/*vrt;
Next we take advantage of gdalwarp, wrapped within a small function specific to the Pléiades tiles. This function does a few things:
- Creates a tile subset of the original RGB full-size image for each tile contained in the tile index file.
- Reprojects the tile to Web Mercator: -t_srs EPSG:3857
- Once the tile is warped to Web Mercator, runs gdaladdo to add overviews, which make the tiles faster and easier to work with in TileMill.
function warp() {
x="$1"
y="$2"
gdalwarp \
-q \
-s_srs EPSG:32633 \
-t_srs EPSG:3857 \
-cutline tiles.shp \
-cwhere "location = \"tiles/3857_${x}_${y}.vrt\"" \
-crop_to_cutline \
rgb.vrt \
tiles/${x}_${y}.tif;
gdaladdo \
-q \
-r cubic \
tiles/${x}_${y}.tif \
2 4 8 16;
}
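The escaped quotes in the -cwhere argument matter: the location field in the tile index stores each VRT's path, and the attribute filter needs that path quoted inside the outer double quotes. A quick look at what the shell actually hands to gdalwarp after expansion:

```shell
x=3; y=4
where="location = \"tiles/3857_${x}_${y}.vrt\""   # the inner \" survive expansion
echo "$where"   # location = "tiles/3857_3_4.vrt"
```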
Since our gdal_retile.py step produced a 6x6 tiling scheme for the original image, we can set up a simple nested for loop to cycle through the creation and processing of each tile subset. We use the -cutline and -cwhere flags available with gdalwarp to cut out the subset tiles based on our tile index shapefile. Depending on the computing power available, this step can be made significantly faster by processing multiple tiles in parallel.
for x in {1..6}; do
for y in {1..6}; do
warp $x $y;
done
done
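The parallel speed-up mentioned above can be sketched with xargs. This assumes bash (export -f makes the function visible to subshells); the warp here is a stub so the plumbing is visible on its own – swap in the real function from above:

```shell
warp() { echo "processing tile $1 $2"; }   # stub; replace with the real warp
export -f warp                             # expose the function to child shells
for x in {1..6}; do
  for y in {1..6}; do
    echo "$x $y"
  done
done | xargs -n 2 -P 4 bash -c 'warp "$0" "$1"'
```

With -P 4, up to four tiles process at once; note the output order is no longer deterministic.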
Lastly, we take advantage of VRTs once again, using gdalbuildvrt to create a mosaic image of our finished tiles. Note that since we will bring this VRT into TileMill to render, it is important to supply absolute paths for the source tiles.
gdalbuildvrt \
-srcnodata "0 0 0" \
-vrtnodata "0 0 0" \
mosaic.vrt \
$(pwd)/tiles/*.tif;
Finally, we obtain the right format, coordinates and projection. The next step is to correct those dark, flat colors.
Color correct.
We are going to use ImageMagick to enhance the colors of the scene. The original image is sensor corrected, but the sensor has a much wider range than the current scene, and a different sensitivity profile than the eye. Thus, we are going to add brightness and contrast to the scene. In particular, after some iterations we decided to:
1. Increase the saturation by 30%: -modulate 100,130
2. Slightly decrease the green and blue channels: -channel G -gamma 0.95 -channel B -gamma 0.875
3. Apply a contrast correction across the intensity histogram of all channels: -channel RGB -sigmoidal-contrast 11x40%
convert -monitor -modulate 100,130 -channel G -gamma 0.95 -channel B -gamma 0.875 -channel RGB -sigmoidal-contrast 11x40% input.tif -compress lzw output.tif
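Sigmoidal contrast maps pixel intensities through a normalized logistic curve: midtones get stretched while shadows and highlights are compressed gently instead of clipped. A quick awk sketch of the 11x40% curve (steepness 11, midpoint 0.40; this is my own transcription of the standard formula, for intuition only):

```shell
awk 'BEGIN {
  a = 0.40; b = 11.0                                # midpoint and steepness (11x40%)
  lo = 1/(1+exp(b*a)); hi = 1/(1+exp(b*(a-1)))      # curve endpoints, for normalization
  for (x = 0; x <= 1.0001; x += 0.25) {
    y = (1/(1+exp(b*(a-x))) - lo) / (hi - lo)       # normalized logistic
    printf "%.2f -> %.3f\n", x, y
  }
}'
```

The curve pins 0 to 0 and 1 to 1, which is why the correction brightens the scene without blowing out the brightest rooftops.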
Once we are set with our color correction, we can apply it to all of the tiles using a color_correct function, which again uses Virtual Raster files to preserve each tile's original geometry information, which ImageMagick does not retain.
function color_correct() {
x="$1"
y="$2"
listgeo -tfw tiles/${x}_${y}.tif 2> /dev/null;
cp tiles/${x}_${y}.tfw color/${x}_${y}.tfw;
convert -modulate 100,130 -channel G -gamma 0.95 -channel B -gamma 0.875 -channel RGB -sigmoidal-contrast 11x40% tiles/${x}_${y}.tif -compress lzw color/${x}_${y}.tif 2> /dev/null;
gdal_translate \
-q \
-a_srs EPSG:3857 \
-a_nodata "0 0 0" \
-of VRT \
$(pwd)/color/${x}_${y}.tif \
color/${x}_${y}.vrt;
gdaladdo \
-q \
-r cubic \
color/${x}_${y}.vrt \
2 4 8 16;
}
mkdir -p color
for x in {1..6}; do
for y in {1..6}; do
color_correct $x $y;
done
done
Finally, build the VRT of the color-corrected tiles:
gdalbuildvrt \
color.vrt \
$(pwd)/color/*.vrt;
Map rendering with TileMill and upload.
Importing into TileMill is now as easy as selecting the file, then clicking export to render and upload to your hosting account on MapBox. If you don't have an account, you can make a free one here.
We love being able to integrate more imagery sources into our cloud publishing service. Make sure to check our processing tips for other leading imagery sources, like Landsat 8 or RapidEye (or other planets!). If you have any other satellite imagery you want to publish, let us know. Whether from archives or fresh from space, we are trying to be the easiest and fastest way to process and publish imagery.
Ping us on Twitter with ideas and feedback: Chris (@hrwgc), Charlie (@vruba), or me (@brunosan).
Financial Times Goes MapBox: Design Matters
The Financial Times' signature pink brand can now be found in its maps too. They completely nailed the new map design, integrating both the distinctive pink color scheme and their in-house font into their newly launched MapBox vector maps. More than just allowing brands like the Financial Times full custom control, this new vector tile technology does it at scale, letting the FT's global readership experience fast maps in every country, on any device, under any traffic load.
The new Financial Times maps in a recent analysis of the Syrian civil war.
This is best said by Martin Stabe, Financial Times journalist and interactive news team lead:
The new Financial Times maps are a key component to our expanding use of maps in the news. The beautiful custom design is an integral part of the experience on ft.com. The maps will be the canvas for everything from simple locator thumbnails to interactive news apps. The open standards of MapBox maps allow us to mix and match with other technologies depending on the project at hand allowing for the flexibility needed in a fast moving technology landscape.
From a simple pin orienting readers to a story to a base map for interactive data visualizations, the Financial Times' maps communicate the strong visual identity of the paper as a canvas for all things news.
The color palette is subdued; only place labels and roads at higher zoom levels stand out, and even then in a subtle way. The land inherits the warm, signature pink of the Financial Times - a color mixed into the map's entire palette. Together with the paper's in-house font, this makes the maps feel as though they belong exclusively to the Financial Times. White roads add a simple, familiar landscape, and their hierarchy is delimited only by line widths, limiting the number of colors distracting users from more important content. Labels for all features stay in the background while maintaining legibility.
Previously publishers haven't had full design control over maps, but this is quickly changing. While at the moment we are offering custom branded maps to a select few partners, expect us to roll out this feature soon for all MapBox plans. Contact us if you're interested in our partnership programs.
The new Financial Times maps use a subdued version of the Financial Times color palette and the paper's in-house fonts, resulting in a design that seamlessly integrates with ft.com while being an extremely versatile canvas for a variety of applications.
Good Morning San Francisco!
stretch good morning SAN FRANCISCO! Starting today I'm rocking it ~50% of the time out here w/ the growing @MapBox team.
Weekend Hack: Printing 3D Tiles
A friend recently got a Makerbot (3D printer) at his work, and he invited me to play around with it. I thought it'd be neat to see what printing a map "tile" might look like. Unfortunately, his printer broke down late last week, but luckily our Portland office space has a Makerbot they charge for time on. So, after a handful of nights messing around, and a little cash, I managed to print Mt Hood based on open elevation data:
This print is based on this model, which you can see rendered in 3D on Github:
How I did it
- Get elevation data. I found the TileMill help page about working with terrain data and picked SRTM, mostly because I was confused by NED. After figuring out how SRTM data is organized, I grabbed http://dds.cr.usgs.gov/srtm/version2_1/SRTM1/Region_01/N45W122.hgt.zip, which contains the area from Mt Hood to a bit north of the Columbia River.
- Fill in data gaps with `gdal_fillnodata.py`. SRTM data has holes here and there, particularly at the top of Mt Hood.
- Build a greyscale color-relief image based on the cleaned data with `gdaldem`.
- Crop the image to Mt Hood only. SRTM data comes as one degree by one degree chunks of the world, which is a little bigger than I wanted to print.
- Produce an STL model from the previous image. STL describes a 3D object with triangles. There's a neat tool for Linux called `png23d`, which will take a PNG and try to build a model out of it. It uses the greyscale value at each pixel to determine the z axis. It works pretty well.
- Repair the resulting model as best as possible. That STL will have all sorts of disconnected edges and places where more than two triangles meet. I used netfabb basic, which automatically removes most of those problems. Here's the model we got (wait for it to load). GitHub renders STL – pretty cool. You can zoom in and change views.
- Slice the STL to S3G for actual printing, and send it to the printer. S3G describes the actual coordinates where the printer should pump out plastic. We ended up using ReplicatorG, as MakerWare seemed to quit printing at unfortunate times.
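`png23d` does the heavy lifting in the STL step, but the core idea – each pixel's grey value becomes a z height, and each grid cell becomes two triangles – is easy to sketch. Here's a simplified illustration in Python (`heightmap_to_stl` is a hypothetical helper; it emits the top surface only, so unlike a real print model it isn't watertight):

```python
def heightmap_to_stl(heights, scale=1.0):
    """Convert a 2D grid of heights into ASCII STL facets.

    Emits two triangles per grid cell (top surface only --
    a real printable model also needs walls and a base).
    """
    rows, cols = len(heights), len(heights[0])
    facets = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            # Corner vertices of one grid cell; z comes from the heightmap.
            a = (c,     r,     heights[r][c] * scale)
            b = (c + 1, r,     heights[r][c + 1] * scale)
            d = (c,     r + 1, heights[r + 1][c] * scale)
            e = (c + 1, r + 1, heights[r + 1][c + 1] * scale)
            facets.append((a, b, d))  # first triangle of the cell
            facets.append((b, e, d))  # second triangle of the cell
    lines = ["solid heightmap"]
    for tri in facets:
        lines.append("  facet normal 0 0 1")
        lines.append("    outer loop")
        for x, y, z in tri:
            lines.append(f"      vertex {x} {y} {z}")
        lines.append("    endloop")
        lines.append("  endfacet")
    lines.append("endsolid heightmap")
    return "\n".join(lines)

# A 3x3 heightmap produces 2 * (3-1) * (3-1) = 8 triangles.
stl = heightmap_to_stl([[0, 1, 0], [1, 2, 1], [0, 1, 0]])
```

The disconnected-edge problems mentioned above come from steps like this one: adjacent cells share vertices only by coincidence of coordinates, which is exactly what repair tools like netfabb clean up.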
Bumps along the way
Some problems I ran into/things I did to fix problems:
- Printing is slow. This print took over two hours, and it's about 2.5" on a side.
- It can be a challenge to find the right `gdal` tool for the task at hand.
- Figuring out how to fill in gaps in spotty data.
- Producing a decent color map for `gdaldem`. Mine is OK.
- Generating STLs that don't have a bunch of problems. `png23d` seems pretty good, but we're asking a lot – check out how many faces the model has!
- What's up with the funky bump at the top of the mountain? That's from me not being good at data fixing.
- Sometimes the plastic extruders jam up, and printing has to start from scratch.
- Sometimes the model just breaks while being printed.
Here's a short video of things failing to print from a couple days back.
What can we do with this?
I am not sure, but some ideas:
- Automate the toolchain, so that it's easier to produce "tiles"
- Write something that breaks a large set of elevation data into a grid for printing. Example: you want a table-sized print of the North Cascades in Washington, so we break it up into manageable chunks for printing.
- Consider playing with building data in cities. Perhaps printing city blocks from building height data.
- Diorama contests
- Board game pieces
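The second idea above is mostly a chunking problem. As a sketch (`tile_grid` is a hypothetical helper, not an existing tool), splitting a large elevation grid into printable tiles could look like:

```python
def tile_grid(data, tile_size):
    """Split a 2D elevation grid into tile_size x tile_size chunks.

    Edge tiles may be smaller when the grid does not divide evenly.
    Returns a dict keyed by (row, col) tile index.
    """
    tiles = {}
    for r0 in range(0, len(data), tile_size):
        for c0 in range(0, len(data[0]), tile_size):
            tiles[(r0 // tile_size, c0 // tile_size)] = [
                row[c0:c0 + tile_size] for row in data[r0:r0 + tile_size]
            ]
    return tiles

# A 4x6 grid split into 2x2 tiles yields a 2x3 arrangement of chunks,
# each small enough to print and reassemble on a table.
grid = [[r * 6 + c for c in range(6)] for r in range(4)]
tiles = tile_grid(grid, 2)
```

Each chunk would then go through the same color-relief, `png23d`, and repair steps as the Mt Hood print.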
Tracking Mars Curiosity Rover
This weekend, in celebration of the Curiosity Rover's one-year anniversary on Mars, I made a map tracking the Rover's year-long journey across the Red Planet. It was also an excuse to dive back into my earlier Mars mapping, which produced the maps foursquare used for Curiosity Rover's check-in at Gale Crater. Check out the full story in today's Wired; what follows is a more technical writeup of how the maps were built as a straightforward web app.
Image Credit: NASA/JPL-Caltech
On Friday, a colleague from Wired reached out to me to see about getting Curiosity Rover GPS tracks to use in a map.
A traditional way of extracting this location information is to use SPICE data provided by NASA's Navigation and Ancillary Information Facility (NAIF). SPICE data is very detailed but a technical challenge to use for someone like me, who doesn't get to hack on planetary data all day, every day. Luckily, I found another way to get this information.
When I was in San Francisco last month for the Mozilla/KQED Mars Hackathon, a colleague mentioned the Mars Science Laboratory's raw imagery JSON feeds. These feeds are regularly updated and provide records of all the images captured by the Rover, by day. It took a bit of data wrangling to get the rover's daily location information (stored in a separate XML file from the image JSON feeds) to play nicely with the image feeds, but once it did, I had a geospatial and photographic record of the Rover's incredible past year.
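The join itself is simple once both feeds are parsed: index the location records by Sol and attach coordinates to each image record. A rough sketch with made-up field names – the real MSL feeds use their own schemas:

```python
def join_by_sol(locations, images):
    """Attach each image record to the rover location for its Sol.

    Field names here are illustrative -- the actual MSL raw-image
    JSON feed and the location XML use their own schemas.
    """
    loc_by_sol = {loc["sol"]: loc for loc in locations}
    joined = []
    for img in images:
        loc = loc_by_sol.get(img["sol"])
        if loc is not None:  # drop images with no known location
            joined.append({**img, "lat": loc["lat"], "lon": loc["lon"]})
    return joined

# Toy data: one Sol with a location, one image without.
locations = [{"sol": 1, "lat": -4.59, "lon": 137.44}]
images = [{"sol": 1, "url": "img1.jpg"}, {"sol": 2, "url": "img2.jpg"}]
records = join_by_sol(locations, images)
```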
Map Site
The finished map site takes advantage of our open source JavaScript library MapBox.js, which is tightly integrated with Leaflet. This integration makes live vector rendering easy and cross-browser compatible.
The map features a HiRISE imagery mosaic basemap, created from 25cm resolution HiRISE orthoimages of Gale Crater and the surrounding area. Imagery provided by NASA JPL/University of Arizona High Resolution Imaging Experiment, on board the Mars Reconnaissance Orbiter.
HiRISE Image Credit: NASA JPL / Univ. Arizona
A semi-opaque yellow line shows the Rover's overall journey so far. Markers along the way denote Sols (Martian days) for which I had location data. Visitors can either click a marker on the path or scroll through the overlay on the right to follow the rover as it journeys across the Red Planet.
Along the journey, a spotlight layer highlights the marker corresponding to the Sol described in the legend. The legend displays an image for each stop along the rover's path. Visitors can click on the image to see it in full-resolution at NASA/JPL, or click the link below it to see all of the images captured by the rover on that particular day.
The map site has a detailed Learn More section. Here, visitors can find information about the datasets used to make the map. I've included a methodology section describing how I generated the rover tracks dataset. My processing script, written in python, is publicly available on Github.
Track Curiosity
Check out the map site to follow Curiosity Rover's journey over the past year.
Header Image Credit: NASA/JPL-Caltech/MSSS
Happy Birthday OpenStreetMap!
OpenStreetMap's 9th birthday is coming up on August 9th. To celebrate, we'll be hosting the local DC community's birthday sprint this Saturday, August 10th, at the MapBox garage. Join in for a day of hacking and mapping on OpenStreetMap. Here's a list of things people will be working on. Just show up – no RSVP required. This is an event for experienced hands and first-timers alike.
OpenStreetMap Birthday Sprint DC
August 10th 12 - 6 PM
MapBox Garage DC
Faxing Outer Space
Today, requesting satellite imagery of anywhere in the world often requires a fax machine. Gaining access to some of the most high-tech tools in outer space is more like a game of telephone.
Typically, someone wanting to buy imagery navigates a field of imagery resellers and provides an "Area of Interest" and the timeframe in which they'd like the imagery. The reseller faxes this information to a satellite imagery provider, who responds with a quote. The reseller delivers that quote to the customer, who then decides whether or not to make the purchase.
MapBox Satellite Live is going to change this. With on-demand processes that remove the layers of communication, customers will gain direct access to satellite imagery – think of it as Twilio for satellites.
Above is an early look at the satellite tasking interface we are designing, which will transmit requests directly to the satellite companies – no fax machine needed.
Sign up to be a beta tester – we're launching soon.
Financial Times Satellite
Last week the Financial Times launched with new custom branded MapBox maps in their signature pink and in-house fonts. These new maps also come with a labels and streets layer specifically designed for MapBox Satellite. While stylistically very similar, the Financial Times satellite layer is not just labels on transparent backgrounds. Labels, text halos and stroke sizes have all been manually adjusted to ensure legibility across a variety of backgrounds and zoom levels. Road networks are indicated in a subtle manner, providing just enough context to orient viewers without obscuring imagery.
Expanding our Satellite Processing with Astrium’s SPOT 6
As we prepare our MapBox Satellite Live service, we're continuing to expand our satellite imagery processing to include the best and newest sensors available – this time with SPOT 6 from Astrium. SPOT 6 shares the same orbit as Pléiades, which we processed last week, but captures 8 times more area than Pléiades, at 1.5 meter/pixel resolution. It makes a very agile companion to Pléiades, and a strong choice whenever you are interested in a wide area.
The region we are processing is around Morro da Boa Vista, Brazil. On July 22nd, this region saw snow – something that has been recorded only twice before, in 2005 and 1965. The snow melted within a few days, but SPOT 6 snapped this image the morning after the snowfall, on July 23rd at 10 a.m. local time.
The SPOT 6 satellite shares the same orbit as the Pléiades twins (which we covered last week), only offset from them along the orbit. Pléiades offers 20 km × 20 km images at 0.5 m resolution, while SPOT captures 60 km × 60 km footprints at 1.5 m. Together with SPOT 7 – the twin of SPOT 6, due to launch this year – these four satellites, evenly spaced on the same orbit, cover the whole earth twice daily.
This combination of satellites responds to different needs for contextual or focused images. For example, wide area flooding can be imaged every day by SPOT satellites, while Pléiades can complement this with very high resolution updates of populated areas and daily revisit capability.
Astrium’s constellation offers daily revisit of any point, a ground segment with 6 daily uplink opportunities, and 6 million km² of imagery captured every day.
Whether you task SPOT 6 or acquire an image from the massive SPOT archive, the source imagery can be processed into a finished map in a matter of minutes. Astrium provides a very useful technical description of SPOT 6 images. Our source imagery was delivered pansharpened and orthorectified by Astrium, divided into four tile subsets. Below, I'll walk through processing each tile.
The step-by-step process is similar to the one we covered last week:
- Reproject into Web Mercator tiles
- Adjust color
- Render with TileMill and upload to MapBox.com
1. Install all the software you need.
If you are on OS X, this should get you going (use `sudo apt-get` instead of `brew` if you are on Ubuntu):

```
brew install imagemagick --with-libtiff
```
2. Reproject to Web Mercator
I'm going to remove the 4th band, since in this particular guide we are not interested in the infrared channel. In this step I also specify a Byte data type (rather than the original integer type) and scale all images to the same intensity range. I used `gdalinfo -stats` on the images to take a look at the brightness values. An average around 200 and a maximum around 3000 already tells me that the image is on the dark side (shadowed valleys) but with bright spots (snowy peaks). This will make the color correction a bit complicated, since we want to retain clear detail across intensity levels. Lastly, on most of these steps we'll be using virtual rasters to save a lot of time and reprocessing. Here's the complete translation command:
```
gdal_translate \
  -b 1 \
  -b 2 \
  -b 3 \
  -ot Byte \
  -scale 0 3000 0 255 \
  -co PHOTOMETRIC=RGB \
  -of VRT \
  $INPUT \
  ${INPUT%.tif}.vrt;
```
Note: we use the bash `%` substring removal here to give the output file a different filename from the input image. This is a convenient way to derive input and output from a single variable.
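The `-scale 0 3000 0 255` flag in the command above performs a linear rescale with clipping into the 8-bit output range. In Python terms, each pixel goes through something like this (`scale` is an illustrative helper, not a GDAL function):

```python
def scale(value, src_min=0, src_max=3000, dst_min=0, dst_max=255):
    """Linear rescale with clipping, like gdal_translate -scale."""
    t = (value - src_min) / (src_max - src_min)
    t = min(max(t, 0.0), 1.0)  # clip to the destination range
    return round(dst_min + t * (dst_max - dst_min))

# The image's average brightness (~200 out of ~3000) lands near the
# bottom of the 8-bit range, confirming the "dark side" observation.
eight_bit_average = scale(200)
```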
We use `gdalwarp` to transform the imagery from `EPSG:32722`, its original projection (which can be found with `gdalinfo` – it will be different for different imagery), into Web Mercator, `EPSG:3857`. We also take this opportunity to add overviews, which speed up some operations.
```
function warp() {
  INPUT=$1;
  OUTPUT=$2;
  gdalwarp \
    -q \
    -s_srs EPSG:32722 \
    -t_srs EPSG:3857 \
    $INPUT \
    $OUTPUT;
  gdaladdo \
    -q \
    -r cubic \
    $OUTPUT \
    2 4 8 16;
}
```
To invoke this function, run `warp $INPUT ${INPUT%.tif}_3857.tif` for each tile.
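Under the hood, the reprojection into EPSG:3857 is the spherical Mercator mapping. `gdalwarp` also handles the UTM source projection, resampling, and nodata, but the target math can be sketched as:

```python
import math

R = 6378137.0  # sphere radius used by EPSG:3857, in meters

def to_web_mercator(lon_deg, lat_deg):
    """Project longitude/latitude in degrees to EPSG:3857 meters."""
    x = R * math.radians(lon_deg)
    y = R * math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2))
    return x, y

# The projection is centered on (0, 0), and the antimeridian lands
# at the familiar Web Mercator extent of ~20,037,508 m.
origin = to_web_mercator(0, 0)
edge = to_web_mercator(180, 0)
```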
3. Color Adjustment
The last step is to adjust the color of the rasters. This one is particularly tricky.
The region we are observing has deep valleys and many shades of green. The local sun elevation is 30° and thus shadows appear, giving the darkest parts of the image very fine contrast levels. On top of that, the peaks have highly reflective snow, so the brightest parts of the image also have very fine contrast levels. After a few iterations I found this `convert` combination, which increases the imagery's dynamic range across intensities, avoids saturation at each level, and keeps colors natural.
convert \
-modulate 100,120 \
-channel B -gamma 0.85 \
-channel RGB \
-sigmoidal-contrast 9x10% \
-gamma 1.2
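To see roughly what these flags do to a normalized pixel value: ImageMagick's `-gamma g` maps an intensity `i` to `i**(1/g)`, so `g > 1` lifts midtones and `g < 1` darkens them (`-modulate` and `-sigmoidal-contrast` apply similar per-pixel curves). A minimal sketch of just the gamma step:

```python
def apply_gamma(value, gamma):
    """Gamma-adjust a pixel intensity normalized to [0, 1].

    Mirrors ImageMagick's -gamma: i -> i ** (1 / gamma), so
    gamma > 1 brightens midtones and gamma < 1 darkens them.
    """
    return value ** (1.0 / gamma)

# -channel B -gamma 0.85 pulls down the blue channel's midtones,
# while the final -gamma 1.2 lifts the whole image.
darker_blue = apply_gamma(0.5, 0.85)
lifted_midtone = apply_gamma(0.5, 1.2)
```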
ImageMagick's `convert`, however, does not understand geocoordinates on images, so we need to recover those after adjusting the color. To do that we copy over the `.tfw` world file and use `gdal_translate` with the color-adjusted tiles. As before, we create overviews for each tile. The combined process looks like this:
```
function color_correct() {
  INPUT=$1;
  mkdir -p color;
  listgeo -tfw $INPUT 2> /dev/null;
  cp ${INPUT%.tif}.tfw color/${INPUT%.tif}.tfw;
  convert \
    -modulate 100,120 \
    -channel B -gamma 0.85 \
    -channel RGB \
    -sigmoidal-contrast 9x10% \
    -gamma 1.2 \
    $INPUT \
    -compress lzw \
    color/$INPUT 2> /dev/null;
  gdal_translate \
    -q \
    -a_srs EPSG:3857 \
    -a_nodata "0 0 0" \
    -of VRT \
    color/$INPUT \
    color/${INPUT%.tif}.vrt;
  rm color/$INPUT;
  gdaladdo \
    -q \
    -r cubic \
    color/${INPUT%.tif}.vrt \
    2 4 8 16;
}
```
After color-correcting each tile by running `color_correct ${INPUT%.tif}_3857.tif` on the output of the warping step above, the last step is to create a virtual raster of the finished tiles:
```
gdalbuildvrt \
  color.vrt \
  $(pwd)/color/*.vrt;
```
With our mosaic VRT created, we are ready to add the SPOT 6 imagery to TileMill. The CartoCSS is just a couple of declarations:
```
#color {
  raster-opacity: 1;
  raster-scaling: lanczos;
}
```
We use `raster-scaling: lanczos` in the CartoCSS to specify how Mapnik interpolates pixel values.
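Lanczos is a windowed-sinc resampling kernel: each output pixel is a weighted average of nearby input pixels, with weights drawn from the kernel below (`a` is the window size, commonly 2 or 3). A sketch of the kernel itself:

```python
import math

def lanczos(x, a=2):
    """Lanczos kernel: sinc(x) * sinc(x/a) inside the window, else 0."""
    if x == 0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

# The kernel is 1 at the sample itself and 0 at every other integer
# offset, which is why it preserves detail well when downscaling.
center_weight = lanczos(0)
```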
Finished Map
Check out our increasing array of processing guidelines and drop us a note if you have other sources. We're also streamlining the tasking system to get your data from space as quickly and easily as possible – no fax machine required.
Ping us on Twitter with ideas and feedback: Chris (@hrwgc), Charlie (@vruba), or myself (@brunosan).