Parsing USGS BIL Digital Elevation Models in Python

A false colour elevation map of the Hawaiian island of Kahoolawe

The USGS has a digital elevation model covering most of the US, Hawaii and Puerto Rico. Using the National Map Seamless Server, you can select a region and download a digital elevation model in a number of formats. One of these, BIL, is a simple binary raster file which can be read quite easily in Python – I’ll base this on code from a previous post.

Getting the data

  • Go to National Map Seamless Server
  • The map is fairly intuitive, but if you get stuck check out their tutorial
  • When you use the download rectangle tool, make sure to click on “Modify data request” and choose the BIL data format (BIL_INT16) rather than the default “Arc_GIS”
  • Choose zip or TGZ
  • You’ll then download a compressed archive – may take a while
  • The archive contains lots of files; look at the contents of output_parameters.txt. This tells you the dimensions of the file. Make a note of the sizes.
  • Extract the .bil file – this is the file that holds the raw data.
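As a sketch, the dimensions could even be parsed rather than noted by hand – assuming output_parameters.txt uses simple “KEY value” lines with names like NROWS and NCOLS (hypothetical names; check what your own download actually contains):

```python
# Hypothetical sketch: read the raster dimensions from the metadata file
# instead of hard-coding them. Assumes "KEY value" lines with key names
# like NROWS/NCOLS -- check the names in your own output_parameters.txt.

def read_dimensions(text):
    values = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) == 2:
            values[parts[0].upper()] = parts[1]
    return int(values["NROWS"]), int(values["NCOLS"])

sample = "NROWS 437\nNCOLS 663\nNBANDS 1"
height, width = read_dimensions(sample)
assert (height, width) == (437, 663)
```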

Extracting the binary elevation data from the BIL file

# USGS Seamless Server BIL Format Parser
#

import struct

# get these from the output_parameters.txt file
# included in the download
height=437
width=663

# where you put the extracted BIL file
# where you put the extracted BIL file
fi=open(r"my-extracted.bil","rb")
contents=fi.read()
fi.close()

# unpack binary data into a flat tuple z
# note: BIL_INT16 holds 16-bit values; if your elevations come out wrong,
# try the signed format code "h" in place of the unsigned "H"
s="<%dH" % (width*height,)
z=struct.unpack(s, contents)

After running this, z is a tuple of width*height elements, with the data stored in raster order (row 1 from left to right, then row 2 from left to right, and so on).
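As a quick sanity check of the raster-order layout, a tiny synthetic grid shows the index arithmetic:

```python
# Tiny synthetic raster: 3 columns wide, 2 rows high, stored row by row
width = 3
z = (10, 11, 12,   # row 0, left to right
     20, 21, 22)   # row 1, left to right

def elevation_at(z, width, r, c):
    # raster order: row r starts at flat index width*r
    return z[width * r + c]

assert elevation_at(z, width, 0, 2) == 12  # row 0, rightmost column
assert elevation_at(z, width, 1, 0) == 20  # row 1, leftmost column
```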

The struct module for reading binary data

struct is a fantastic Python module; it makes reading binary files pretty easy. Say we have a raster-layout binary file of 50 rows by 100 values per row – 5000 values in all. Each value is an unsigned short (H), stored little-endian (<).

In this case, we build up a format specification like this
“<5000H”
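A quick round trip through pack and unpack shows the format string at work:

```python
import struct

# pack six little-endian unsigned shorts, then unpack them again
values = (0, 1, 2, 500, 65535, 42)
fmt = "<%dH" % len(values)        # "<6H"
raw = struct.pack(fmt, *values)   # two bytes per value
assert len(raw) == 12
assert struct.unpack(fmt, raw) == values
```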

Plotting the end result

Optionally, if you have the matplotlib and numpy libraries, you can get a quick preview of the resulting array.

from pylab import *
import numpy as np
import matplotlib.cm as cm

heights = np.zeros((height,width))
for r in range(0,height):
    for c in range(0,width):
        elevation=z[((width)*r)+c]
        if (elevation==65535 or elevation<0 or elevation>20000):
            # may not be needed depending on format, and the "magic number"
            # value used for 'void' or missing data
            elevation=0.0
        heights[r][c]=float(elevation)
imshow(heights, interpolation='bilinear',cmap=cm.prism,alpha=1.0)
grid(False)
show()
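If you have numpy anyway, the nested loops above can be replaced with a reshape plus a boolean mask. This is an alternative sketch, not the original code; it assumes the same 65535/out-of-range “void” convention:

```python
import numpy as np

width, height = 3, 2
z = (10, 65535, 12,
     20, 21, 30000)  # flat raster-order tuple, as produced by struct.unpack

# reshape the flat tuple into a 2D array, then zero out void/implausible values
heights = np.array(z, dtype=float).reshape(height, width)
heights[(heights == 65535) | (heights > 20000)] = 0.0

assert heights.shape == (2, 3)
assert heights[0][1] == 0.0   # the 65535 void value
assert heights[1][2] == 0.0   # the implausible 30000
```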

Using R and ggplot2 to create a timeline visualization of UK government politics

creating an abstract timeline chart with R and ggplot

This was an attempt to visualise the ebb and flow of various political parties in the UK, over time.

The data was made available on the Guardian DataBlog, so I thought I’d have a go at producing a timeline image, using vertical strips of colour to show which party was in power in each year.

I’m not an expert in R, so there may well be a better way of doing this!

The steps taken were:-

  • download the OpenOffice spreadsheet from the Google Spreadsheet linked to on the Guardian DataBlog.
  • remove footer rows (copyright notices at the foot of the spreadsheet).
  • tidy up headers in row 1 – simplify these to single words like “Party”
  • save as a CSV, using the pipe (|) character as a delimiter. I put this in c:\infoviz\data.csv

First of all, we need to load the CSV file into a data frame

inp <- read.table("c:\\infoviz\\data.csv",header=T, sep="|")

This works fine, but the column ‘Party’ contains names like “Conservative”, “Labour” and “Liberal”.

How to convert this into something a bit more abstract, like a colour? The answer was to define a palette of colours…

palette <- c("blue", "red", "orange1", "rosybrown", "orange3", "red4", "blue4", "grey") 

.. and use a function to change a party name to a colour mentioned in that array…

getcol <- function(party) {
    if ( party=="Conservative" | party=="Tory" )
        r<-1
    else if ( party=="Labour" )
        r<-2
    else if ( party=="Liberal" )
        r<-3
    else if ( party=="Whig" )
        r<-4
    else if ( party=="National Labour National Government" )
        r<-6
    else if ( party=="Conservative National Government" )
        r<-7
    else
        r<-8
    return(palette[r])
}

Now, we can use the sapply function to map the values in the “party” column of the data frame into a colour..

inp$PartyCol <- sapply(inp$Party,getcol)

Now, the data frame has a column called ‘PartyCol’ which maps “Conservative” to “blue”, “Labour” to “red” and so on.
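As an aside, the same name-to-colour lookup in Python would be a dictionary with a default, which plays the role of the final else branch (a side sketch for comparison, not part of the R workflow):

```python
# The palette lookup as a Python dictionary; dict.get's default plays the
# role of the final "else" branch in the R function
palette = {"Conservative": "blue", "Tory": "blue",
           "Labour": "red", "Liberal": "orange1", "Whig": "rosybrown"}

def getcol(party):
    return palette.get(party, "grey")  # unknown parties fall through to grey

assert getcol("Labour") == "red"
assert getcol("Tory") == "blue"
assert getcol("Monster Raving Loony") == "grey"
```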

So now, ggplot…

qplot(factor(Year), main="history", data=inp, geom="bar", fill=factor(PartyCol), colour=factor(PartyCol)) + scale_fill_identity() + scale_colour_identity()

This shows a whole lot of labels and grids that I didn’t want…,

…so I added these options to give a clear, blank theme…

+ opts(panel.background = theme_rect(size = 1, colour = "lightgray"),panel.grid.major = theme_blank(),panel.grid.minor = theme_blank(),axis.line = theme_blank(),axis.text.x = theme_blank(),axis.text.y = theme_blank(),axis.title.x = theme_blank(),axis.title.y = theme_blank(), axis.ticks = theme_blank(),strip.background = theme_blank(),strip.text.y = theme_blank())

Here’s the whole source, if you’re brave enough to want to try this yourself!

library(ggplot2)
palette <- c("blue", "red", "orange1", "rosybrown", "orange3", "red4", "blue4", "grey") 
getcol <- function(party) {
    if ( party=="Conservative" | party=="Tory" )
        r<-1
    else if ( party=="Labour" )
        r<-2
    else if ( party=="Liberal" )
        r<-3
    else if ( party=="Whig" )
        r<-4
    else if ( party=="National Labour National Government" )
        r<-6
    else if ( party=="Conservative National Government" )
        r<-7
    else
        r<-8
    return(palette[r])
}
inp <- read.table("c:\\infoviz\\data.csv",header=T, sep="|")
inp$PartyCol <- sapply(inp$Party,getcol)
qplot(factor(Year), main="history", data=inp, geom="bar", fill=factor(PartyCol), colour=factor(PartyCol)) + scale_fill_identity() + scale_colour_identity() + opts(panel.background = theme_rect(size = 1, colour = "lightgray"), panel.grid.major = theme_blank(), panel.grid.minor = theme_blank(), axis.line = theme_blank(), axis.text.x = theme_blank(), axis.text.y = theme_blank(), axis.title.x = theme_blank(), axis.title.y = theme_blank(), axis.ticks = theme_blank(), strip.background = theme_blank(), strip.text.y = theme_blank())

Plotting postcode density heatmaps in R

April 21, 2010

Postcode density heatmap for Edinburgh

Here in the UK, postcode geodata was recently released as part of the OS Opendata initiative. Although not the full postcode address file (PAF), it’s enough to be useful for visualisation purposes!

I thought it’d be interesting to plot the density of postcodes in a heatmap, as a way of getting a visualisation of population density.

I downloaded a copy from MySociety. The raw data uses the Ordnance Survey coordinate system (eastings and northings), but MySociety also provide a version with coordinates converted to the WGS84 datum (latitude and longitude). Better still, the data comes as one file per postcode area – all the EH postcodes go in one file, for example.

So how to plot this?

I’ve been using R with the ggplot2 library for visualisations; it’s an amazing toolset, and you’d be surprised how few lines of code it takes to get results.

First of all, download the data.

Now, extract the data for the EH postcode area.

The first step is to load the extracted CSV file into a data frame. There are no column headers (header=F) and the file is comma-separated (sep=",").

inp <- read.table("c:\\infoviz\\ehpostcode.csv",header=F, sep=",")

Next, a quick hack to remove certain postcodes which seem way out of the region of Edinburgh. I’ve yet to work out what these stray postcodes are.

inp <- subset(inp,inp$V10 != "")

Now we bring in the ggplot2 library..

library(ggplot2)

Now the plot itself.

m <- qplot(xlab="Longitude",ylab="Latitude",main="EH Postcode heatmap",geom="blank",x=inp$V3,y=inp$V4,data=inp)  + stat_bin2d(bins =200,aes(fill = log1p(..count..))) 

The stat_bin2d does the binned heatmap plot. It splits the area into a grid of 200×200 (‘bins’) and counts the number of postcodes in each grid square. These counts are then scaled logarithmically (fill = log1p(..count..)).
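The same binning idea can be sketched in Python with numpy’s histogram2d, for comparison (made-up points, not the postcode data):

```python
import numpy as np

# ten made-up points on the unit square, clustered in two corners
lon = np.array([0.0, 0.1, 0.1, 0.2, 0.9, 0.9, 0.9, 0.9, 0.5, 0.5])
lat = np.array([0.0, 0.0, 0.1, 0.1, 0.9, 0.9, 0.8, 0.8, 0.5, 0.5])

# count points falling in each cell of a 2x2 grid
counts, xedges, yedges = np.histogram2d(lon, lat, bins=2, range=[[0, 1], [0, 1]])

# log1p compresses the range: empty cells stay 0, dense cells are damped
scaled = np.log1p(counts)

assert counts.sum() == 10
assert counts[0][0] == 4.0  # four points in the lower-left cell
```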

Finally, we plot this puppy.

m 

That’s it! Five lines of code.

Here’s the whole chunk..

inp <- read.table("c:\\infoviz\\ehpostcode.csv",header=F, sep=",")
inp <- subset(inp,inp$V10 != "")
library(ggplot2)
m <- qplot(xlab="Longitude",ylab="Latitude",main="EH Postcode heatmap",geom="blank",x=inp$V3,y=inp$V4,data=inp)  + stat_bin2d(bins =200,aes(fill = log1p(..count..))) 
m 

If you set the boundaries of the x and y axes, you can then use the resulting image as an overlay in Google Earth.

I found that the heatmap is a reasonably good proxy for population density – but with exceptions. There are hotspots that turn out to be postal sorting offices – PO boxes and poste restante.

Plotting points on an OpenStreetMap Export

February 24, 2010

This map shows the location of pictures in a Flickr group superimposed on an OpenStreetMap (OSM) export.

If you try plotting points directly on an OSM map, you’ll find that points are all over the shop. The reason is that OSM exports use the Mercator projection; you need to change the latitude and longitude coordinates into the Mercator projection.
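The mismatch comes straight from the projection maths: Mercator keeps x proportional to longitude, but stretches y as ln(tan(π/4 + φ/2)), so latitude spacing grows as you move away from the equator. A minimal sketch:

```python
import math

def mercator_y(lat_deg):
    # Mercator: y = ln(tan(pi/4 + phi/2)), with phi the latitude in radians
    phi = math.radians(lat_deg)
    return math.log(math.tan(math.pi / 4 + phi / 2))

# the equator maps to y = 0...
assert abs(mercator_y(0.0)) < 1e-12
# ...but a degree of latitude near Edinburgh spans far more y than one at
# the equator, which is why unprojected points drift as you move north
step_equator = mercator_y(1.0) - mercator_y(0.0)
step_edinburgh = mercator_y(57.0) - mercator_y(56.0)
assert step_edinburgh > 1.7 * step_equator
```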

Basemap to the rescue!

If you use the code in the previous post, you can change the plotting code.

from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
import matplotlib.pylab as pylab
import numpy as np
import string
import matplotlib.cm as cm

x=[] #longitudes
y=[] #latitudes

fi=open(r'C:\infoviz\scotland_photos.csv','r')

linenum=0
for line in fi:
    if linenum>0:
        line=string.replace(line, "\n","")
        try:
            fields=string.split(line,",")
            lon,lat=fields[0:2]
            x.append(float(lon))
            y.append(float(lat))
        except:
            print "Error!"
    linenum+=1
fi.close()

m = Basemap(llcrnrlon=-8.0,llcrnrlat=54.5,urcrnrlon=1.5,urcrnrlat=59.5,lat_ts=20,
            resolution='h',projection='merc',lon_0=-4.36,lat_0=54.5)
x1,y1=m(x,y)
m.drawmapboundary(fill_color='white') # fill to edge
m.scatter(x1,y1,s=5,c='r',marker="o",cmap=cm.jet,alpha=1.0)
plt.show()

Rather than using basemap to draw the outline of Scotland, this script simply creates a scatter plot on a white background, like so:-

Now, you need to export the map from OpenStreetMap.

The corners have been set as follows..

… llcrnrlon=-8.0,llcrnrlat=54.5,urcrnrlon=1.5,urcrnrlat=59.5 …

llcrnr stands for the lower-left corner, and urcrnr for the upper-right corner.

So if you export from OpenStreetMap using these coordinates…

.. you’ll have a map of Scotland with the Mercator projection.

The two images can then be composited in Photoshop or another graphics app like the Gimp.

Creating a pinboard map of geotagged photos in a flickr pool

January 14, 2010

In this post I’ll show how to produce a simple pinboard map of geotagged photos in a flickr group pool, using Python and Basemap/Matplotlib. You’ll need a flickr API key, plus the Basemap and Matplotlib libraries.

There are two short scripts here:-

  • A script to find the longitude and latitude of geotagged photos in the group pool
  • A script to generate the plot

The first script produces a CSV file; the second uses this CSV file to produce the plot.

Here’s the script to produce the CSV file with photo locations:-

# -*- coding: UTF8 -*-
'''
Created on 12 May 2009
Based on beejs flickr API
Produce a list of photo locations for a given
group's pool on flickr
@author: Steven Kay
'''

import flickrapi
import string
import time

# Enter your API key below
# You can apply for an API key at 
# http://www.flickr.com/services/apps/create/apply
api_key = '' 

# paste group NSID below
group = '1124494@N22'

# sample... fetch your latest images with 
# a count of the views, faves and comments

if __name__ == '__main__':
    flickr = flickrapi.FlickrAPI(api_key)
    response_photos = flickr.groups_pools_getPhotos(group_id=group,per_page=500,extras='geo')
    root=response_photos.findall('.//photos')
    pages=int(root[0].get('pages'))
    if pages>8:
        # stop after 8 pages of 500 images
        # not sure if groups.pools.getPhotos has the same
        # 4000 image limit as photos.search..?
        pages=8
    
    fo=open(r"C:\infoviz\scotland_photos.csv","w")
    print "Longitude,Latitude"
    fo.write("Longitude,Latitude\n")
    for page in range(1,pages+1): # flickr API pages are numbered from 1
        response_photos = flickr.groups_pools_getPhotos(group_id=group,per_page=500,page=str(page),extras='geo') 
        for photo in response_photos.findall(".//photos/photo"):
            try:
                lat=photo.get('latitude')
                lon=photo.get('longitude')
                st="%s,%s" %(lon,lat)
                if not st=="0,0":
                    # ignore the odd buggy 0,0 coords
                    print "%s,%s" %(lon,lat)
                    fo.write("%s,%s\n" %(lon,lat))
            except:
                pass
        time.sleep(1)
    fo.close()

You’ll need the group’s NSID as an input; you can find this with the flickr API call flickr.groups.search.

Now, you have a simple CSV file with the latitude and longitude of each geotagged image in the pool.

Longitude,Latitude
-5.167792,58.352519
-4.024359,57.675544
-4.230251,57.497356
-4.2348,57.501045
-4.84703,56.646034
-4.306168,55.873986
-3.586263,56.564732
...

This demo uses the Photography Guide to Scotland pool.

The next step is to plot the map.

'''
Simple Matplotlib/Basemap pinboard map for
Flickr Groups.

Need to provide a CSV file in following format

Longitude,Latitude
20.1,-3.25
20.225,-3.125
.. etc..

Created on 10 Oct 2009

@author: Steven Kay
'''

from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
import matplotlib.pylab as pylab
import numpy as np
import string
import matplotlib.cm as cm

x=[] #longitudes
y=[] #latitudes

fi=open(r'C:\infoviz\scotland_photos.csv','r')

linenum=0
for line in fi:
    if linenum>0:
        line=string.replace(line, "\n","")
        try:
            fields=string.split(line,",")
            lon,lat=fields[0:2]
            x.append(float(lon))
            y.append(float(lat))
        except:
            pass
    linenum+=1
fi.close()

# cass projection centred on scotland
# will need to replace with a projection more suited
# to the group you're plotting

m = Basemap(llcrnrlon=-8.0,llcrnrlat=54.5,urcrnrlon=1.5,urcrnrlat=59.5,
            resolution='h',projection='cass',lon_0=-4.36,lat_0=54.5)
x1,y1=m(x,y)
m.drawmapboundary(fill_color='cyan') # fill to edge
m.drawcountries()
m.drawrivers() # you may want to turn this off for larger areas like continents
m.fillcontinents(color='white',lake_color='cyan',zorder=0)
m.scatter(x1,y1,s=5,c='r',marker="o",cmap=cm.jet,alpha=1.0)

plt.title("Photography Guide to Scotland on Flickr") # might want to change this!
plt.show()

This script uses a projection centred on Scotland; you’ll need to change the following line…

m = Basemap(llcrnrlon=-8.0,llcrnrlat=54.5,urcrnrlon=1.5,urcrnrlat=59.5,
            resolution='h',projection='cass',lon_0=-4.36,lat_0=54.5)

…to something more suitable for your area. Basemap provides an intimidatingly long list of projections, so there should be one that fits.

Processing Sketches

January 11, 2010 Leave a comment

I’ve been experimenting with the Processing Language recently. It’s quite addictive 🙂

Here are a few tasters… click through to see the animation (needs Java) and the source.

ripples processing sketch

You can see the rest of my Processing portfolio on Openprocessing.org

Categories: Processing

World endangered species map

October 25, 2009

Visualizing the number of endangered species worldwide.

Outer circle represents number of species in total; green inner circle is the proportion that are plant species.

Sources of data: the Guardian Data Blog; country locations from the CIA World Factbook. Plotted using Python/matplotlib with the Basemap extension.

Categories: Basemap, Infovis, Matplotlib