Amy Whitehead's Research

the ecological musings of a conservation biologist


Extracting raster data using a shapefile

I recently had an email from a PhD student in Austria who had a raster with presence (1) and absence (0) of Douglas-fir in Europe and wanted to know what proportion of each European country was covered by this species. This meant calculating the number of cells with 1 and 0 within each country of Europe. I’ve put together a dummy example below which shows how to use an R script to extract the number of raster cells in each country that meet a certain condition.


Douglas Fir (source: Pixabay)

Essentially the script works through the following steps:

  1. Loads the relevant shapefile and raster datasets.
  2. Identifies all of the countries within the shapefile.
  3. Within a loop, masks the presence-absence raster by each country and counts the number of cells that meet the required condition.
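To make the mask-and-count step concrete, here is a minimal sketch using the raster and sp packages, with a toy presence-absence raster and two rectangular "countries" standing in for the real shapefile. All names and data below are invented for illustration; the real script would loop over the polygons read in from the country shapefile instead.

```r
library(raster)
library(sp)

# a toy 10 x 10 presence-absence raster (1 = present, 0 = absent)
set.seed(42)
pa <- raster(nrows = 10, ncols = 10, xmn = 0, xmx = 10, ymn = 0, ymx = 10)
values(pa) <- sample(0:1, ncell(pa), replace = TRUE)

# two toy "country" polygons standing in for the country shapefile
country.A <- Polygons(list(Polygon(cbind(c(0, 5, 5, 0), c(0, 0, 10, 10)))), "A")
country.B <- Polygons(list(Polygon(cbind(c(5, 10, 10, 5), c(0, 0, 10, 10)))), "B")
countries <- SpatialPolygons(list(country.A, country.B))

# loop over the countries, mask the raster by each one and
# count the cells that are 1 (presence) and 0 (absence)
counts <- t(sapply(seq_along(countries), function(i) {
  masked <- mask(pa, countries[i])
  c(present = cellStats(masked == 1, sum),
    absent  = cellStats(masked == 0, sum))
}))
rownames(counts) <- c("A", "B")
counts
```

From the counts you can then calculate the proportion of each country covered, e.g. `counts[, "present"] / rowSums(counts)`.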



Converting shapefiles to rasters in R

I’ve been doing a lot of analyses recently that need rasters representing features in the landscape. In most cases, these data have been supplied as shapefiles, so I needed to quickly extract parts of a shapefile dataset and convert them to a raster in a standardised format. Preferably with as little repetitive coding as possible. So I created a simple and relatively flexible function to do the job for me.

The function requires two main input files: the shapefile (shp) that you want to convert and a raster that represents the background area (mask.raster), with your desired extent and resolution. The value of the background raster should be set to a constant value that will represent the absence of the data in the shapefile (I typically use zero).

The function steps through the following:

  1. Optional: If shp is not in the same projection as mask.raster, set the current projection (proj.from) and then transform the shapefile to the new projection using transform=TRUE.
  2. Convert shp to a raster based on the specifications of mask.raster (i.e. same extent and resolution).
  3. Set the value of the cells of the raster that represent the polygon to the desired value.
  4. Merge the raster with mask.raster, so that the background values are equal to the value of mask.raster.
  5. Export as a tiff file in the working directory with the label specified in the function call.
  6. If desired, plot the new raster using map=TRUE.
  7. Return as an object in the global R environment.

The function is relatively quick, although it is somewhat dependent on how complicated your shapefile is. The more individual polygons that need to be filtered through and extracted, the longer it will take.
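A minimal sketch of what such a function might look like, assuming the raster and sp packages. The function and argument names here are illustrative and won't exactly match the code from the post; the reprojection branch is only taken when transform=TRUE.

```r
library(raster)
library(sp)

# illustrative sketch of a shapefile-to-raster function; argument names
# are my own, not necessarily those in the original code
shp.to.raster <- function(shp, mask.raster, label = "", value = 1,
                          transform = FALSE, proj.from = NA, proj.to = NA,
                          map = FALSE) {
  # 1. optionally set the current projection and reproject the shapefile
  if (transform) {
    proj4string(shp) <- proj.from
    shp <- spTransform(shp, proj.to)
  }

  # 2 & 3. convert the shapefile to a raster on the mask's grid,
  # setting cells covered by the polygons to the desired value
  r <- rasterize(shp, mask.raster, field = value)

  # 4. merge with the mask so background cells keep the mask's value
  r <- merge(r, mask.raster)

  # 5. label and export as a tiff in the working directory
  names(r) <- label
  writeRaster(r, filename = label, format = "GTiff", overwrite = TRUE)

  # 6. optionally plot the new raster
  if (map) plot(r, main = label)

  # 7. return the raster object
  r
}
```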


Creating a presence-absence raster from point data

I’m working on generating species distribution models at the moment for a few hundred species. Which means that I’m trying to automate as many steps as possible in R to avoid having to click buttons hundreds of times in ArcView.

One of the tasks that I need to do is to convert presence-only latitude and longitude data into a presence-absence raster for each species. It seems like this would be relatively simple, but it took me longer than it should have to figure it out. So I’m posting my code here: 1) so I don’t forget how I did it; and 2) because someone asked me how to do exactly this thing this afternoon and it took me ages to hunt through my poorly organised files to find this piece of code! So here it is:

Because I’m a function kinda girl, I wrote this as a function. It basically goes through three steps:

1. Take an existing raster of the area you are interested in (mask.raster) and set the background cells to zero (absences).

2. rasterize the presence points for your species and set those cells to one (presences).

3. Label the new raster with your species name (raster.label) and save it as a new raster.

presence.absence.raster <- function (mask.raster, species.data, raster.label="") {
  require(raster)

  # set the background cells in the raster to 0
  mask.raster[!is.na(mask.raster)] <- 0

  # set the cells that contain points to 1
  speciesRaster <- rasterize(species.data, mask.raster, field=1)
  speciesRaster <- merge(speciesRaster, mask.raster)

  # label the raster
  names(speciesRaster) <- raster.label
  return(speciesRaster)
}

Below is an example of how the function works, using data on the global distribution of foxes from the biomod2 package.


# read in species point data and extract data for foxes
mySpecies <- read.csv(system.file("external/species/mammals_table.csv", package="biomod2"), row.names = 1)
species <- "VulpesVulpes"

# extract fox data from larger dataset and keep only the x and y coordinates
species.data <- mySpecies[, c("X_WGS84", "Y_WGS84", species)]
species.data <- species.data[species.data$VulpesVulpes == 1, c("X_WGS84", "Y_WGS84")]

# read in a raster of the world
myRaster <- raster(system.file( "external/bioclim/current/bio3.grd",package="biomod2"))

# create presence absence raster for foxes
pa.raster <- presence.absence.raster(mask.raster=myRaster, species.data=species.data, raster.label=species)
plot(pa.raster, main=names(pa.raster))


In this plot, the presences (1) are shown in green and the absences (0) in light grey.

Helpful things to remember (or things I learnt the hard way)

  1. Make sure your species point data and raster are in the same projection and that they actually overlap!
  2. Set your desired raster extent and resolution in the mask.raster before you get started.
  3. The species point data that you feed into the function should just be a list of x and y co-ordinates – no species names or abundances or you’ll confuse the poor beast and it won’t work!

And yes, foxes are also present in Australia where they are a pest. I guess this map shows their natural range before people started doing silly things.

Brought to you by the powers of knitr & RWordpress

(Well it was and then things went a little awry, so I had to do some tinkering!)


Spatial troubles: Converting gps data between formats

I work a lot with spatial data which is great but it means I spend a lot of time cursing at my computer.  One of the problems with spatial data is the myriad of formats that it comes in and the intricacies of trying to read it into the variety of hardware and software available to use it.  One of the best ways I have found for navigating my way through this minefield has been drawing on the expertise of others, particularly blog posts where people have outlined potential solutions.  So I’m going to try and document some of my own solutions here.  This is partly for my own benefit, so I don’t have to reinvent the wheel every time I do something, but it would also be great to get feedback from those out in the blogosphere.  So I welcome your comments and suggestions.

I’m currently working with Adélie penguins at Cape Bird on Ross Island in Antarctica. Part of this work involves monitoring the nests of banded birds, where each nest is labelled with a numbered tag.  This is a great system except that the band numbers are not the same as the numbers on the tags and you can’t read the tags after a few weeks anyway as they get covered in penguin poo.  So we quite often need to use a gps to find and identify which nest we are looking at.

Banded Adelie penguin on a marked nest

A banded Adelie penguin with a satellite tag that it will wear for one foraging trip of 1-3 days. Note the yellow nest tag in front of the nest already partially covered in poo.

Our nest location data is entered manually from the gps into a FoxPro database (not the most efficient system) and then can be extracted again as a text file.  Our biggest problem is sharing data between multiple researchers in the field who use different gps units.  Which means converting our basic text file into multiple formats, which can be a trying exercise.

My goal was to find an efficient way of creating a file where the waypoints are labelled by nest number and the band number of the associated bird listed in the comments so we could cross-reference and make sure we were reading the poopy nest tag correctly.  And then convert this file to the appropriate file types needed for each of our gps units.

So I was pleased to discover GPSBabel, a free piece of software for converting gps data between multiple formats.  It took me a wee while to navigate my way around and figure out what I needed to do but now that I have figured it out, it is a very simple process using the Universal CSV option.

Converting spatial data from text file to gdb

1.  Open the text file in Excel (or your favourite spreadsheet program), rename the data columns using the appropriate keywords from the GPSBabel documentation (there are many more keywords listed online) and save as a .csv file.
alt = Altitude
date = Date (yyyy/mm/dd)
icon = Symbol (icon) name
lat = Latitude
lon = Longitude
name = Waypoint name (“Shortname”)
comment = Notes

So the top few rows of my file look something like this, where name represents the nest tag number and comment is the band number of the associated bird.
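For illustration, a hypothetical version of that file is below; the nest numbers, band numbers and coordinates are made up (the real coordinates would come out of the FoxPro database).

```
name,comment,lat,lon
101,V12345,-77.2245,166.4362
102,V12398,-77.2247,166.4365
103,V12401,-77.2249,166.4371
```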


2.  Open GPSBabel and set the input file to your newly formatted csv file, setting the format to Universal csv with field structure in the first line.

3.  Select the appropriate output format (I’ve been mostly using Garmin MapSource – gdb), name the output file and hit apply.

One of the nice features of GPSBabel is that it will open your data in Google Maps after the conversion, which provides a nice check that everything has worked how you expected.

From here, I can open the .gdb file in Garmin’s MapSource and upload it to the gps.  And then use the same .csv input file to generate other output files suitable for upload to our other gps units.  A nice simple solution.
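If you would rather script the conversion than click through the interface, GPSBabel's command-line version can do the same thing in one line, reading the Universal CSV format (field names in the first line) and writing a Garmin MapSource gdb. The file names here are placeholders.

```shell
# convert a Universal CSV waypoint file to Garmin MapSource gdb format
gpsbabel -i unicsv -f nests.csv -o gdb -F nests.gdb
```

Swapping the output format flag (-o) then generates the files needed for the other gps units from the same input csv.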