GISRUK 2012 – Wednesday

GISRUK 2012 was held in Lancaster, hosted by Lancaster University. The conference aimed to cover a broad range of subjects including Environmental Geoinformatics, Open GIS, Social GIS, Landscape Visibility and Visualisation, and Remote Sensing. In addition to the traditional format, this year's event celebrated the career of Stan Openshaw, a pioneer in the field of computational geography and a driving force in the early days of GIS.

Wednesday

The conference kicked off with a keynote from Professor Peter Atkinson of the University of Southampton, which demonstrated the use of remotely sensed data for spatial and temporal monitoring of environmental properties. Landsat provides researchers with 40 years of data, making it possible to track longer-term changes. Peter gave two use case examples:

  1. River channel monitoring on the Ganges. The Ganges forms the international boundary between India and Bangladesh, so understanding channel migration is extremely important for both countries. The influence of man-made structures, such as barrages built to divert water to Calcutta, can have a measurable effect on the river channel; barrages were found to stabilise the migrating channel.
  2. Monitoring regional phenology. Studying the biomass of vegetation directly is tricky, but “greenness” provides a useful proxy and can be calculated for large areas, up to continental scale. Peter gave an example where MODIS and MERIS data had been used to calculate the greenness of India. Analysis at this scale and resolution reveals patterns and regional variation, such as the apparent “double greening” of the western Ganges basin, which would allow farmers two harvests of some crops.
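Greenness of this kind is typically quantified with a vegetation index computed from satellite reflectance bands. The talk did not specify the index used, but a minimal sketch using NDVI (a common choice for MODIS-style data, with illustrative toy values rather than real imagery) looks like this:

```python
import numpy as np

def ndvi(nir, red):
    """Normalised Difference Vegetation Index: (NIR - red) / (NIR + red).
    Values near 1 indicate dense green vegetation; values near 0, bare soil."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

# Toy reflectance grids (rows x cols), standing in for resampled satellite bands
nir = np.array([[0.5, 0.6],
                [0.4, 0.3]])
red = np.array([[0.1, 0.1],
                [0.2, 0.3]])
print(ndvi(nir, red))
```

Because the index is a simple per-pixel ratio, it scales naturally from a single field to continent-sized grids, which is what makes the regional "greening" maps possible.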

However, these monitoring methods are not without their challenges and limitations. Remote sensing provides continuous coverage on a regular grid, whereas ground-based measurements are sparse and may not tie in, spatially or temporally, with the remotely sensed data. Ground-based phenology measurements can also be derived using a number of different methods, making comparisons difficult. A possible solution would be to adopt a crowd-sourcing approach where data is collected and submitted by enthusiasts in the field. This would certainly give a better spatial distribution of ground-based measurements, but would the resulting data be reliable? Automatically calculating greening from web-cams is currently being trialled.

The first session was then brought to a close with two talks on the use of terrestrial LiDAR. Andrew Bell (Queen's University Belfast) was investigating the use of terrestrial LiDAR for monitoring slopes. DEMs created from the scans were used to detect changes in slope, roughness and surface. The project aims to create a probability map identifying surfaces that are likely to fail and pose a hazard to the public. Andrew's team will soon receive some new airborne LiDAR data; however, I feel that if this technique is to be useful to the Highways Agency, the LiDAR would have to be mounted on a car, as cost and repeatability would be two key drivers. Andrew pointed out that this would reduce the accuracy of the data, but perhaps such a reduction would be acceptable and change would still be detectable.

Neil Slatcher’s (Lancaster University) paper discussed the importance of calculating the optimum location at which to deploy a terrestrial scanner. Neil’s research concentrated on lava flows, which meant the landscape was rugged, some areas were inaccessible, and the target was dynamic and had to be scanned in a relatively short period of time. When a target cannot be fully covered by a single scan, analysis of the best positions to give complete coverage is needed. Further, a 10 Hz scanner makes 10 measurements per second, which seems quick, but a dense grid can result in scan times in excess of 3 hours. By sub-dividing the scan into smaller scan windows centred over the target, you can significantly reduce the size of the grid and the number of measurements required, and hence the time it takes to acquire the data. This method reduced scan times from 3 hours to 1 hour 15 minutes.
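The arithmetic behind those scan times is straightforward: total points divided by measurement rate. The grid dimensions below are my own illustrative guesses (the talk didn't give them), chosen so the numbers match the 3-hour and 1h15m figures quoted:

```python
def scan_time_hours(n_rows, n_cols, rate_hz=10.0):
    """Time to raster-scan a grid of laser measurements at rate_hz points/second."""
    return (n_rows * n_cols) / rate_hz / 3600.0

# A dense full-scene grid: 360 x 300 = 108,000 points at 10 Hz -> 3.0 hours
full = scan_time_hours(360, 300)
# A scan window centred on the target: 150 x 300 = 45,000 points -> 1.25 hours
windowed = scan_time_hours(150, 300)
print(full, windowed)  # 3.0 1.25 (1.25 h = 1 h 15 min)
```

The point the sketch makes is that halving the scanned area halves the acquisition time linearly, which matters a great deal when the target (a lava flow) is moving.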

The final session of the day was split into two parallel strands, one on Mining Social Media and the other on Spatial Statistics. Both are interesting subjects, but I opted to attend the Social Media strand.

  • Lex Comber (University of Leicester) gave a presentation on Exploring the geographies in social networks. This highlighted that there are many methods for identifying clusters or communities in social data, but that the methods for understanding what a community actually means are still quite primitive.
  • Jonny Huck (Lancaster University) presented on Geocoding for social networking of social data. This focused on the Royal Wedding, as it was an announced event that was expected to generate traffic on social media, allowing the team to plan rather than react. They found that less than 1% of tweets contained explicit location information. You could parse the tweets to extract geographic information, but this introduced considerable uncertainty. Another option was to use the location information in users' profiles and assume they were at that location. The research looked at defining levels of detail, so Lancaster University Campus would be defined as Lancaster University Campus / Lancaster / Lancashire / England / UK. By geocoding the tweets at as many levels of detail as possible you could then run analysis at the appropriate level. What you had to be careful of was creating false hot-spots at the centroids of each country.
  • Omar Chaudhry (University of Edinburgh) explained the difficulties in Modelling Confidence in Extraction of Place Tags from Flickr. Using Edinburgh as a test case, they tried to use Flickr tags to define the dominant feature of each grid cell covering central Edinburgh. Issues arose when many photos were tagged for a personal event, such as a wedding, and efforts were made to reduce the impact of these events. Weighting the importance of a tag by the number of users who used it, rather than the absolute number of times it was used, seemed to improve results. There was still the issue of tags relating to what the photo was of, rather than where it was taken. Large features such as the Castle and Arthur's Seat dominated the coarser grids as they are visible over a wide area.
  • Andy Turner and Nick Malleson (University of Leeds) gave a double-header as they explained Applying geographical clustering methods to analyse geo-located open micro-blog posts: a case study of tweets around Leeds. The research showed just how much information you could extract from location information in tweets, almost giving you a socio-economic profile of the people posting. There was some interesting discussion around the ethics of this, specifically in relation to the Data Protection Act, which states that data can only be used for the purpose for which it was collected. Would this research/profiling be considered the purpose for which the original data had been collected? Probably not. However, that was part of the research: to see what you could do, and hence what companies could do if social media sites such as Twitter start to allow commercial organisations to access your personal information. For more information look at the paper, or check out Nick's blog.
  • One paper that was suggested as a good read on relating tweets to place and space was Tweets from Justin Bieber’s heart: the dynamics of the location field in user profiles.
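Jonny Huck's levels-of-detail idea can be sketched with a toy gazetteer: each known profile location maps to its chain of place names, and a tweet is geocoded at whatever level the analysis needs. The place names and lookup function here are my own illustration, not the team's actual implementation:

```python
# Toy gazetteer: a free-text profile location maps to its place hierarchy,
# most detailed level first (entries are illustrative examples only).
GAZETTEER = {
    "lancaster uni campus": ("Lancaster University Campus", "Lancaster",
                             "Lancashire", "England", "UK"),
    "edinburgh": ("Edinburgh", "Scotland", "UK"),
}

def geocode(profile_location, level):
    """Geocode a profile string at a requested level of detail (0 = finest).
    Clamps to the coarsest available level rather than failing, so analysis
    can always be run at a consistent (coarser) level across all tweets."""
    chain = GAZETTEER.get(profile_location.strip().lower())
    if chain is None:
        return None
    return chain[min(level, len(chain) - 1)]

print(geocode("Lancaster Uni Campus", 0))  # 'Lancaster University Campus'
print(geocode("Edinburgh", 4))             # clamps to coarsest: 'UK'
```

Binning tweets by the name at a chosen level, rather than by a single point, is one way to avoid the false hot-spots at country centroids mentioned above.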
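The user-weighting fix from Omar's talk is also easy to sketch: score each tag by the number of distinct users who applied it, so one person's fifty wedding photos count once. This is a minimal illustration with made-up data, not the project's actual pipeline:

```python
from collections import defaultdict

def tag_scores(photos):
    """Score each tag by the number of distinct users who applied it,
    damping personal events where one user tags many photos the same way."""
    users_per_tag = defaultdict(set)
    for user, tags in photos:
        for tag in tags:
            users_per_tag[tag].add(user)
    return {tag: len(users) for tag, users in users_per_tag.items()}

# (user, tags) pairs, one per photo -- illustrative only
photos = [
    ("alice", ["castle", "wedding"]),
    ("alice", ["wedding"]),
    ("alice", ["wedding"]),
    ("bob",   ["castle"]),
    ("carol", ["castle", "arthurs-seat"]),
]
print(tag_scores(photos))
# 'castle' scores 3 (three users); 'wedding' scores 1 despite three photos
```

By raw occurrence counts "wedding" would tie "castle" here; counting distinct users instead lets genuinely shared landmarks dominate each grid cell.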

I will post a summary of Thursday as soon as I can.

About Addy Pope

Addy is a member of the GeoData team at EDINA and works on services such as GoGeo, ShareGeo and the FieldtripGB app. Addy has over 10 years' experience as a geospatial analyst. Addy tweets as @go_geo
