Thursday, November 1, 2012

Mapping Hurricane Sandy

I am sure it is not news to many people that the path, destruction and emergency facilities for Hurricane Sandy have been mapped. The BBC provides a map of damage and emergency facilities (http://www.bbc.co.uk/news/world-us-canada-20154479). Likewise, other news organisations provide similar information to readers, but these maps tend to be fairly static, presenting a single set of information. Although you can zoom and change scale, they have a relatively low level of interactivity. These maps also rely on information being provided to the organisation from another source.

Integrating data sources and providing maps of information that is of use to a range of users is another, more difficult task. One of the key players in online mapping has been Google Interactive Maps (http://google.org/crisismap/sandy-2012). The map for Hurricane Sandy is produced under the umbrella of Google Crisis Response (http://www.google.org/crisisresponse/), a Google project that collaborates with NGOs, government agencies and commercial organisations to provide information on things such as storm paths and emergency facilities. Integrating these data sources to provide spatially located information of use to responders, locals affected by the hazard, interested news readers and the authorities involves a clear understanding of the requirements of each target audience and clear planning of how information is presented to each of them. It is worth having a look at the interactive maps to assess for yourself whether this complex task has been achieved. In particular, look at the type of information provided about particular resources, the nature of the resources mapped, the scale or level at which this information is displayed and the part of the audience you believe this information will be of use to (why you think so is the next question).

It is also useful to compare this structured information with the less structured flow of information from Twitter (http://www.guardian.co.uk/news/datablog/2012/oct/31/twitter-sandy-flooding?INTCMP=SRCH). This analysis by Mark Graham, Adham Tamer, Ning Wang and Scott Hale looked at the use of the terms 'flood' and 'flooding' in tweets in relation to Hurricane Sandy (see also the blog post at http://www.zerogeography.net/2012/10/data-shadows-of-hurricane.html). Although they were initially assessing whether there was a difference between English-language and Spanish-language tweets about the storm, their analysis pointed out that tweets were not that useful at providing information about the storm at a spatial resolution finer than a county (although it isn't clear to me, at least, whether they were mapping the location of the tweets or the contents of the tweets - in some cases the latter might provide more detailed spatial information on flooding and its impact, but would require extraction and interpretation from the tweet itself). This coarser spatial resolution and the unfiltered, personal, subjective and unco-ordinated nature of this information source mean that it is more difficult to quickly translate into information that is 'useable' by other audiences. Mark Graham is a researcher at the Oxford Internet Institute (http://www.oii.ox.ac.uk/) and his webpage (http://www.oii.ox.ac.uk/people/?id=165) and blog are definitely worth a look for anyone interested in mapping and internet and mobile technologies.

 

Monday, October 29, 2012

Using St Paul’s Erosion Data to Predict Future Stone Decay in Central London

In a recent blog post I mentioned a research project just completed on a 30-year remeasurement of stone decay on St Paul's Cathedral in central London. A second paper looks at how these data might be used to model decay into the future (http://www.sciencedirect.com/science/article/pii/S1352231012007145 - you need to have an account to get access to the full paper in Atmospheric Environment). Modelling erosion rates into the future tends to use relationships derived from erosion data for small (50x50x10mm) stone tablets exposed in different environmental conditions. Using such data and regression analysis, a statistical relationship can be derived between stone loss and changing environmental conditions. These relationships are often referred to as dose-response functions.
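
As a rough illustration of the idea (not the actual Lipfert or Tidblad et al. functions, whose forms and coefficients are given in the literature), a dose-response function can be derived by regressing tablet loss against environmental variables. All of the numbers and variable names below are invented.

```python
# A minimal sketch of deriving a dose-response function by regression.
# The data and the simple linear form used here are purely illustrative;
# the published functions have their own forms and fitted coefficients.
import numpy as np

# Hypothetical tablet-exposure data from several sites:
so2 = np.array([80.0, 60.0, 40.0, 20.0, 10.0, 3.0])          # sulphur dioxide, ppb
rain = np.array([600.0, 650.0, 700.0, 620.0, 680.0, 640.0])  # rainfall, mm per year
loss = np.array([48.0, 40.0, 30.0, 22.0, 16.0, 12.0])        # surface loss, microns per year

# Fit a simple linear dose-response function: loss = a + b*SO2 + c*rain
X = np.column_stack([np.ones_like(so2), so2, rain])
a, b, c = np.linalg.lstsq(X, loss, rcond=None)[0]
print(f"loss ~ {a:.2f} + {b:.3f}*SO2 + {c:.4f}*rain (microns per year)")

# The fitted function can then be fed projected SO2 and rainfall values
# to estimate future erosion rates.
```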


Two equations stand out – the Lipfert and the Tidblad et al. dose-response functions. For the decades 1980-2010, these two equations predict erosion rates of 15 and 12 microns per year, as opposed to the measured losses on St Paul's of 49 and 35 microns per year. The ratio between the measured and the dose-response erosion rates varies from 3.33 in the decade 1980-1990 to 2.75 in the decade 2000-2010, so it is fairly consistent. The difference between the two measures of decay may result from differences in what they are actually measuring. The dose-response functions use small stone tablets, exposed vertically in polluted environments. The weight loss of these tablets is measured and then converted to a loss across the whole surface of the tablet. The micro-erosion meter sites measure the loss of height at a number of points across the same surface on a decadal time scale. Both measures are changes in height, but derived in different ways. What is important is that both methods indicate the same patterns of change in relation to declining sulphur dioxide levels. Both measures of erosion show a decline, both show it in the same direction and, by and large, they are in proportion to each other. Interestingly, when the dose-response functions are used to work out erosion on the cathedral since it was built, the long-term erosion rate (as measured by lead fin heights relative to the stone surface) is only 2.5 times greater than that predicted by the dose-response functions. This is, more or less, a similar ratio to those found over the last three decades.

The St Paul's data does not imply that dose-response functions do not work – if anything it confirms the patterns in decay they indicate – but it does suggest that using these dose-response functions to model decay into the future may require a correction factor, equivalent to the ratio of about 2.5-2.75, to convert the predicted losses to those that will actually be found on St Paul's Cathedral.

Atmospheric Pollution and Stone Decay: St Paul’s Cathedral

I have recently published a paper with colleagues from Oxford, Cambridge, Sussex and York on a 30-year measurement of erosion rates on St Paul's Cathedral, London (http://www.sciencedirect.com/science/article/pii/S1352231012008400 - you need to have an account with the journal to access the paper). Pleasingly, the academic work did get some press (http://www.independent.co.uk/news/uk/home-news/pollution-erosion-at-st-pauls-cathedral-in-record-300year-low-8205562.html , http://www.stpauls.co.uk/News-Press/Latest-News/St-Pauls-safer-from-pollution-than-at-any-time-in-its-history) and I even did a very short interview on local radio (distracted during it because they had just started their afternoon quiz, to which I knew the answer!)


The paper outlines how the rates of erosion (and the rates of surface change) of five micro-erosion meter sites around the cathedral have changed over the decades since 1980 and how this has mirrored a dramatic fall in pollution levels in central London.



Figure 1 Micro-erosion meter site with protective caps being removed for remeasurement

Figure 2 Erosion rates, rates of surface change and environmental variables for central London 1980-2010

Erosion rates have dropped since the closure of Bankside power station in the early 1980s, with atmospheric pollution, as indicated by sulphur dioxide levels, dropping from 80 ppb in 1980 to about 3 ppb in 2010. Erosion rates fell from 49 microns per year in the decade 1980-1990 to 35 microns per year in the decade 2000-2010. Erosion rates in the decade 1980-1990 were statistically significantly higher than erosion rates in both the decades 1990-2000 and 2000-2010, while erosion rates in the decades 1990-2000 and 2000-2010 were statistically similar. Although the decline in erosion rates was not as steep as the fall in pollution levels, erosion rates are now at a level that could be explained by the acidity of 'normal' or 'natural' rainfall alone. 'Normal' rainfall is a weak carbonic acid, produced by the reaction of carbon dioxide and water in the atmosphere, which gives it an acidity of about pH 5.6.

Erosion rates represent the loss of material from a surface, but not all measured points lost material; some gained height over the measurement periods – this is surface change. Points can gain height for a number of reasons: salt in the stone could distort and push the surface up, lichens and bacteria could form crusts that raise the surface, and eroded material might be deposited in depressions in the surface, causing an apparent raising of the surface. The rates of surface change were 44 microns per year in the decade 1980-1990 but had fallen to around 25-26 microns per year by the decade 2000-2010. This suggests that rates of surface change fell in a similar manner to rates of erosion and also match the drop in sulphur dioxide levels.

Back in the 1980s the long-term rates of erosion, since the early 1700s, were also determined using lead fins. These lead fins were produced when the holes used to raise the stone blocks into the balustrade were filled with lead. Over time the fins became proud of the surface as the stone around them eroded. By measuring the height difference between the fins and the stone (and then dividing by the time of exposure) the long-term rates of erosion can be calculated. The long-term rates from 1690/1700 to 1980 were about 78 microns per year. This suggests that the cathedral experienced much higher erosion in the decades before 1980, and that the erosion rates we have measured from 1980 onwards, and the associated pollution levels, were not as damaging to the cathedral as those experienced in the years up to 1980.
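
The arithmetic behind the lead-fin estimate is simple enough to sketch. The fin height below is back-calculated from the roughly 78 microns per year figure quoted above, so it is illustrative rather than a measured value.

```python
# Long-term erosion rate from a lead fin: the height of the fin above the
# surrounding stone divided by the years the surface has been exposed.
fin_height_microns = 22_000   # fin standing ~22 mm proud of the stone (illustrative)
exposure_years = 1980 - 1695  # exposed since roughly 1690/1700

rate = fin_height_microns / exposure_years
print(f"Long-term erosion rate: {rate:.0f} microns per year")  # roughly 77
```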

The L’Aquila Case and Legal Protection for Scientists

Charlotte Pritchard's recent BBC article (http://www.bbc.co.uk/news/magazine-20097554) raises an interesting question – should scientists stop giving advice and, if not, should they have professional indemnity insurance? This cover, as a lot of eager insurance websites will tell you, is designed to protect professionals and their businesses if clients (or a third party) make a claim against them because they believe they have suffered loss due to non-performance, breach of contract or professional negligence, or all of the above. Insurance cover is up to a maximum limit outlined in the policy and, presumably, based on the losses for similar types of professional activity in the past and the likelihood or probability of a claim being made against a particular type of professional. Such policies are standard tools of legal protection for architects, engineers, business consultants, insurance brokers (ironic?), solicitors, accountants and independent financial advisers. Pritchard points out that even the Met Office has a professional indemnity self-insurance fund in case a forecaster fails to predict a flood that results in the loss of life (how things have moved on since Michael Fish!)
Transferring this type of policy into the academic realm is not unusual; several of my colleagues have such policies when they undertake consultancy work, and many universities' commercial arms offer such cover. A key question is whether the nature of the information being provided is the same for all these professions, or whether scientific information is of a different type. Is it the information that is the same, or is it the intended use of that information that requires the provider to have legal protection? If I were an engineer advising on a building project, the audience – the investors, the builders, etc. – employ me to ensure that the result is satisfactory: the building stays up (simplistic, but you get the idea). There is a definite, time-limited and clearly defined outcome that my advice is meant to help achieve. Is this the case for scientific advice about the possibility of a hazardous event? Is the outcome clearly defined, or is there variability between the expectations of the audience and those of the information provider? Aren't experts in both cases offering their best 'guesses' given their expertise and standards in specific areas?
The development and (by implication from Pritchard) the almost essential nature of legal protection for people giving advice tell us a lot about current attitudes towards, and beliefs about, science and prediction. Pritchard quotes David Spiegelhalter, Professor of Public Understanding of Risk at the University of Cambridge, as stating:

“At that point you start feeling exposed given the increasingly litigious society, and that's an awful shame…. It would be terrible if we started practising defensive science and the only statements we made were bland things that never actually drew one conclusion or another. But of course if scientists are worried, that's what will happen."

The belief that science can offer absolute statements concerning prediction underlies the issue at L'Aquila. Despite the careful nature of the scientific deliberations, the press conference communicated a level of certainty at odds with the understanding of seismic activity and with the nature of risk in seismology. The belief that seismic events are predictable in terms of absolute time and location is at odds with what is achievable in seismology. By extension, this view assumes that scientists understand the event and understand how it is caused, and that understanding causation leads to accurate prediction. This ignores the level and nature of understanding in science. Scientists build up a model, a simplification of reality, in order to understand events and the data that they collect. This model is modified as more information is produced, but it is never perfect. The parts of the model are linked to one another by the causes and processes the scientists believe are important, but these can be modified or even totally discarded as more events and more information are added. So it is feasible to understand, broadly, how seismic events occur without being able to translate this understanding into a fully functional and precise model of reality that can predict exactly when and where an earthquake will occur.

If scientists are held legally to account for the inexact nature of the scientific method then there are major problems with any scientist wanting to provide any information or advice to any organisation.

Communicating the inexact nature of our understanding of reality, however, is another issue. If the public and organisations want an accuracy in predictions that scientists know is impossible, then the 'defensive science' noted by Spiegelhalter will become the norm in any communication. Bland statements of the risk of an event will be provided and, to avoid blame, scientists and their associated civil servants will always err on the side of caution, i.e. state a risk level that is beyond the level they would state to colleagues. Even this type of risk communication carries its own risks – stating that it will rain in the south of England on a bank holiday could deter visitors to the seaside, and when that happens couldn't businesses in coastal resorts sue or provide their own information (Bournemouth launches own weather site - http://news.bbc.co.uk/1/hi/england/dorset/8695103.stm)? If the reports conflict, then who should the public believe?

Bland science implies communication that scientists perceive to be of least risk to them personally. This could vary from person to person and from institution to institution so the level of ‘risk’ deemed acceptable to communicate as ‘real’ to the public will begin to vary.

There is no easy answer to this issue, and whilst there isn't one, legal protection sounds a reasonable way to go if you want to make your scientific knowledge socially relevant. It may, however, be worth thinking about the ideas scientists try to transmit as messages. Three simple questions then spring to mind: what is the transmitter, what is the message and what is the audience? The scientist (transmitter) will have their own agenda, language and views on the nature of the message. The message itself will be communicated in a specific form along specific channels, all of which can alter its original meaning or even shape its meaning. Likewise, the audience is not a blank, passive set of receivers – they have their own views and agendas and will interpret the message accordingly. More time spent understanding how the scientific message is communicated may help to ensure that the message is interpreted by the audience(s) in the way the scientist intended.

Friday, October 26, 2012

The ‘Michael Fish Effect’: The L’Aquila Case and Expectations of Science


On 15th October 1987 weather presenter Michael Fish, dressed in the loud-ish tie and bland suit typical of the 1980s, delivered one of the most infamous statements in British science – 'Earlier on today, apparently, a woman rang the BBC and said she heard there was a hurricane on the way... well, if you're watching, don't worry, there isn't!' Hours later the most severe storm since 1703 hit the south-east of England, killing 18 people. Although Michael Fish claims to have warned viewers to 'batten down the hatches' later in the report, the story of this error of scientific prediction has passed into meteorological folklore. Michael Fish, however, was not blamed for the storm, for its misprediction or for the deaths that followed; in fact the whole incident has taken on a mocking and good-humoured tone that has kept Michael Fish fondly in the public memory. The recent trial in L'Aquila has drawn comparisons with Michael Fish's pronouncement, but the comparison is not that simple.

The trial and sentencing of seven Italians (six seismologists and a civil servant) by an Italian judge on 22nd October for 'crimes' in relation to the L'Aquila earthquake of 6th April 2009 has, rightly, sent shockwaves around the scientific world (http://www.guardian.co.uk/science/2012/oct/22/scientists-convicted-manslaughter-earthquake, http://www.bbc.co.uk/news/world-europe-20025626). The six-year jail sentences (although no sentence will be implemented until at least one appeal under Italian law) are for the crime of multiple manslaughter. The verdict was reached in a trial by judge (rather than a jury trial), the judge deciding after only four hours of deliberation that the men were guilty of providing "inexact, incomplete and contradictory information" about whether the small tremors felt in the area in the weeks and months before the larger earthquake were the basis for an earthquake warning. The implication of society persecuting science has even drawn parallels with the trial of Galileo in 1633 (http://www.guardian.co.uk/science/across-the-universe/2012/oct/24/galileo-laquila-earthquake-italian-science-trial).

Behind the initial shock, however, the verdict can also be viewed as being about a failure to communicate risk rather than a failure of science to accurately predict an event (http://www.newscientist.com/article/dn22416-italian-earthquake-case-is-no-antiscience-witchhunt.html). The prosecution in the case made it clear that the accusations were about poor communication of risk and not about the science as such. The New Scientist article makes it clear that the communication of the risk was left to a civil servant with no specialism in seismology. His statement that:
 "The scientific community tells us there is no danger, because there is an ongoing discharge of energy. The situation looks favourable." 
should have rung alarm bells with the seismologists on the Major Hazards Committee, but none were present at the press conference to correct this simplistic (and potentially incorrect) statement.

Nature offers an even more detailed analysis of the miscommunication of science involved in the statement issued by the civil servant (http://www.nature.com/news/2010/100622/full/465992a.html). Nature looked at the minutes of the meeting held between all seven men on 31st March 2009. In the meeting, none of the scientists stated that there was no danger of a large earthquake, nor did they state that a swarm of small quakes meant there would not be a large one. The prosecution claimed that the statement at the press conference persuaded a lot of people to remain in their homes who would otherwise have left the region, hence the charge of multiple manslaughter.

The whole case highlights, for me, two important issues. Firstly, what are the expectations of science held by the public, by government and by the scientists themselves? The minutes of the meeting make it clear that the scientists put over views couched in clear scientific terms of uncertainty and unpredictability about seismic events. Uncertainty and unpredictability are commonplace in science. Recognizing the limits to what we know about the physical environment, and how this impacts on our ability to model what little we do know to produce predictions, is an important aspect of science. Is this acceptance of our ignorance and inability what decision makers or the public want to hear in a crisis situation? Is the image that these groups have of science a bit different from the one scientists have? The expectation of certainty, of clear yes or no answers to specific questions, seems to be an expectation that science cannot fulfil. The statement of the civil servant may reflect his interpretation of the committee discussion, but the terms used are ones that reveal a desire to communicate certainty, a quality science by its very nature cannot provide. Science does not work by finding the truth but rather by eliminating the false. This is a long and painful process of rejecting errors and accepting, for the time being, whatever ideas are left, even though you know that one day these ideas themselves may be altered or rejected in the light of new evidence.

Secondly, the trial highlights that scientists should realise that they work in society and society has expectations of them. Leaving communication of a complex and inconclusive discussion to a civil servant may have seemed appropriate to the scientists, but it also implies a view that it was not their responsibility. Sitting on a national committee such as the Major Hazards Committee not only means that you are a highly respected member of the scientific community, it also means that you believe there is a social benefit to be gained from your knowledge. That social benefit is drastically reduced if you are unable to communicate effectively to a vital audience, the public. Assuming that you do not have to work at communicating this knowledge, or that it will be communicated accurately for you, is delegating responsibility for your views to someone else. In this case delegation means loss of control of your views. Science is difficult and science is complex, but just saying this does not mean that you should not try to communicate the issues involved in a complicated subject such as seismology. You may not be providing the answers people want to hear, but then again you are not providing them with simplistic and wrong answers. At least Michael Fish didn't rely on someone else to communicate his mistake - it was all his own work - just like his tie.

Wednesday, August 22, 2012

Global Death Toll From Landslides

A recent paper by Dave Petley in the journal Geology takes data from across the globe to quantify and map the spatial variation in deaths caused by landslides. The data cover non-seismic landslides between 2004 and 2010: over 2,500 landslides and more than 32,000 deaths. A number of 'hotspots' are identified, including Indonesia, the southern and eastern coastal regions of China and central China (Sichuan basin). The report is useful in identifying recent patterns in deaths and so, by implication, landslide risk. Dave Petley suggests that:


"Areas with a combination of high relief, intense rainfall, and a high population density are most likely to experience high numbers of fatal landslides"

For information on the paper go to the BBC report or to Dave's landslide blog

Urban centres and flood risk

A recent BBC article (Shanghai ‘most vulnerable to flood risk’) reports on a paper published in Natural Hazards. The paper, ‘A flood vulnerability index for coastal cities and its use in assessing climate change impacts’ by Balica, Wright and van der Meulen, follows in a tradition of trying to quantify risk using a set of key variables. (I think the paper is on open access so you should be able to read it via the link.) The authors develop what they call the Coastal City Flood Vulnerability Index (CCFVI), which is composed of three parts: the hydro-geologic, the socio-economic and the politico-administrative. These parts represent the three key interacting subsystems that affect coastal flooding: the natural subsystem, the socio-economic subsystem and the administrative and institutional subsystem. Within each of these the authors identify variables that indicate the degree of exposure to the hazard, the susceptibility to the hazard and the resilience to the hazard. The hydro-geologic part only has indicators of exposure, whilst the ‘human’ parts have indicators of all three.
Exposure is defined as the predisposition of a system to be disrupted by a flood event due to its location. Susceptibility is defined as the elements exposed within the system that influence the probability of being damaged during the flood event. Resilience is defined as the ability of a system, community or society to adapt to a hazard, assessed through an evaluation of political, administrative, environmental and social organisation. Variables selected for the hydro-geologic subsystem include sea-level rise, storm surge, the number of cyclones in the last five years, river discharge, foreshore slope and soil subsidence. For the socio-economic subsystem, the population close to the shoreline, the growing coastal population and cultural heritage are included as exposure factors, whilst uncontrolled planning zones are an exposure variable for the political and administrative subsystem. Susceptibility variables include the percentage of the population that is disabled, young or old, and the existence of flood hazard maps. Resilience variables include shelters, level of awareness, institutional organisations and flood protection.
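
As a rough sketch of how indicators like these can be turned into a composite index (the normalisation, weighting and exact formula used for the published CCFVI are set out in the paper and will differ), each indicator is scaled to a common range and the subsystem scores are then combined, with exposure and susceptibility raising vulnerability and resilience lowering it:

```python
# Illustrative composite vulnerability index: scale raw indicators to 0-1,
# then combine them so that exposure and susceptibility raise the score
# and resilience lowers it. The indicator values, the subsystem formula
# and the final combination are all hypothetical, not the published CCFVI.

def normalise(value, minimum, maximum):
    """Scale a raw indicator onto 0-1 relative to its range across all cities."""
    return (value - minimum) / (maximum - minimum)

def subsystem_score(exposure, susceptibility, resilience):
    """Higher exposure/susceptibility and lower resilience give higher vulnerability."""
    return exposure * susceptibility / max(resilience, 1e-6)

# Hypothetical raw indicators for one city, normalised against assumed ranges:
coastline_exposure = normalise(172.0, 0.0, 200.0)    # e.g. km of exposed coastline
population_exposure = normalise(6.5, 0.0, 10.0)      # e.g. millions living near the shore

# The hydro-geologic part has exposure indicators only, so susceptibility and
# resilience are left neutral (1.0) for that subsystem.
hydro = subsystem_score(exposure=coastline_exposure, susceptibility=1.0, resilience=1.0)
socio = subsystem_score(exposure=population_exposure, susceptibility=0.5, resilience=0.4)
admin = subsystem_score(exposure=0.6, susceptibility=0.3, resilience=0.5)

vulnerability = hydro + socio + admin   # one possible way of combining the subsystems
print(f"Composite vulnerability score: {vulnerability:.2f}")
```
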
The paper carries out a detailed analysis of each subsystem and then combines the indicators into a single equation to determine overall vulnerability. The selection of variables is well argued, and the complexity and issues of using such indexes are discussed well, so the authors do not have a simplistic interpretation of hazards and vulnerability. Any paper that tries to squeeze and freeze the complex and dynamic concept of risk into a single index will always have the problem of simplification – simplification not only of the subsystems but also of the interpretation by others of the index itself.
The variables selected may reflect the data readily available, plus a particular view of how the flood hazard should be alleviated. The focus on institutional organisations as resilience does imply a rather hierarchical view of hazard management and prevention (maybe a valid approach for a set of large urban areas with low social cohesion). Interpretation of the index, as in the BBC report, tends to focus on the final product rather than on the variables used in its construction and the ratings of the subsystems. Questions could be raised about the appropriateness of the same variables for cities across the globe, or about the selection of those variables in the first place. Looking in detail at the breakdown of the index, it is clear that Shanghai is the most ‘vulnerable’ city on the variables used for the hydro-geologic subsystem because of its long coastline and high river discharge (plus high soil subsidence). Manila, however, is ranked second because of its exposure to tropical cyclones and flooding – can the same index combine both types of exposure? Does this mean that Manila is a more vulnerable location, as tropical cyclones are more frequent than high discharges? Can degrees of difference in vulnerability, or rather exposure, be assessed using a combined index? Shanghai is not, however, the top-ranked city for all subsystems. For the economic variables, Shanghai is ranked fourth, meaning that it is likely to recover quickly, economically at least, from the effects of a flood event.
An index like the one presented in this paper is very, very useful. It can be used, as the authors have done, to try to predict how changes in climate could impact on hazards, and as such can be of great use in planning and management. A single index should, however, be used with caution, particularly if the choice of variables reflects a particular view of hazard management. Similarly, understanding how the index is constructed and how different parts of the index contribute to the whole is vital in understanding where vulnerability (and resilience) lie and how these might be improved.


Friday, July 27, 2012

Public Risk Communication

Communicating risk to the general public is a vital task in managing risk. The UK government has produced a leaflet outlining a ‘Practical Guide to Public Risk Communication’. The leaflet is a very short guide to practice. The three key aspects of risk communication are to reduce anxiety around risks, to manage risk awareness and to raise awareness of certain risks. There are 5 key elements to public risk communication: assembling the evidence, acknowledgement of public perspectives, analysis of options, authority in charge and interacting with your audience.


Each of these elements is expanded and discussed via a set of questions that organisations should consider. The first element is concerned with establishing the nature of the risk and its magnitude and demonstrating a credible basis for the position taken by the organisation. Evidence is paramount in this element, but so too, implicitly, is the question of trust or belief in the evidence. Who provides the evidence, and the basis of that evidence, are as important as clearly articulating the risk. As part of this aspect of trust, the question of ambiguity and uncertainty is bound to arise. Despite what politicians may wish, science is inexact and is often laden with uncertainty. How organisations deal honestly with uncertainty can have a huge bearing on the trust they build and retain between hazardous events.

Understanding how the public understand risk is essential to getting the message across. Lumping everyone in as ‘the public’ may not be that helpful though, as the leaflet notes. Assuming everyone perceives a hazard in the same way and in a consistent manner may be hoping for too much. The leaflet uses the term ‘risk actors’ as a catch-all term for individuals or groups who engage with a risk or who influence others’ approaches to and understanding of risk. Perception, and so the message about risk, should be differentiated, but that differentiation may depend upon the exact mix of risk and risk actors. An issue I would like to raise here is whether, once you are a risk actor, you are no longer a member of the public. A home-owner who has recently experienced a flood may be much more active in their local community in taking steps to reduce the flood risk – does that make them a risk actor, a well-informed member of the public or simply a member of the public, and does this matter for how risk is communicated to them? Risk actors could be viewed as being biased, as having their own agendas, and so not really reflective of the views of the general public.

Analysing options suggests that organisations have rationally weighed the risks and benefits of managing public risk as well as the options for action available to them. The last sentence is interesting: ‘the technical and societal interests will need to be reconciled if the solution is to be generally accepted.’ This sentence is made in relation to technical solutions that may not have public sympathy initially. The implication that through debate these solutions can come to be accepted implies a very hierarchical and technocratic view of hazards and risk management – the public have to be educated and brought to accept the solution. I may be reading this aspect too cynically, but maybe not.

The authority in charge section, however, adds weight to the idea that this is a technocratic and hierarchical view of hazards (maybe not a surprise in a government publication!) The need for an organisation to determine if it is appropriate for it to step in to manage risk, and the clear limits of responsibility for risk management, imply a very structured and ordered view of how to manage the world and its associated risks. A telling point is made that exercising authority now is as much about building and maintaining trust as it is about lines of formal authority. Trust, or rather the perception of trust, will dramatically affect the ability of an organisation to manage risk in a sceptical society. The last sentence states ‘Organisations that are not highly trusted will increase the chances of success by enlisting the help of other organisations – such as independent scientific organisations – who have the confidence of society’. A call to collaboration or a call to objectivity?

So is it all ‘smoke and mirrors’ or does this leaflet help to further risk management? Without a doubt, communicating risk and its management effectively to different audiences is essential, and the leaflet does provide some very good guidance on this. The ideas should not, however, be used uncritically, as they are designed with a very technocratic and hierarchical view of risk management in mind. Examples of public risk communication are also provided, and for flooding the conclusions reflect this bias (starting on page 22 of the report). Risk quantification is sought for the flood hazard, as are software and tools for aiding local risk planning and for managing the possible clash between expectations of more flood defence infrastructure and the new risk-based approach (risk is the focus rather than the hazard itself). Communication about risk and its management is viewed as coming from the Environment Agency, insurance companies and DEFRA – not much about local ideas or communication from the ground up! The link with insurance companies is, however, viewed as potentially corrosive to public trust. This hits at the nub of the issue: the government wants the trust of people to be able to act in a manner it views as appropriate. Actions, of necessity in a complex society, require working with organisations with their own agendas, and this creates suspicion. Can risk ever be managed without trust, and can you manage anything without eroding some degree of trust somewhere?



Urban Air Pollution Around the World

Two interesting blogs and a website covering all issues concerning atmospheric pollution can be found at urbanemission.blogspot.co.uk/ , aipollutionalerts.blogspot.co.uk/ and at urbanemissions.info/ . All of these sites are run by Sarath Guttikunda from New Delhi. An important aspect of these sites is the reports from all over the globe concerning atmospheric pollution in urban areas. The reports highlight that atmospheric pollution is a global issue, a world-wide problem that needs action. Taking action, however, requires information that can inform decision-makers about the extent of the problem. These sites also provide this by linking through to monitoring information from urban areas around the world. The sites also highlight that local populations, communities and neighbourhoods are not just passively sitting there waiting for decision-makers to make decisions. The volume of reports and the level of community awareness show that the concern and impetus for change driven by the local level certainly exist. Enabling those changes is another issue, one that is dependent on local conditions and their political, economic and social context. The information and data provided by these websites, however, do permit individuals and communities from all over the world to compare their conditions with others in similar circumstances and to exchange ideas and plans for pressuring decision-makers for change.

The Urban Emissions website is worth a look as well for the modelling tools that are available for download. An interesting one, given my last blog, is the Air Quality Indicator download. This simple calculator helps you work out the air quality for an urban area based on daily observations or modelled values. It does, of course, assume that the data will be available in the first place!

Understanding Daily Air Quality

Atmospheric pollution is a continuing environmental problem across the globe. Within the UK, data on historic as well as current pollution levels can be found at the DEFRA Air Quality site – a great store of information and one that you can download data from.

Wonderful as this source is for research, atmospheric pollution is not a problem that has passed or that is under tight control on a global scale. The locations of UK monitoring data reflect the networks set up in the 1960s by Warren Springs Laboratory, largely in response to the Clean Air Act (1956) and the need to monitor levels of pollutants to ensure that standards were being met. The early monitors tended to be instruments such as the sulphur dioxide bubbler (so old I couldn't find a photo of it on the Web!). Air was pumped into the machine at a known rate and the atmosphere reacted with the liquid as it bubbled through. After a day the flask with the liquid in was removed and replaced with another flask of liquid. The liquid from the previous day was analysed using titration techniques (reacting the sulphur dioxide with another chemical to get a colour change and a reaction product that could be accurately measured) to determine the levels of sulphur dioxide (once the various calibrations and calculations had been done). I know this because I used an old bubbler in my thesis to monitor sulphur dioxide levels on the roof of the Geography Department at UCL, London. It was educational, but it was a pain to have to process daily, particularly as I was self-taught in the titration, much to the amusement of colleagues in the lab. Passive monitors such as nitration tubes (they just sit there and the pollutants react with them) were also used, but these still needed chemical post-processing to obtain a result.

By the time I finished my thesis in 1989, real-time monitoring of pollutants, or at least hourly averaged and then 15-minute averaged values, was becoming more usual and replacing the daily averaged data. This is great for monitoring levels virtually continuously and for identifying specific pollution episodes, but how much information is there and how can you interpret it? Air quality standards have varying monitoring levels for different pollutants, and even the same pollutant can have different exceedence values. Sulphur dioxide levels in the UK, for example, should not exceed 266 micrograms/m3 more than 35 times per year if measured as averaged 15-minute concentrations. If measured as 1-hourly means, then 350 micrograms/m3 should not be exceeded more than 24 times per year. If measured as a 24-hour average, then 125 micrograms/m3 should not be exceeded more than 3 times a year. So the limits change with the monitoring period and the type of equipment being used to monitor pollution levels. This variation may begin to get confusing if you try to communicate it to too many different end-users.
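
A minimal sketch of checking a year of sulphur dioxide data against these three objectives, assuming the readings are held as a time-indexed pandas Series of 15-minute means (the function name and data layout are my own, not DEFRA's):

```python
# Count exceedances of the UK sulphur dioxide objectives at different
# averaging periods. `so2_15min` is assumed to be a pandas Series of
# 15-minute mean concentrations (micrograms/m3) indexed by timestamp.
import pandas as pd

def so2_exceedances(so2_15min: pd.Series) -> dict:
    hourly = so2_15min.resample("1h").mean()
    daily = so2_15min.resample("1D").mean()
    return {
        "15-min > 266 (limit 35/yr)": int((so2_15min > 266).sum()),
        "hourly > 350 (limit 24/yr)": int((hourly > 350).sum()),
        "daily > 125 (limit 3/yr)": int((daily > 125).sum()),
    }

# Example with synthetic data (a clean year of constant low concentrations):
index = pd.date_range("2012-01-01", periods=4 * 24 * 366, freq="15min")
series = pd.Series(50.0, index=index)
print(so2_exceedances(series))
```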


A simplified version, the Daily Air Quality Index, recommended by the Committee on the Medical Effects of Air Pollutants (COMEAP), uses an index and banding system with bands numbered 1-10 and colour coded for the severity of atmospheric pollution. The scale mirrors established ones for pollen and sunburn, so it draws on existing public understanding of colour and levels. Bands 1-3 are green and represent low atmospheric pollution levels, 4-6 are shades of orange and represent moderate levels, 7-9 are darkening shades of red ending up at brown and represent high levels, whilst the oddly coloured purple band of 10 represents very high levels of atmospheric pollution. The index itself combines the highest concentrations for a site or region of five key pollutants: nitrogen dioxide, sulphur dioxide, ozone, PM2.5 and PM10.
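
The banding logic itself is easy to sketch: each pollutant gets a band of 1-10 from its own breakpoints, and the overall index is simply the highest band among the pollutants reported. The breakpoint values below are placeholders rather than the official COMEAP numbers.

```python
# Sketch of DAQI-style banding: each pollutant is assigned a band of 1-10
# from its own breakpoints, and the overall index is the highest band.
# The breakpoints here are placeholders, not the official values.

EXAMPLE_BREAKPOINTS = {
    # pollutant: concentration upper bounds for bands 1..9 (band 10 is anything above)
    "pm25_24h": [11, 23, 35, 41, 47, 53, 58, 64, 70],
    "o3_8h":    [33, 66, 100, 120, 140, 160, 187, 213, 240],
}

def band(pollutant: str, concentration: float) -> int:
    for i, upper in enumerate(EXAMPLE_BREAKPOINTS[pollutant], start=1):
        if concentration <= upper:
            return i
    return 10

def daqi(concentrations: dict) -> int:
    """Overall index = highest band across the reported pollutants."""
    return max(band(p, c) for p, c in concentrations.items())

def descriptor(index: int) -> str:
    if index <= 3:
        return "Low (green)"
    if index <= 6:
        return "Moderate (orange)"
    if index <= 9:
        return "High (red)"
    return "Very High (purple)"

obs = {"pm25_24h": 40.0, "o3_8h": 90.0}
print(daqi(obs), descriptor(daqi(obs)))   # band 4 -> Moderate
```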

The DAQI may be a useful tool for communicating information about the general level of pollution as it relates to human health, but does its simplicity mask complexity that disaggregated data would not? The relative contribution of the five pollutants to the index can be gauged from the information on each at the DEFRA website. PM2.5 and PM10 use 24-hour running mean concentrations and have specific threshold levels for each band, whilst sulphur dioxide is measured as 15-minute averaged concentrations and, again, has threshold values for each band. The index itself, though, hides whether all, or just one or two, of the pollutants push the DAQI into a band. The index also misses other pollutants that could impact upon human health, such as benzene, even though these may be monitored. The cocktail of pollutants used to create the index also reflects a specific context, the UK – would the cocktail of significant pollutants vary in other contexts? The cocktail and the monitoring intervals are not necessarily ‘natural’ ones – they have been developed from monitoring set up for other purposes such as regulatory requirements. The index is squeezed out of what already exists.


The DAQI is a very, very useful tool, but it reflects an attempt to communicate complex and huge volumes of information in a simplified manner that, the makers believe, will be of use to specific end-users. Once data is compressed and simplified you are bound to lose some of the information contained in its variations and detail. The index you develop for targeted end-users will, of necessity, exclude a lot of the information you have collected, and it is useful, for the end-users in particular, to be aware of this.





Wednesday, July 25, 2012

Beijing Air Quality – Citizen-Science Approach to Mapping Levels?

A recent article in Environmental Technology Online reports on a community-based science project called ‘Float’ that is part science and part art project. The idea is that pollution-sensitive kites will be flown over Beijing. These kites carry Arduino pollution-sensing modules and LED lights and will indicate levels of volatile organic compounds, carbon monoxide and particulate matter by changing colour to green, yellow or red depending on the pollutant levels. The kites are attached to GPS loggers and linked to the real-time data website Cosm.
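
The colour-changing behaviour described sounds like simple thresholding of each sensor reading; here is a sketch of that logic in Python (the project itself uses Arduino code, and the threshold values below are invented):

```python
# Illustrative traffic-light logic for a pollution-sensing kite: map a raw
# sensor reading to green/yellow/red. Thresholds are invented; the Float
# project's Arduino modules will use their own calibrations.
THRESHOLDS = {
    "carbon_monoxide": (9.0, 30.0),   # ppm: below first value green, above second red
    "particulates": (35.0, 75.0),     # micrograms/m3
    "voc": (0.3, 0.5),                # sensor-specific units
}

def led_colour(pollutant: str, reading: float) -> str:
    low, high = THRESHOLDS[pollutant]
    if reading < low:
        return "green"
    if reading < high:
        return "yellow"
    return "red"

print(led_colour("particulates", 50.0))   # yellow
```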

The project is designed by students Xiaowei Wang from Harvard’s Graduate School of Design and Deren Guler from Carnegie Mellon and is designed to involve local residents in data collection. The project relies on public funding and is still raising money through Kickstarter, a website devoted to creative projects and to obtaining funding for them (The Float project on Kickstarter). The project also has funding from the Black Rock Arts Foundation and the Awesome Foundation.

The project has generated a lot of interest on the Web:

Fighting China’s Pollution Propaganda, with Glowing Robot Kites For the People

Pollution-detecting kites to monitor Beijing's air quality
Glowing Pollution Sensor Equipped Kites Replace Beijing's Stars
Kickstarter Project Plans to Measure Beijing Pollution Using Kite Sensors

Only a couple of comments and an expression of interest in the results really.


The project is undoubtedly part of the growing and, in my view, superb trend towards more inclusive community or participatory science (choose whichever term you prefer; Guler uses citizen-science). The ideal of getting local communities involved in data collection, as well as involving them in all aspects of the research process, is an excellent way to raise awareness of an issue as well as to educate people about the scientific approach and its problems and potentials. The Float project has involved local communities, young and old, from the start, with workshops in Beijing and involvement in the design of the kites. In terms of how to organise a community-based, participatory science project it is one that I will advise my students to look at. It is just a shame that descriptions of the project veer from highlighting the science to highlighting the arts aspects, as if the two are, or need to be, distinct. It should also be remembered that this project, like any project involved in monitoring pollution, is entering the political as well as the scientific arena. Involving local populations is a political act (as is their agreement to be involved), as much as the monitoring of pollution by the American Embassy or the siting of monitoring stations by the Chinese authorities. The local is as political as the national or international, but the political nature of the act does not necessarily mean the data are politically biased, only that data collection is for a purpose.

As with most community-based projects, however, there is the issue of belief, trust or confidence in the data collected. These projects do tend to illustrate quite nicely the continuing divide between the ‘specialist’ or ‘expert’ and the ‘public’ (I would say amateur, but much of British science in the nineteenth and early twentieth century only developed because of amateurs!) The expert has been trained and accepts certain methods as being appropriate for data collection. Control and standardization are essential in ensuring what is termed ‘intersubjective communication’ between researchers – basically it means ‘I know what you did because that is how I was trained to do it, so I trust your data as being real’. Guler seems to downgrade the status of the data collected even before the project really begins by stating:

‘We’re trying to interact with people on the street and see what they’re trying to do with the information they see. I don’t plan to argue that this is the most accurate data because there are many potential reasons for differences in air quality reports. We want to just keep it up, upload the data, and focus on that more after we come back’.

My impression is that this statement is a great get-out clause for ‘official’ monitoring, be it by the Chinese authorities or atop the American Embassy. I wouldn’t be so pessimistic. The aims of the project in terms of improving public understanding of air pollution and its impact on health, and the visualization of pollution through the kites, are all excellent and likely to be successful. The data collected are also of value. The ‘official’ pollution monitoring sites probably conform to national or international standards for static sites in terms of equipment and monitoring periods. The kite data do not necessarily provide data comparable to these sites. The kites are mobile and collect data on levels that can be spatially referenced (I assume in four dimensions). They provide a different perspective on atmospheric pollution, as a spatially varying phenomenon, something the official monitoring sites cannot provide. It could even be argued that the kite data provide information on pollution as experienced by the population (although the population is unlikely to move across the sky at the height of the kites!) The important thing to remember is that there is not one, single correct measure of atmospheric pollution; there are merely different representations of atmospheric pollution. The official static sites have the advantage of clearly defined protocols that ensure the data or information they collect is immediately comparable with data or information collected at similar monitoring sites globally. The Float project is generating a different and novel set of data or information. This may require a different approach to thinking about the information and its interpretation (Guler seems to suggest this with some hints at triangulation of trends) and to how confidence or belief in the information is assessed, either qualitatively or quantitatively. I will be very interested to see what form the results and interpretation take. Good luck with the project!

Institute of Hazard, Risk and Resilience at the University of Durham

An extremely useful website is that of the Institute of Hazard, Risk and Resilience at the University of Durham. They have just published their first on-line magazine, Hazard Risk Resilience, which outlines some key aspects of their research and is well worth a look (as is their blog, now linked at the side of my blog). In addition, the site contains podcasts on aspects of hazards.


An important research project for the Institute is the Leverhulme-funded project on ‘Tipping Points’. Put simply, ‘tipping points’ refer to a critical point, usually in time, when everything changes at the same time. This idea has been used in describing and trying to explain things as diverse as the collapse of financial markets and switches in climate. The term ‘tipping point’ (actually ‘tip point’ in the original study) was first used in sociology in 1957 by Morton Grodzins to describe the ‘white flight’ of white populations from neighbourhoods in Chicago after a threshold number of black people moved into the neighbourhood. Up to a certain number nothing happened; then suddenly it was as if a large portion of the white population decided to act in unison and they moved. That this was not the result of co-ordinated action on the part of the white population suggested that some interesting sociological processes were at work. (Interestingly, I don’t know if the reverse happens or if research has been conducted into the behaviour of non-white populations and their response to changing neighbourhood dynamics.) Since about 2000 the use of the term tipping point has grown rapidly in the academic literature, a lot of the use being put down to the publication in 2000 of ‘The Tipping Point: How little things can make a big difference’ by the journalist Malcolm Gladwell (who says academics don’t read populist books!)
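
The mechanics of a tipping point can be illustrated with a toy threshold model in the spirit of Grodzins (and later threshold models of collective behaviour), not taken from the Durham project: each individual acts once the fraction of others already acting exceeds their personal threshold, and removing a single low-threshold individual can stall the whole cascade.

```python
# Toy threshold-cascade model of a 'tipping point': each agent acts once
# the fraction already acting reaches that agent's personal threshold.
def cascade(thresholds, initial=1):
    """Return how many agents end up acting after the cascade settles."""
    acting = initial
    while True:
        fraction = acting / len(thresholds)
        newly = sum(1 for t in thresholds if t <= fraction)
        if newly == acting:
            return acting
        acting = newly

# 100 agents with evenly spread thresholds: the cascade runs all the way.
even = [i / 100 for i in range(100)]
print(cascade(even))   # 100 - everyone ends up acting

# Remove one low-threshold agent (adding a reluctant one to keep 100 agents)
# and the chain reaction stalls almost immediately: a 'little thing' making
# a big difference to the outcome.
gap = even[:2] + even[3:] + [0.99]
print(cascade(gap))    # 2 - the cascade never takes off
```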

Research suggests that the metaphor of a ‘tipping point’ is a useful one for getting across a lot of the complex and complicated processes and changes that occur in socio-economic, political and physical systems. One focus of research in the project is on trying to assess whether this metaphor does actually describe, quantitatively or qualitatively or both, real properties of systems. Another focus is concerned with exploring how the metaphor becomes an important aspect of the phenomena being researched, even taking on the character of an agent in the phenomena itself. Importantly, the project also considers what it means to live in a world where ‘tipping points’ abound and how important anticipatory understanding is for coping with that world.

Tipping Point by Malcolm Gladwell





Virtual Water: Accounting for Water Use

Following on from my last blog on the discovery of a massive aquifer under part of Namibia, I thought it might be useful to consider a key accounting concept for water resources: virtual water. The term ‘virtual water’ was coined by Tony Allan of SOAS and refers to the invisible water, the water that it takes to produce the food and goods we consume, or as Virtual Water puts it:


Virtual water is the amount of water that is embedded in food or other products needed for its production.’

(Other websites on virtual water include: Virtual Water)
Some of the figures involved are quite amazing. A kilogram of wheat takes 1,000 litres of water to produce, a cup of coffee takes 140 litres, 1 kg of rice takes 3,400 litres and a car takes 50,000 litres of water to produce. You can even work out your own water footprint using the calculator on the site. There is even an app for your mobile! Additionally there is information on national water footprints and, importantly, the idea that virtual water is traded between nations.
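
A toy tally using the per-item figures quoted above gives a feel for how quickly the litres add up; a real footprint calculator, like the one on the Virtual Water site, uses far more detailed product and country data.

```python
# A toy virtual-water tally using the per-item figures quoted above
# (litres of embedded water per unit). Purely illustrative.
VIRTUAL_WATER_LITRES = {
    "wheat_kg": 1_000,
    "coffee_cup": 140,
    "rice_kg": 3_400,
    "car": 50_000,
}

def embedded_water(consumption: dict) -> float:
    """Total litres of virtual water embedded in a basket of consumption."""
    return sum(VIRTUAL_WATER_LITRES[item] * amount
               for item, amount in consumption.items())

week = {"wheat_kg": 1.5, "coffee_cup": 10, "rice_kg": 0.5}
print(f"{embedded_water(week):,.0f} litres of virtual water this week")
# 1,000*1.5 + 140*10 + 3,400*0.5 = 4,600 litres
```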

Mekonnen and Hoekstra (2011) at the University of Twente published a UNESCO report on the issue of virtual water over the period 1996-2005. They divided virtual water into three colours: green, blue and grey. Green water is the water associated with agricultural production, blue water is that associated with industrial production, whilst grey water is the water associated with domestic use. From their analysis they calculated that the global average water footprint for consumption was 1385 m3 per year per capita, with industrialized countries having a footprint of 1250-2850 m3 per year per capita, whilst developing countries had footprints in the range 550-3800 m3 per year per capita. The low values represented low consumption volumes in these countries, whilst the large values represented big water footprints per unit of consumption.

Their key conclusion was that about 20% of the global water footprint for that period was related to production for export. This means that there are large international flows of virtual water, with some countries importing substantial volumes of virtual water (2320 billion m3 over the period). In countries where water is scarce there is a tendency to import large amounts of virtual water in food and agricultural products, saving national water resources for key uses that add greater value, such as industrial production. Tony Allan argued as early as 1997 that the Middle East was one of the first regions to develop this adaptation to resource scarcity. The relatively large volume of international flows of virtual water generates water dependencies which, they suggest, strengthen the argument that issues of local water scarcity need to be considered within a global context.


The significance of this concept for the discovery of the aquifer and its use is that the Namibian reserve has to be viewed within a global context. The development of agriculture and the technical development of the resource are likely to be political decisions and increasingly likely to be geopolitical decisions that have to take into account the regional position of Namibia, the likely trade partners for the virtual water, the geopolitical power of potential partners and the future frictions that could arise as environmental change affects the current international demands and flows of virtual water.

Tuesday, July 24, 2012

Namibian Aquifer: Who Benefits?

A recent BBC article reported on the discovery of a major aquifer in Namibia. The new aquifer is called Ohangwena II and it covers an area of about 70x40 km (43x25 miles). The project manager, Martin Quinger, from the German federal institute for geoscience and natural resources (BGR), estimates that the 10,000-year-old water could supply the current water consumption needs of the region for about 400 years.


The find could dramatically change the lives of the 800,000 people in the region who currently rely on a 40-year-old canal for their water supply from neighbouring Angola. Martin Quinger states that sustainable use is the goal, with extraction ideally matching recharge. The easy (and cheap) extraction of the water under natural pressure is complicated by the presence of a small salty aquifer that sits on top of the newly discovered aquifer. Quinger states that if people undertake unauthorised drilling without following their technical recommendations then a hydraulic short-cut could be formed between the two aquifers, contaminating the fresh water.

In terms of the use of the water he comments that:
‘For the rural water supply the water will be well suited for irrigation and stock watering, the possibilities that we open with this alternative resource are quite massive’.
The EU funded project also aims to help young Namibians manage this new water supply before their funding runs out.
The discovery is a great, potentially life-changing resource for the region, but the question that arises in my mind is: who is going to benefit from this discovery? The current socio-ecological system in the region is attuned to the amount of water available. The availability of more water could change this, but will it be for the benefit of the current population? A key aspect is the last point made about the EU-funded project – the management of the resource by those in the region. The skills required to manage a large water resource are context dependent. They depend on the uses to which that resource will be put. They require technical and resource-allocation skills that presume a certain educational context, embedded within a culture and location. Acquiring these skills takes time, as people go through the appropriate training and gain the experience that helps in managing such a resource. If this expertise does not exist now within the region, then the implication is that external support will be needed and, by implication, paid for.

Another issue is the assumption that the water will be used for improving agricultural production, which I assume (maybe wrongly) means more intensive agriculture. The key questions are then what type of agriculture and what additional resources are required to ensure that it works? Thinking of the whole agricultural system as a complex network of relations, the question really is what network of relations will be overlaid onto the existing agricultural network to ensure the success of the new type of agricultural production. More intensive agriculture implies fertilizers, investment and technical know-how, as well as access to regional, national and international markets so that funds can be extracted from the new produce. Again, is it likely to be the regional population that is able to conjure up the finance, technical knowledge and all the other bits of the network required to develop this new agriculture? In time, the answer might be yes, but will external agencies, such as the government and investors, permit this time before developing the valuable resource?

This problem with development as seemingly envisaged by the project is illustrated in the comment concerning extraction. The implication is that only people with a specific level of technical ability can extract the water. This implies that a system of permits is likely to be implemented and so access to the resource will be controlled and restricted. It also implies that the permits will be allocated to operators able to meet the technical requirements outlined by the project, and if this expertise does not exist within the region then the operators will have to be external contractors. This system is likely to require financing, so value will have to be extracted from the supply of water. To whom will the funding flow and who will pay for it? How will the regional population receive the water and what will the price of the water be? I would like to hope that the EU-funded project will enable the management of the resource by the regional population for the benefit of that population, in the manner that that population sees fit for its own development.

Sunday, July 22, 2012

Disaster Database

EM-DAT is an international disaster database run by the Centre for Research on the Epidemiology of Disasters (CRED). This Emergency Events Database provides data on the location, size and impact of different types of disaster from 1900 to the present (about 18,000 disasters in total and counting). For educational and investigative purposes, a useful feature is the ability to create your own dataset based on your own criteria. You can select regions or countries, specific time periods and specific types of hazard to develop your customized set of data. An interesting aspect of the database is that ‘technological’ hazards are included as well as ‘natural’ hazards, so an impression of different types of hazard can be built up. Hazards are divided hierarchically by generic disaster group (natural or technological), subgroup, main type, sub-type and even a sub-sub-type! It is not as complicated as it sounds! The impact of each hazard is defined in terms of the number of people killed and injured, the number of people made homeless and those requiring immediate assistance after the disaster event, as well as the estimated damage caused by the event.
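As a rough illustration of the kind of custom query the database supports, the sketch below (Python with pandas) filters a downloaded export by hazard type, region and time period. The file name and column names are assumptions for the sake of the example; the actual headers of an EM-DAT export should be checked before running anything like this.

```python
import pandas as pd

# Hypothetical export file and column names -- check your own EM-DAT
# download, as the real headers differ between versions of the database.
disasters = pd.read_csv("emdat_export.csv")

# Example criteria: flood disasters in a chosen region between 1974 and 2003.
subset = disasters[
    (disasters["disaster_type"] == "Flood")
    & (disasters["region"] == "Southern Africa")
    & (disasters["year"].between(1974, 2003))
]

# Summarise impact per year using the measures the database records
# (killed, injured, homeless, affected, estimated damage).
summary = subset.groupby("year")[["total_deaths", "total_affected"]].sum()
print(summary)
```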
There are also a number of pre-prepared maps and graphs of the location of disasters on a global scale and of trends in disasters. The map below shows flood disasters between 1974 and 2003, while the graph illustrates the trend in the number of technological hazards since 1900. This database could be a very useful tool for exploring trends and patterns of disasters through time at different scales. The reasons for these trends may require some thinking: increased reporting of disasters, increasing population growth and the spread of population into hazardous areas, and the growth of vulnerable populations, to name but a few. Without the initial data, though, even identifying the patterns that need explaining would be difficult.

Flood disasters 1974-2003
Trend in the number of technological disasters since 1900


Saturday, July 21, 2012

More Mash-ups: Mapping A Century of Earthquakes

A recent posting on the AGU LinkedIn site drew my attention to a map that plotted all magnitude 4 and above earthquakes that have occurred since 1898. The map in the Herald Sun clearly shows the distribution and the ‘hotspots’ you might expect around the Pacific ‘ring of fire’, as well as some intra-plate bursts of colour that suggest even the interiors of continents are not immune from these hazards.
Although a nice image, the map represents a key trend that I mentioned in an earlier blog: mash-ups. The map was produced by John Nelson of IDV Solutions, a US software company specialising in visualising data. It combines data from the US Advanced National Seismic System and the United States Geological Survey to produce a map that spatially locates each piece of data. IDV Solutions understand the importance and power of such mash-ups, and Deborah Davis published an article in Directions Magazine (25th February 2010) on the importance of mash-ups for security. Although directed at security, the observations in the article are just as useful for trying to understand and manage hazards and the risks associated with them.

Mash-ups provide a means of consolidating data from diverse sources into a single, comprehensible map, in a visual context that has some meaning for the observer. The map produced can be made relevant to the customer or user by ensuring that it contains additional information relevant to their interpretation. A map of landslides combined with topographic data provides a context for helping to understand why the landslides might have occurred. Adding surface geology as another layer improves the context of interpretation for a landslide specialist; adding the road network improves it for a hazard manager. Once data has a context it is easier to spot relationships between phenomena. With a single, common map available to all parties there is a shared basis for discussion and decision-making, and having a common source of reference may even encourage debate. In addition, it may be easier to see where data is lacking and what other data these parties may require to aid their decision-making. The cost-effectiveness of such mapping should not be neglected either: using existing data to produce a new product is very cost-efficient.
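As a sketch of how simple such a mash-up can be to assemble, the snippet below (Python with the folium library) drops earthquake epicentres onto a web basemap, scaling each marker by magnitude. The input file and its columns are assumptions standing in for whatever combined feed (for example the ANSS and USGS data used in the IDV Solutions map) you actually have.

```python
import pandas as pd
import folium

# Hypothetical CSV of epicentres with columns: lat, lon, mag, year.
quakes = pd.read_csv("earthquakes_m4plus.csv")

# A world basemap provides the visual context for the point data.
m = folium.Map(location=[0, 0], zoom_start=2)

for _, q in quakes.iterrows():
    folium.CircleMarker(
        location=[q["lat"], q["lon"]],
        radius=q["mag"],                      # symbol size scaled by magnitude
        popup=f"M{q['mag']} ({int(q['year'])})",
        fill=True,
    ).add_to(m)

m.save("earthquake_mashup.html")
```

Further layers (roads, geology, population) can be added to the same map object in exactly the same way, which is where the interpretive value of a mash-up comes from.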



Media and Hazards: Orphaned Disasters

An interesting blog published in 2010, ‘Orphaned Disasters: On Utilising the Media to Understand the Social and Physical Impact of Disasters’, posted by KJ Garbutt, looks at how the media views, prioritises and ignores disasters, producing what the author calls ‘orphaned disasters’. It has some interesting points to make. The blog has a link to the Masters research undertaken at the University of Durham on which it is based.




Sunday, July 15, 2012

Surface Water Flows and Flooding

The recent ASC report on flooding and water scarcity makes some interesting points about surface water flows, particularly those associated with flood hazard in urban areas. The report states that ‘Every millimetre of rainfall deposits a litre of water on a square metre of land’. Water falling onto a paved, impermeable surface will not infiltrate into the ground, so that volume has to move somewhere. The amount of paved surface is increasing: the report notes that green spaces in urban areas have been paved over, so surface water flows in urban areas are increasing even before the more intense rainfall associated with climate change is considered. The figures cited are that the proportion of paved gardens has increased from 20% in 2001 to 48% in 2011, out of a total garden area of 340,000 hectares.
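The report's rule of thumb (one millimetre of rain is one litre per square metre) makes it easy to put rough numbers on this. The short sketch below combines that rule with the paved-garden figures quoted above; the 10 mm storm depth is my own illustrative assumption.

```python
# Figures quoted in the post: 340,000 ha of garden, 48% of it now paved.
total_garden_ha = 340_000
paved_fraction = 0.48
paved_area_m2 = total_garden_ha * 10_000 * paved_fraction   # 1 ha = 10,000 m2

# Assumed storm: 10 mm of rain, i.e. 10 litres on every square metre.
rain_mm = 10
runoff_litres = rain_mm * paved_area_m2
runoff_m3 = runoff_litres / 1000

print(f"Paved garden area: {paved_area_m2 / 1e6:,.0f} km2")
print(f"Runoff from a 10 mm storm on that area: {runoff_m3 / 1e6:.1f} million m3")
```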

To combat this increase in paved area contributing to runoff, the report suggests that urban creep should be minimized, that sustainable urban drainage systems (SUDS) should be improved to slow down water flows and store water above ground, and that conventional sewers should be maintained or upgraded: all good ideas. Recent floods in urban areas have highlighted the importance of such measures. Paved surfaces permit no storage of water, runoff is almost immediate and, with intense rainfall, the volumes of runoff involved can be huge within a short period of time. Overwhelmed urban drainage systems mean that the water moves rapidly across impermeable surfaces and flows through streets and roadways, using them like predefined river channels. Similarly, when a river bursts its banks the water tends to use the paved, impermeable surface as a routeway for movement. The urban road network provides a convenient substitute for natural channels, giving water a rapid means of moving across an urban area.
A great deal of the potential damage from a flood, and even from flash floods, could be mapped using a detailed digital elevation model (DEM) and a knowledge of past events in an urban area. This will help map out previous routeways that surface flows have used. Future events may be harder to predict: as the urban infrastructure changes and planners take precautions to block or re-route surface flow, the microtopography of the urban area may still be a guide to patterns of surface flow, but other factors will also affect the detailed routes the water takes. The local detail is a bugger for modelling flow patterns.

It will be interesting to see what, if any, use is made of the information about flood damage from the recent floods. There is a great deal of information online from Twitter, as well as local blogs and newspaper accounts, that could tell us how surface water moved through urban areas. The potential for ‘citizen science’, for ordinary people (a horrible term that seems to imply scientists and planners are extraordinary) to contribute to the scientific investigation of flooding, is immense. So is the task of co-ordinating this type of information: collecting and collating it, and judging its quality and usefulness for modelling and understanding urban surface flow. Time, expertise and, potentially, funds are needed for these activities, but by whom is unclear. Once the aftermath of the floods disappears from public view, the chances of funding such work drop dramatically. The need for people, the public (rather than the ordinary: anyone got a better term that isn't condescending?), to be involved is important, however, if some of the recommendations of the ASC report are put into practice. In particular, the emphasis on households undertaking property-level flood protection measures might be enhanced if they were also actively involved in monitoring and in the feedback loop from modelling studies of their local areas. This would not only mean they were better informed about the risks of flooding but also more likely to act in the manner hoped for by planners, if they felt they were an active part of preventing flood damage rather than passive victims of urban flooding.
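On the point about using a DEM to map the routeways surface water is likely to take, the sketch below shows the simplest possible version of the idea: a D8 flow-direction pass over a toy elevation grid, in which each cell drains to its steepest downslope neighbour. The tiny hand-made array stands in for a real lidar-derived DEM, and a serious analysis would use an established terrain-analysis package rather than a loop like this.

```python
import numpy as np

# A tiny synthetic DEM in metres -- a stand-in for a real urban lidar grid.
dem = np.array([
    [10.0, 9.5, 9.0, 8.5],
    [ 9.8, 9.2, 8.4, 8.0],
    [ 9.6, 8.9, 8.1, 7.5],
    [ 9.4, 8.7, 7.8, 7.0],
])

def d8_direction(dem, cell_size=1.0):
    """For each interior cell, return the (row, col) offset of the steepest
    downslope neighbour -- the classic D8 flow-direction scheme."""
    rows, cols = dem.shape
    directions = np.zeros((rows, cols, 2), dtype=int)
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            best_gradient = 0.0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    if dr == 0 and dc == 0:
                        continue
                    distance = cell_size * (2 ** 0.5 if dr and dc else 1.0)
                    gradient = (dem[r, c] - dem[r + dr, c + dc]) / distance
                    if gradient > best_gradient:
                        best_gradient = gradient
                        directions[r, c] = (dr, dc)
    return directions

# Offset of the steepest-descent neighbour for the cell at row 1, column 1.
print(d8_direction(dem)[1, 1])
```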





Saturday, July 14, 2012

Flooding and Development in the UK: Some thoughts on ASC Report

The recent ASC Report provides some interesting insights into current planning practices in the UK in relation to flooding. The report (chapter 2) suggests that climate change is not the only factor increasing flood risk (page 27). It points out that risk changes if the probability of an event occurring changes OR if the consequences of an event alter. The first aspect is meant to cover the purely physical aspects of a hazard, or rather of climate change in affecting flooding, whilst the second is meant to relate to the socio-economic aspects of a hazard. It could be argued that a hazard isn’t really a hazard unless people are involved, so an ‘event’ isn’t really a hazardous event unless there is a vulnerable population; the two aspects may therefore be trickier to separate than it appears. The two may go hand in hand.


Leaving this aside, the report does highlight the need for planning that bears these aspects in mind. The tables on pages 28 and 29 also provide some estimates of the amount of stock at risk from different types of flooding, with 1.2 million properties at risk from river flooding, 230,000 of these at significant risk (a greater than 1 in 75 chance in any given year). The report points out that most floodplain development is within built-up areas that already have flood defences. Continuing development behind these existing defences increases the total value of assets that are being protected. In any cost-benefit analysis this increased value will make future investment decisions easy: keep investing in defences as the value of assets keeps increasing. This means that current flood defences lock in long-term investment, meaning that higher and stronger defences are continually required. The report recognises that this has been known for a while as the ‘escalator effect’.
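It is worth translating the '1 in 75 chance in any given year' definition of significant risk into something more tangible. A quick back-of-the-envelope calculation, assuming independent years (a simplification), gives the chance of at least one such flood over a typical mortgage term:

```python
annual_p = 1 / 75          # annual chance of flooding at 'significant risk'
years = 30                 # roughly the length of a mortgage

# Probability of at least one flood in the period, assuming independent years.
p_at_least_one = 1 - (1 - annual_p) ** years
print(f"{p_at_least_one:.0%}")   # roughly a one-in-three chance
```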

The report also identifies that only a small number of planning applications have been approved when there was a sustained objection from the Environment Agency (EA). This implies that most planning applications meet the EA's requirements for taking flood risk into account, which is a comforting point. However, the report also notes that, in general, local authorities are implementing national planning policy by continuing to build, with protection, in floodplains (page 36 of the report). In addition, most flood risk management policies in plans focus on making development safe once the strategic decision to build in floodplains has been taken (page 37 of the report).

Combined, these points imply that floodplain development will continue and will produce an investment strategy for flood defences that encourages further development in already protected areas, forcing further and more extensive protection of those areas. The report states that the EA has taken a strategic approach to funding structural flood defences, ‘targeting investment towards communities at greater flood risk and with the highest social vulnerability’ (page 40). The justification for this approach, however, is then expressed in terms of an average cost-benefit ratio of 8:1, i.e. for every £1 spent on flood defences there is an expected reduction in the long-term cost of flood damage of £8. This implies that social vulnerability is defined in terms of money, as are any other benefits. It would suggest that assets to which it is relatively easy to attach a monetary value will weigh heavily in these calculations. Again this would encourage development in flood-protected areas, as any additional value from the buildings and infrastructure will increase the cost-benefit ratio and so ensure the continuation of more, stronger and higher flood defences. Whilst there is nothing inherently wrong with this, it does mean that if or when the defences are breached the cost of flood damage will be huge. How is this potentially high-magnitude cost worked into the cost-benefit equations? Is the potential loss or cost discounted, and over what time scale? How does the probability of such a high-magnitude loss change as both aspects of flood risk mentioned above, the physical and the socio-economic, change into the future? Is the potential cost so huge that the defences must always be enhanced into the future no matter what the rate of climate change or the cost? Is the current strategy committing the future to locational inertia of housing, business and infrastructural investment?
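On the question of how a large future loss is discounted, the toy calculation below shows why the time scale matters so much: the further into the future a breach is assumed to occur, the smaller its present value, and so the easier it is for an 8:1 ratio to survive. The £5bn loss and 3.5% discount rate are my own illustrative assumptions, not figures from the report.

```python
def present_value(future_cost, discount_rate, years):
    """Standard discounting of a cost incurred 'years' from now."""
    return future_cost / (1 + discount_rate) ** years

catastrophic_loss = 5_000_000_000      # assumed loss if defences are breached
discount_rate = 0.035                  # assumed annual discount rate

for years in (10, 30, 50):
    pv = present_value(catastrophic_loss, discount_rate, years)
    print(f"Breach in {years} years: present value £{pv / 1e9:.2f}bn")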

Wednesday, July 11, 2012

Future Flood Risk in England: Committee on Climate Change Report

The recently published report ‘Climate change – is the UK preparing for flooding and water scarcity?’, an Adaptation Sub-Committee (ASC) progress report of the Committee on Climate Change, could not have asked for better timing really. The ongoing unseasonal rain (and by rain I mean torrential downpours rather than light summer showers) has brought the issue of flooding to the forefront of many people’s minds, not least those inundated with dirty brown flood waters. The headline-grabbing pieces of the report are:


• floodplains have been developed faster than anywhere else in England over the past decade; one in five properties in floodplains were in areas of significant flood risk

• the current ‘build and protect’ policy will leave a legacy of rising protection costs; current levels of investment in flood defences will not keep pace with increasing risks, with the number of properties at significant risk of flooding doubling by 2035

• increasing investment in flood defences and property protection measures could halve the number of properties at risk by 2035 (relative to current levels)

The recommendations of the report in relation to flooding (in the executive summary and helpfully laid out in a box on page 17) sound reasonable, but they also need to be looked at in the light of the government’s recent modernization of planning policy (the National Planning Policy Framework), which highlights the need to undertake major and sustainable development in the UK in the near future. The ASC report advises that, for flooding, robust and transparent implementation of planning policy in flood risk areas is required and that local authorities should consistently and explicitly take into account the long-term risk of flooding when deciding the location of new developments (page 17 of the report). In addition, there should be support for sustained and increased investment in flood defences by both private and public sources, as current spending will not keep pace with the increasing risk. Failing increased expenditure, ways to manage the social and economic consequences of increased flood frequency should be identified. Lastly, there should be increased enabling of the uptake of property-level measures to protect against floods and encouragement of greater use of sustainable drainage systems to manage surface water.

The National Planning Policy Framework highlights the importance of sustainable development, particularly of housing, and the importance of community involvement. The ASC report even recognises this on page 12 when it states that ‘Development in the floodplain may be a rational decision in cases where the wider social and economic benefits outweigh the flood risk, even when accounting for climate change’, but then adds that in a review of 42 recent local development plans there was mixed evidence of transparency from local authorities in terms of locating development on areas other than floodplains and in including the long-term costs of flooding associated with climate change. So no consistent approach to the issue yet.

So how do you decide (and who decides) whether development on the floodplain is worth it or not? The decision seems to be located at the local authority level, where contradictory messages are being delivered: develop sustainably (whatever that means) and don’t develop there unless you absolutely have to. There is also an assumption that the decision can be rationalised, presumably using cost-benefit analysis. This puts the valuation of property, the environment, business and infrastructure at the centre of any argument. How does this square with the ‘sustainability’ issues raised in the National Planning Policy Framework? Even worse, it is not necessarily the current valuation that will be important but the future valuation of both the land and its use, as well as the costs of clearing up flood damage. How do you work this out in a consistent and mutually agreed manner for the whole country for every land-use or potential land-use? Even if the local authorities made their valuations explicit, would every stakeholder agree? What of localism as well? How much input will communities have into the decision-making process if valuation is the central pillar of decision-making for housing? Can they, the communities, compete in any discussion with the complicated economic modelling available to local authorities?

An interesting aspect of the report is the focus on ‘property-level protection measures’, by which it means things such as door guards and airbrick covers, which the report points out require a take-up rate increase of 25-30 times by 2037 to reach all 200,000 to 330,000 properties that could benefit from their use. Although the report discusses these measures, in combination with government investment, as a means of reducing flood risk, I do wonder if this is more evidence of the ‘moral hazard’ argument coming into the discussion. In economic theory this refers to the tendency to take undue risks when the costs of those risks are not borne by the individual (or entity) taking them (a nice discussion of this idea can be found on Wikipedia; I am not averse to using this source if it is well done but wouldn’t advise it for any of my students reading this!). Property-level protection measures seem to throw responsibility for tackling the risk (and increased risk of flooding) onto the property owner. Although not put into these words, does that mean it is their fault? They bought the house there, so the risks are theirs to bear? They have to implement property-level protection measures or else why should anyone else help them, such as insurers or local authorities? Odd when developers are allowed to develop on floodplains and then sell the houses on. Do home buyers have a choice in where to buy if housing development is concentrated in floodplains, or if those are the only locations where they can find houses cheap enough (or in the right price range) for them to buy? Where exactly does responsibility for living in a floodplain lie?



Monday, July 9, 2012

Road Traffic Pollution and Death: Interpreting the Data

A recent report suggests that road traffic pollution causes 5,000 premature deaths a year in the UK, whilst exhaust from planes adds another 2,000 (http://www.bbc.co.uk/news/science-environment-17704116 for a summary; the actual report is a paper in Environmental Science and Technology, a journal for which you need a subscription). The numbers are comparable with those produced by COMEAP (Committee on the Medical Effects of Air Pollutants) in "The Mortality Effects of Long-Term Exposure to Particulate Air Pollution in the United Kingdom", which estimates that air pollution was responsible for 28,000 deaths in the UK in 2008 (the more recent study estimates 19,000 deaths in that year). One interesting statistic from the more recent report is that road traffic accidents caused only 1,850 deaths in 2010, meaning that traffic pollution is a more potent killer.

So what can we make of these figures? The exact number of deaths depends on how you calculate 'premature' deaths. This means you need to extract from the total number of deaths those that would not have happened had it not been for the pollution, which means using life-table analysis to predict survival rates of different age groups. If air pollution improves, for example, you might expect everyone to have an improved survival chance, but this improvement would be greater for young children than for people in their 80s. The children who benefited from the reduction in pollution have to die sometime, so the benefit is not sustained indefinitely. This means that you have a dynamic, continually changing death rate based on a reduction in pollution levels. The COMEAP report suggests that any benefits from reductions in air pollution should be expressed in terms of improved life expectancy or number of life-years gained, but accepts that the 'number of attributable deaths' is a much catchier way of expressing the information.

An interesting read for interpreting the 'deaths' is the appendix 'Technical Aspects of Life Table Analysis' by Miller and Hurley. This short report goes through the technical aspects and assumptions involved in this sort of analysis. Be aware, though, that it does get into the mathematics fairly quickly. Importantly, starting with 2008 as a baseline, you construct age-specific all-cause mortality hazard rates, hi, that act upon age-specific populations, ei. Additionally, the number of viable births into the future is taken to be the same as the 2008 baseline. Changing policies alters the 'impact factors', which differ by age group and time. By altering an impact factor you change the hazard rates and so alter the resulting mortality.
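A stripped-down sketch of the life-table logic described above might look like the following. The age bands, hazard rates and the single uniform impact factor are all illustrative assumptions; in the Miller and Hurley approach the impact factors vary by age group and calendar year, and survivors are propagated forward year by year rather than for the single year shown here.

```python
import numpy as np

# Illustrative age-specific baseline hazard rates h_i (annual probability of
# death) and populations e_i -- not the actual COMEAP inputs.
hazard = np.array([0.001, 0.002, 0.010, 0.050, 0.150])   # five broad age bands
population = np.array([800_000, 750_000, 700_000, 500_000, 200_000])

# An assumed policy 'impact factor': a uniform 2% reduction in hazard rates.
impact_factor = 0.98

baseline_deaths = (hazard * population).sum()
scenario_deaths = (hazard * impact_factor * population).sum()

print(f"Deaths in the baseline year:  {baseline_deaths:,.0f}")
print(f"Deaths under the policy:      {scenario_deaths:,.0f}")
print(f"'Attributable' difference:    {baseline_deaths - scenario_deaths:,.0f}")
```

Run forward over many years, with survivors ageing into higher-hazard bands, this is what produces the shifting annual 'attributable deaths' that the COMEAP report warns are so easy to misread.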


Understanding how the 'deaths' are calculated and the assumptions involved is vital to interpreting the information provided. This tends to be particularly important when, as in this report, the 'deaths' are the end result of mathematical modelling of a data set and a series of key assumptions about the impact of different scenarios. I am not suggesting that the mathematics is wrong; the use of life-table analysis has a long and profitable history in the insurance industry, so the modelling is on a very sound base. The COMEAP report recognises this problem of interpretation (starting on page 13) and knows that there is a trade-off between full accuracy and accessibility. It is also acutely aware that the numbers are open to misunderstanding if the basis of their calculation is not understood. On page 14 of the report, for example, they state of the term 'number of attributable deaths' that:

To emphasize that the number of deaths derived are not a number of deaths for which the sole cause is air pollution, we prefer an expression of the results as “an effect equivalent to a specific number of deaths at typical ages”. It is incomplete without reference also to associated loss of life. The Committee considered it inadvisable to use annual numbers of deaths for assessing the impacts of pollution reduction, because these vary year by year in response to population dynamics resulting from reduced death rates.

In interpreting this type of data it is important to know how it was derived, whether it was modelled and, if so, how, and, just as importantly, the exact technical definitions used for the terms. The alternative is relying on others to interpret the data for you, with all the attendant agendas potentially coming into play as they draw their conclusions.