Sunday, June 27, 2010

Icelandic Volcano: How Much Ash Is Dangerous?


Figure 1: Ash plume from the Eyjafjallajökull volcano

I hate flying. No, that is not quite true: I hate the thought of crashing, of a massive heavy metal object plummeting 30-odd thousand feet to the ground with me in it. I do fly though, as I have to for work and for holidays, so I may not be the person you most want to sit next to on a plane. Having just made it back from a second year field class in Malta when the Eyjafjallajökull volcano erupted, ejecting ash to between 20,000 and 30,000 feet, you can imagine I watched the news with great interest.

Aside from hearing news of the second year Berlin field class who had to trek across Europe to the Belgian coast to get back, my interest was caught by the debate, carried out with more than a hint of repressed anger, between the Meteorological Office, the Civil Aviation Authority, the government and the airlines. The Met Office view is outlined through their press releases (http://www.metoffice.gov.uk/corporate/pressoffice/volcano.html) whilst the BBC archives provide details of the chronology and debate, at least the public aspect of it (http://search.bbc.co.uk/search?go=toolbar&uri=%2F&q=icelandic+volcano and hunt around for something of interest).


Figure 2: Map of the extent of the ash cloud's impact. Source: Meteorological Office

The particular angle I was interested in, however, was how this diverse set of actors came together because of a specific geophysical event and how what was viewed as ‘safe’ changed as the travel chaos unfolded at airports across Europe. Eyjafjallajökull itself could be seen as an actor in its own right with its own spatial extent, temporal behaviour and characteristics such as the size and shape of the ash particles released. The other actors in the drama, the Met Office, the CAA, the government and the airlines, were all entwined in a complex web of relationships that focused on the definition of what was a safe level of ash for flights.

The definition of a safe level was central to everything that happened in late April and early May 2010. Having never experienced such a massive eruption combined with meteorological conditions that pushed the ash plume over the major flight paths across most of Europe, the organisations assigned responsibility for safety fell back on the ‘safe’ position of a zero tolerance level (a little ash was allowed under the standard threshold of a concentration of 200 microgrammes per cubic metre): with no ash you could fly, with any ash above the standard threshold you couldn’t. Given the damage that ash plumes had caused to aircraft engines in the past this seemed a ‘safe’ position.


Figure 3: British Airways engine after a run-in with a volcanic ash plume in 1982. Image: Eric Moody, British Airways

But how did the CAA know this? Advice is provided by the VAACs (Volcanic Ash Advisory Centres – see http://www.ucl.ac.uk/news/news-articles/1004/10041901 for an outline of the global warning system and its history). But how do they know? There may have been 80 incidents since 1982, but the Icelandic eruption was something different because of its geophysical conditions: a continuous stream of ash and meteorological conditions that meant it affected European airspace. The other actors in the network, once the duration of the hazard became clearer, did not passively sit there and accept the CAA advice and the Met Office evidence. BA, for example, undertook a ‘test’ flight through the ash cloud and, emerging safely on the other side, declared they felt there was no danger. Likewise, as travel chaos grew, the airlines questioned the evidence upon which the advice was based. The focus of their attention was the use of modelling rather than monitoring to predict ash cloud movement. Despite the fact that the same kind of modelling underpins the weather forecasts that airlines use, the ash cloud models were heavily criticised for not matching the reality that the airlines’ uninstrumented ‘test’ flights appeared to show.

The definition of ‘safe’, a fixed thing you might think, became a subject of negotiation within the network based on the interests of each of the actors. The details of the zoning of the ash cloud can be found at http://www.metoffice.gov.uk/corporate/pressoffice/2010/volcano/forecasts.html. Black zones have more than 20 times the standard threshold ash concentration (over 4,000 microgrammes per cubic metre), while grey zones have concentrations between 10 and 20 times the standard threshold (2,000-4,000 microgrammes per cubic metre). The standard threshold of 200 microgrammes per cubic metre was used to define the edge of red zones. Each zone had an additional definition associated with it – red zones stated the concentration was as used in official VAAC products; to operate in grey zones airlines had to present the CAA with a safety case that included the agreement of their aircraft and engine manufacturers; black zones were stated to be zones where the required tolerances of engine manufacturers were exceeded.
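As a rough illustration of how those published figures translate into zones, here is a minimal Python sketch. It only encodes the concentration thresholds quoted above; the function name, the ‘clear’ label for air below the standard threshold and the overall structure are my own assumptions for illustration, not anything published by the Met Office or the CAA.

```python
# Minimal sketch: thresholds are the Met Office figures quoted above;
# the function name and the "clear" label are illustrative assumptions.

STANDARD_THRESHOLD = 200   # micrograms per cubic metre (edge of red zones)
GREY_LOWER = 2_000         # 10 x standard threshold
BLACK_LOWER = 4_000        # 20 x standard threshold

def classify_ash_zone(concentration_ug_m3: float) -> str:
    """Return the ash-cloud zone for a given concentration in micrograms per cubic metre."""
    if concentration_ug_m3 > BLACK_LOWER:
        return "black"   # exceeds engine manufacturers' required tolerances
    if concentration_ug_m3 >= GREY_LOWER:
        return "grey"    # flight only with a CAA-accepted safety case
    if concentration_ug_m3 >= STANDARD_THRESHOLD:
        return "red"     # concentration as used in official VAAC products
    return "clear"       # below the standard threshold

print(classify_ash_zone(150))    # clear
print(classify_ash_zone(2_500))  # grey
```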

So is ‘safe’ a fixed term, something that is unaltered by circumstances, by context? The above summary would suggest not. ‘Safe’ levels of ash became a term that could be defined, redefined and negotiated between the actors. Scientific evidence, which you might think could decide the issue, was itself open to debate and discussion. In my next blog about the ash cloud I will look at this negotiation of evidence in more detail.

Tuesday, June 22, 2010

BP Oil Spill: Disasters and Swiss cheese


Source: Charlie Riedel, Associated Press

The explosion on April 20th 2010 and subsequent oil spill in the Gulf of Mexico is a major environmental disaster. Numbers can be trotted out to place it in context but the perception and reality is that this is a catastrophe for everyone; for the Gulf, for the environment, for jobs along the Gulf coast and beyond, for the US government and for BP. There are technical questions about how it occurred and serious concerns about the clean up but there are other more generic issues about risks and hazards that this major environmental disaster highlights.

The catalogue of BP ‘errors’ in procedure has been chronicled in the open in front of a congressional panel. BP are said to have cut corners in well design, cementing and drilling mud, and in the installation of safety devices such as lockdown sleeves and centralizers. The choices BP made produced a route to disaster that, although not a perfect fit, matches the ‘Swiss cheese’ model. The Swiss cheese model was developed by James Reason in 2000 and is based on the idea that in any complex system the route to a disaster is blocked by a series of barriers. These barriers can be procedures, safety equipment, morals – anything that will restrict or constrain the actions of both the people and the natural phenomena involved in the complex system. Reason viewed both the system and an element of randomness as essential to a hazard being realised. With some modifications the same type of model can be applied to the oil spill and BP’s catalogue of errors.




Reason's Swiss cheese model of disasters

The layers of the ‘Swiss cheese’ are the barriers that are meant to prevent the disaster that unfolded; each slice is anything that blocks the trajectory of a disaster, so that could be a procedure, a person or a technical specification designed to prevent a blowout. The holes in the layers are the weak spots, the gaps that allow ‘mistakes’ to be made. Individually, these mistakes may not be an issue: if the other holes aren’t in line then the next barrier stops the trajectory of disaster. Collectively, when all the holes are aligned, disasters occur. The layers can be thick or thin – heavy or light regulation of an industry, for example – and likewise the holes can be large or small, from gaping omissions from safety protocols to tiny, repetitive practices that for years haven’t been an issue because the other holes haven’t been in alignment.
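To make the idea of aligned holes concrete, here is a small, purely illustrative Python sketch – my own simplification, not taken from Reason's work. Each barrier is given a probability of having a hole where the hazard's trajectory passes through it, and a disaster occurs only when every layer fails at once; the barrier names and probabilities below are hypothetical.

```python
import random

# Illustrative sketch of the Swiss cheese idea (my own simplification, not
# Reason's formulation): each barrier is a layer with some probability of
# having a hole where the hazard's trajectory passes through it.

def disaster_occurs(hole_probabilities, rng=random.random):
    """True only if the trajectory finds a hole in every layer at once."""
    return all(rng() < p for p in hole_probabilities)

def estimate_disaster_rate(hole_probabilities, trials=100_000):
    """Monte Carlo estimate of how often all the holes line up."""
    hits = sum(disaster_occurs(hole_probabilities) for _ in range(trials))
    return hits / trials

# Four hypothetical barriers (e.g. well design, cementing, safety procedures,
# response plan), each imperfect but rarely failing together...
print(estimate_disaster_rate([0.10, 0.05, 0.20, 0.10]))   # roughly 0.0001
# ...until corners are cut and every hole widens, when the joint risk jumps.
print(estimate_disaster_rate([0.50, 0.40, 0.60, 0.50]))   # roughly 0.06
```

The point the sketch makes is the one in the paragraph above: individually weak layers rarely line up, but thin every layer and widen every hole and the trajectory to disaster opens up quickly.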

The list of BP ‘errors’, or holes in each layer meant to prevent a trajectory to disaster, is long and seems to be ever expanding. Depending on which reports you read, the mistakes include those in the list below (derived from tampby.com and businessinsider.com).


  • Well design and maintenance:
    April 14th-15th: BP granted permit changes to speed up its over-budget drilling operation in Deepwater Horizon in addition to its existing ‘categorical exclusion’ from 2009. BP allowed to install cheaper, smaller single pipe. Double-lined pipe would offer protection from escaping gas.
    Gaps in pipe segment could have released a blast of gas to the surface
    Lack of O-ring seal could have let a blast of gas up the pipe
    Drilling chief noticed ballooning of the well walls

  • Contaminated cement in capping operation (possibly):
    April 20th : Contractors Halliburton trying to temporarily plug and cap well. Technicians noticed pressure rise that suggested cement not holding. One test showed a ‘very large abnormality’, another test was misread and well declared safe.

  • Alleged BP ‘company man’ over-rule: Despite rising pressure, the process of replacing drilling mud with seawater began, a standard practice if there are no pressure problems. The objections of workers were over-ruled by the BP ‘company man’. The rise in pressure resulted from oil and gas rising in the well.

  • Hesitation in safety procedures?:
    Technicians waited for official approval from BP before turning on blowout preventer. There was no hydraulic pressure when it was switched on. There is debate whether this equipment would have worked anyway in a deepwater well.
    No evacuation of rig ordered despite abnormal test results.

  • Weak initial reconnaissance:
    April 22nd – Remote robot sent to well head – no leak detected

  • No sonic testing:
    BP had no plans to conduct a cement bond log test which uses sonics to identify any weaknesses in cement – source calls it a gold standard test.

  • Fake testing?: An employee has indicated he saw evidence of test results on the blowout preventer being faked.

  • BP response plan: Aside from the obvious problem of having a dead man listed as one of your specialists, BP only had a generic response plan for the Gulf of Mexico, not a specific plan for Deepwater (it was granted an exemption). Delays in getting to survivors of the explosion and the generic plan having a worst-case scenario of only 20,000 bbl are just two examples of problems with the generic plan.

  • Research delays:
    BP spent weeks after the explosion researching how to stop the leak. No research was in place on how to stop a leak at this depth.

So what does this evolving list of errors actually tell us?


If you divide the items above into pre-event, event (simplistically mapped out below) and post-event and apply this to the Swiss cheese model, then it is clear that there were systemic holes in BP’s supposed barriers to such a disaster before the first well was even drilled. On the day of the explosion, further holes emerged in barriers – the safety procedures – that were meant to be in place; finally, after the explosion, the generic nature of the response plan was exposed as inadequate, and more holes appeared as events unfolded. This is just one set of ‘mistakes’; other Swiss cheese figures could be constructed to illustrate others and added to as more information becomes available about the disaster.




[Diagram: BP ‘errors’ mapped onto the Swiss cheese model, divided into pre-event and event stages]

What the Swiss cheese model can’t tell you is why the holes appeared in the layers and why the layers thinned. Greed has been put forward as a motive by newspapers, Gulf residents and politicians. Other oil companies have testified to Congress that they would not have made the same key decisions as BP. So is it that simple? A greedy company cut corners to keep costs low and endangered the environment? Although there may prove to be a large dose of truth in this as the lax protocols and the potential over-ruling of workers by the company are assessed, another set of questions also needs to be asked. Why was BP allowed the exemptions and exclusions? Why were protocols not followed? Why was testing not carried out? These seem to be some of the obvious questions, and ones specific to this incident.


It may be as useful to look at the context of the drilling as much as the detail. BP were undertaking deepwater drilling, a relatively new venture for oil companies. How much of the protocols and systemic behaviour was based on BP’s experience of shallow drilling and the latitude in safety measures and well maintenance that that experience implied? If BP’s past experience was the basis for their practices in the Gulf of Mexico then it would appear that the past, the different drilling context, may not be a useful guide to the dangers of the present novel drilling operations. In other words, in this new context, is knowing the way the system operated in the past – where the holes in the layers were, how thick the layers were – sufficient to ensure that drilling is safe? How have other oil companies translated, interpreted and improved safety protocols, well designs and a myriad of other parameters to take account of the new drilling context? Often, without a disastrous event, the assumptions of operating in a new context are not questioned. Maybe BP’s legacy will be a review by all – companies, government administrations and safety officials – of the new holes in the cheese in this new context.

James Reason has published a book on accidents that develops this type of model.


Environmental Geography - why?

This blog will, I hope, help people to understand what environmental geography is and why it is an important way of looking at the world.

I am a Principal Lecturer in Geography at the University of Portsmouth and in 2009 I, with my colleagues, started BSc and BA undergraduate degree courses in Environmental Geography. Three of us, myself, Brian Baily and Julia Brown, researched the market and felt that what we believed to be the important aspects of environmental geography were not being taught in other courses of the same name (or if they were, it wasn’t immediately clear from course outlines). We feel that environmental geography should encompass both physical and human geography and act as a means of integrating and melding the two substantive fields of traditional geography. I could go on about how environmental geography provides an interface, a means of bridging the arts and sciences, but really this view of geography has been a key focus of the discipline since it developed as a university-level subject back in 1887 with Halford Mackinder at Oxford (the date of his ‘On the scope and methods of geography’ paper delivered at the Royal Geographical Society, RGS; he was appointed Reader in Geography at Oxford within six months of this paper).

Aside from the academic pursuit of the subject, we feel that taking an informed ‘environmental’ perspective on the various issues and problems confronting people on a global and local basis can help in understanding the context of these problems, how they are themselves constructed and, dare we believe, even provide possible solutions to these issues. The last suggestion may be a forlorn hope, but at least if you appreciate why an issue is so complex it may help in trying to understand how different interests have such difficulty trying to solve a problem. All three of us have a particular view or stance on environmental issues; we are not politically neutral and would be wary of anyone who claims otherwise. This does not mean, however, that we do not try to comprehend why others approach, understand or even identify environmental issues differently from ourselves – this is all part of environmental geography.

We feel that it is important to understand both the physical and human processes that underlie environmental geography and drive environmental change and stability, but it is not enough just to understand each part in isolation. The two must be brought together, and the difficulties and complexities of that assimilation of different knowledges recognised. Above all, understanding the environment is as much about politics as it is about science – a key element we felt was not explicitly developed or at the forefront in the course outlines we saw. You can collect all the data about climate change that you want, you can validate the science, but if no-one acts upon it then ‘scientific objectivity’ means very little. Understanding how different systems of knowledge merge and interact is a key feature of understanding the production of environmental geography.

The blog will cover a whole range of topics and I hope to upload new content on a fairly regular basis – once a week at least, or once a fortnight if work interferes! I will divide the blog, initially at least, into History of Environmental Thought, Monitoring the Environment, Environmental Hazards, Environment and Society, and Environmental News. I do not, however, want to be too rigid in how the blog develops – feedback is welcome and essential for me to gauge if there is anyone out there reading this and, if so, what really interests them.

This blog will, I hope, build up into a useful resource for students undertaking geography GCSE, A level and undergraduates as well as informing anyone who is interested in environmental issues in general. The geographic perspective may provide something new to your thinking or it may not, but at least I hope it is useful.

Environmental Disasters

We remember disasters. Pictures stream across our television screen, graphic images of the misery of death and the chaos of destruction. Hundreds, even thousands of people die, their agony captured and rerun in digital formats across the Web. Scientists tell us what happened and why, politicians bemoan the lack of warning and the poor die. Boxing Day 2004, Katrina 2005, Haiti 2010 – just dates and places but there is an immediate recognition of what they refer to.

How can we study these events and, importantly, how can we understand and prevent them? This is a set of questions that has long been a staple part of academic study, and it has a distinctly geographical dimension. There are three main approaches to trying to understand hazards and disasters: the dominant, the developmental and the complex. They could be seen chronologically, with the last being more sophisticated than the first, or they could just be seen as different ways of looking at the same thing. A key thing to bear in mind is that there is usually a distinction between a hazard and a disaster. A hazard is the potential or possibility for damage or harm – the vulnerability to a loss. A disaster, in contrast, is the realisation of that potential.

The Boxing Day tsunami of 2004 and the Haitian earthquake of 2010 were destructive by any measure you care to use. Death tolls of over 200,000 almost outstrip comprehension. Whole cities levelled, communities ripped apart, national economies shattered. How could we study such events? The dominant approach (or behavioural paradigm according to Smith and Petley, 2009) takes what many would call a very rational, scientific view of a disaster. A disaster is the result of an extreme geophysical event. The geophysical event causes the problem, and it is often seen as a matter of luck whether or not people are harmed. Studying how people behave before and during a disaster, understanding their rational decision making and where they are irrational, is the basis for developing management tools for organising populations. Civil authorities tend to view such events as distinct and discrete breaks with normality; times of crisis for which special plans, actions and laws (or lack of liberties) are needed.

In addition, detailed scientific analysis of the geophysical processes that produce extreme geophysical events provides the information for attempting to predict the location and timing of such events. Of course such detailed study requires extensive and often expensive monitoring systems, as well as well-integrated early warning systems and a civil authority able to rationally plan what to do with a population once the sirens sound. Such an approach could be called a ‘technological fix’ approach to disasters. Leave it to the experts and everyone might be saved.

How does this view play out in reality? Let’s take FEMA’s webpages on the National Earthquake Hazards Reduction Program (NEHRP) as an example (www.fema.gov/plan/prevent/earthquake/nehrp.shtm or www.nehrp.gov for the NEHRP webpages). The opening statement highlights the dominance of the dominant view of disasters.

‘The National Earthquake Hazards Reduction Program (NEHRP) seeks to mitigate earthquake losses in the United States through both basic and directed research and implementation activities in the fields of earthquake science and engineering.’
Reading the Strategic Plan for the National Earthquake Hazards Reduction Program (Fiscal Years 2009-2013) (http://www.nehrp.gov/pdf/strategic_plan_2008.pdf), there are three key goals –

A: Improve understanding of earthquake processes and impacts;
B: Develop cost-effective measures to reduce earthquake impacts on individuals, the built environment, and society-at-large;
C: Improve the earthquake resilience of communities nationwide.

Within these goals there are a number of objectives, but each can be seen to be using language associated with the dominant paradigm. For goal A, for example, objective 1 is ‘advance understanding of earthquake phenomena and generation processes’ – basically, do fundamental science into the geophysical phenomenon. Even when people and society are considered, as in objective 3, actions are viewed as being amenable to study using similar rational methods to those used for the geophysical phenomenon. Likewise for goal B, the objectives talk about developing tools for assessing loss and risk, implying that everything can be allocated a number, a quantity for comparison. Even goal C is couched in these terms. Objectives 11 and 12, for example, seek to promote or support the implementation of public and private standards in building codes and policies, whilst objective 10 focuses on developing more comprehensive risk scenarios for planning actions, presumably at an appropriate organisational level.

Seeing disasters as driven by the geophysical event, as crises that need crisis-management responses and as understandable via scientific analysis, usually via quantification of impacts, is not necessarily wrong. It is vital to know what the geophysical event is and how it varies. It is vital to understand how buildings behave in earthquakes and build accordingly. But are people simply rational entities that will be told what to do by authorities? Is it simply a case of knowing more and telling everyone – will this reduce disasters?

In my next blog I will explain how people’s behaviour has been studied and how rational decision making is incorporated into this dominant paradigm.

Environmental Geography - the key questions


Geography is often said to supply the ‘where’ bit of the set of questions ‘how, what, where and why?’ This blog views geography much as another environmental geography blogger does (http://environmentalgeography.blogspot.com/). In this blog geography asks the questions – where is it, why there and so what? This blog adds a bit more. Geography asks what is it, where is it and why there and not somewhere else and then so what? Geography looks at both the static questions of what and where as well as the more dynamic questions about why and so what. By combining these, the static and the dynamic, you get an understanding of not only what is going on but also why.



OK, so in plain English, what does that mean? Take a pollution incident. The first question is: what is it? What is the pollutant? The next question is: where is it? Which bit of the environment is it in, and is that important? A release of sulphur dioxide from coal fires would produce a stream of gas in an urban area. Where it is matters, as the sulphur dioxide could affect human health if concentrations rose high enough. Likewise, if a specific meteorological condition occurred, such as a blocking high pressure system, then smoke and sulphur dioxide could remain in the urban area and concentrations build up to such an extent that some people have difficulty breathing whilst others collapse and die. This is not a random example, as any Londoner over 60 would know. The Great Smog or Big Smoke of Friday 5th to Tuesday 9th December 1952 was the result of the interaction and coincidence of large releases of smoke and sulphur dioxide from low quality coal burnt in power stations and domestic fires and the presence of an anticyclone over London from 4th December. The resulting temperature inversion over London effectively trapped the pollution.








Figure 1: Nelson's Column nearly hidden in the Great Smog

The last question is why there and not somewhere else. Pollution wasn’t uncommon in 1950s London. Low quality coal, a cheap fuel in post-war Britain, had been used before; power stations were not new inventions. Likewise, anticyclones are not an unusual weather phenomenon. So the question is why there and why then? The preceding days had been cold; more coal was being burnt than usual. Diesel fumes added to the usual mix as relatively new buses took over from the recently defunct tram system. Mix in the pollution from industrial Europe that had blown across the Channel in the days before, and the amount of pollution was higher than usual; with not a breath of wind to disturb the stillness of the brown shroud of pollution, all the elements come together to explain the why.



Figure 2: Carrying on in the Great Smog


The why doesn’t necessarily stop there. You could ask: why was cheap coal needed? Why was the tram network removed? Why didn’t the authorities predict the dramatic health problems the smog would produce – an estimated 12,000 deaths in the following weeks and months, mostly among the young, the old and people with pre-existing respiratory problems?


Figure 3: Deaths due to the Great Smog (source for image: Wikipedia)

You could also then explore the so what question. What was the significance of the Great Smog – in other words, does it matter? At a micro scale, every life lost dramatically answers the so what question. The individuals are not just numbers but people who had families, jobs, an existence beyond the point they became on a historic graph. At a national scale, the so what is answered by the Clean Air Acts of 1956 and 1968, a direct outcome of the havoc caused by this pollution incident.