
Friday, March 8, 2013

Haddon Matrix and ‘Black Swans’



The Haddon Matrix is an extremely useful way to express the factors associated with a hazardous event and the changes that need to be effected in the host, the equipment and the environment (both social and physical). I have covered the Haddon Matrix in a previous post, in fact to date the most popular post on this blog. I am not denigrating the Haddon Matrix or its usefulness, but recent publications by Nassim Nicholas Taleb, such as The Black Swan: The Impact of the Highly Improbable (2007, second edition 2010), highlight the potential of unexpected, rare events in systems. Taleb does not believe that effort should be wasted trying to predict these rare events, but rather that robust systems should be devised to avoid the negative impacts of these events. So does the Haddon Matrix help to prevent hazards or accidents when a Black Swan strikes?

The Haddon Matrix tends to focus on specific events and their immediate impact. The ‘classic’ example often seen on the Web is a car accident where there is a clearly defined agent or host, a clearly defined piece of equipment and a fuzzy but often clearly defined environment, at least in the mind of the person who constructs the matrix. The matrix is focused on a particular event, usually one that is well known to the person constructing it. The event is singular and derived from thinking about common scenarios of ‘what ifs’. Importantly, the event is divorced and isolated from its complex context. The event is treated as an individual example of an oft-repeated set, as an individual example of a particular kind of hazard or accident. This means that the contours of the event are relatively well known, its impact limited, and the range of changes that need to be made to the host or equipment clearly demarcated. The event is somewhat simplified by removing it from its context.

Rare events can also be considered within the Haddon Matrix and planned for, but events that have never happened, or are not within the experience of the constructor of the matrix, cannot be considered. A series of events could be dealt with by interlinking matrices, or even by using Reason’s Swiss cheese model of accidents, but each matrix or cheese slice will deal only with a single event, not the interconnected system as a whole, not the complex and potentially unique relations that these rarities activate within the whole system of which the identified event is only a part. In this case, however, the accident or hazard itself is actually a chain or web of events operating in unison under the influence of the rare event. The exact connections in the system give the rare event its character. Given the rarity of the event, can you be sure that when it happens again the system will be connected, or rather interconnected, in exactly the same manner, and that the precautions you took will therefore still apply? As the complexity of the system behind the hazard or accident you are dealing with increases, the possibility that impacts will occur via different connections or pathways is likely to increase as well. A static Haddon Matrix may not be able to cope with the dynamism that a Black Swan generates within a system.

Black Swan events may also imply that there are two classes of hazards or accidents that need to be considered. The first is the hazard that is known about, one that has occurred and recurred again and again with sufficient regularity that its characteristics can be well defined and clearly defined steps taken to prevent its escalation. The second class of hazards or accidents are those that occur so rarely that each instance is a novel and unusual case with its own set of peculiar characteristics. These events are so infrequent that no reasonable plans can be made to prevent them. It is only after they have happened that we can understand why they happened and what aspects of the system were compromised, and then take steps to ensure that the same pathways to failure do not happen again, although the next Black Swan event may be so different as to circumvent our efforts.

If the Black Swan, almost by definition, falls outside the experience of the matrix constructor then is the matrix of any use in these cases? Black Swans may not be predictable but that should not stop attempts to build a robust system to manage impacts. A densely connected system is likely to transmit impacts rapidly from one part to another, maybe along channels or by connections that can be predicted as weak links or pinch points.  Ensuring that there are ‘firebreaks’ in the system, potential break-points in its connectivity, could help prevent a systemic failure even if the exact nature of the rare event is unclear and unpredictable.


Tuesday, February 5, 2013

Antifragility and actor networks


Nassim Taleb’s recent book Antifragile may have some implications for understanding actor networks. Taleb suggests a triad of system types: fragile, robust or resilient, and antifragile. Fragile systems are ones that collapse under stressful events, robust or resilient systems remain relatively neutral in the face of stressful events, whilst antifragile systems respond positively, strengthening under stressful events. Within hazards analysis such a distinction might be very helpful in separating out communities that are vulnerable to hazards, those that hold their own and those that thrive in adversity.
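To make the triad concrete, here is a minimal sketch in Python. It assumes, purely for illustration, that a system's state can be summarised by a single 'capability' score measured before and after a stress event; the scores and the tolerance are invented and are not part of Taleb's formulation.

```python
def classify_response(before: float, after: float, tolerance: float = 0.05) -> str:
    """Label a system fragile, robust or antifragile from its stress response."""
    change = (after - before) / before
    if change < -tolerance:
        return "fragile"       # capability collapses under stress
    if change > tolerance:
        return "antifragile"   # capability improves under stress
    return "robust"            # capability holds roughly steady

# Three hypothetical communities hit by the same flood event
for name, before, after in [("A", 100, 60), ("B", 100, 98), ("C", 100, 120)]:
    print(name, classify_response(before, after))
# A fragile / B robust / C antifragile
```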

 

Actor network theory, as outlined in an earlier blog, is a very useful method for mapping actants (human and non-human), their relationships and how these relationships operate in changing contexts. Leaving aside the complicated and sometimes competing definitions and deep conceptual issues of this approach, there is much in the simple drawing of nodes and relations that could help in identifying the basis of antifragile behaviour, as illustrated in the simple network below.
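As a complement to a drawn diagram, the same mapping can be sketched in a few lines of code. This is a minimal illustration assuming the networkx library; the actants and relations are invented stand-ins for the supermarket example discussed below.

```python
import networkx as nx

G = nx.DiGraph()
# Human and non-human actants alike are simply nodes
G.add_edge("supermarket", "farmer", relation="price contract")
G.add_edge("supermarket", "haulier", relation="transport contract")
G.add_edge("farmer", "vegetables", relation="produces")
G.add_edge("haulier", "vegetables", relation="moves")

# Degree centrality gives a first, crude reading of which actants
# are best placed to align and co-ordinate the network
print(nx.degree_centrality(G))
```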

 

Actants in the network may try to align and co-ordinate the network of relations to produce the outcome that they desire. Supermarkets put pressure on farmers to produce vegetables for them, controlling the prices asked for vegetables, the transport available for vegetables and even finances by tying farmers into specific contracts. In other words, the supermarkets are key actants who have extended their co-ordination and alignment of the network in such a manner as to virtually control how it operates. But is this network fragile or not?

 

Questions of fragility and antifragility can be answered only when the network is stressed, only when an event causes disruption. The nature of such events will vary with the nature of the network; event characteristics that cause network disruption will always be context dependent. This means that it may not be possible beforehand to predict the fragility or otherwise of a network. It is only when under stress that parts of that network may buckle or may develop novel means of relieving or even using the stress to strengthen the network.  Likewise, events can be propagated through the network in a variety of ways, so although one seemingly similar event may point to a stress point or relationships in the network, once that stressed node is 'fixed', the next event may pick out and illuminate another, different stress point. 

 

Antifragile behaviour can result if an actant can exploit the stress within the network to ensure that their vision or goals for the network are increasingly likely after the disruptive event. This realignment or co-ordination of the network could result from taking over the function of other nodes or exploiting a relationship that enables an actant to more deeply embed the relationships it needs to achieve its ends, as in the figure. A particularly bad year for crops due to drought, for example, could provide an opportunity for farmers who invested in irrigation methods to dictate prices to major suppliers or to cheaply buy up the land of farmers who did not invest in irrigation. The relationships and nodes existed before the disruptive event, but the multiple impacts (or the multiple manners in which the event plays out in the network) open up a range of opportunities for antifragile actants. It is important to note that antifragility is only definable in relation to the disruptive event (or the multiple manifestations of that event). Similarly, antifragility is only noticed if actants exploit the disruption to improve or enhance their own position and power (expressed through alignment and co-ordination of the network).

 

It is through actions within the network of relationships that antifragility and fragility are expressed. It may be possible to begin to identify some general properties of actants and relationships that may enable antifragility, but it is only through the expression of these properties in actants' actions during and after a disruptive event that such properties will be identified as important. Future blogs will begin to characterise such network-based properties by exploring the response of actants to specific disruptive events.

 

Thursday, November 1, 2012

Mapping Hurricane Sandy

I am sure it is not news to a lot of people that the path, destruction and emergency facilities for Hurricane Sandy have been mapped. The BBC provides a map of damage and emergency facilities (http://www.bbc.co.uk/news/world-us-canada-20154479). Likewise, other news organisations provide similar information to readers, but these maps tend to be fairly static in providing a single set of information. Although you can zoom and change scale on the maps, they have a relatively low level of interactivity. These maps also rely on information being provided to the organisation from another source.

Integrating data sources and providing maps of information that is of use to a range of users is another, more difficult task. One of the key players in online mapping has been Google Interactive Maps (http://google.org/crisismap/sandy-2012). The map for Hurricane Sandy is produced under the umbrella organisation of Google Crisis Response (http://www.google.org/crisisresponse/), a Google project that collaborates with NGOs, government agencies and commercial organisations to provide information on things such as storm paths and emergency facilities. Integrating these data sources to provide spatially located information of use to responders, locals affected by the hazard, interested news readers and authorities involves a clear understanding of the requirements of each target audience and clear planning of the nature of information presentation for each of these. It is worth having a look at the interactive maps to assess for yourself whether this complex task has been achieved. In particular, look at the type of information provided about particular resources, the nature of the resources mapped, the scale or level at which this information is displayed and the part of the audience you believe this information will be of use to (why you think so is the next question).

It is also useful to compare this structured information with the less structured flow of information from Twitter (http://www.guardian.co.uk/news/datablog/2012/oct/31/twitter-sandy-flooding?INTCMP=SRCH). This analysis by Mark Graham, Adham Tamer, Ning Wang and Scott Hale looked at the use of the terms 'flood' and 'flooding' in tweets in relation to Hurricane Sandy (see also the blog at http://www.zerogeography.net/2012/10/data-shadows-of-hurricane.html). Although they were initially assessing whether there was a difference between English-language and Spanish-language tweets about the storm, their analysis pointed out that tweets were not that useful at providing information about the storm at a spatial resolution smaller than a county (although it isn't clear, to me at least, if they were mapping the location of the tweets or the contents of the tweets - in some cases the latter might provide more detailed spatial information on flooding and its impact, but would require extraction and interpretation from the tweet itself). This lower spatial resolution and the unfiltered, personal, subjective and unco-ordinated nature of this information source mean that it is more difficult to quickly translate into information that is 'useable' by other audiences. Mark Graham is a researcher at the Oxford Internet Institute (http://www.oii.ox.ac.uk/) and his webpage (http://www.oii.ox.ac.uk/people/?id=165) and blog are definitely worth a look for anyone interested in mapping and internet and mobile technologies.

 

Monday, October 29, 2012

L’Aquila and Legal Protection for Scientists

Charlotte Pritchard’s recent BBC article (http://www.bbc.co.uk/news/magazine-20097554) raises an interesting question – should scientists stop giving advice and, if not, should they have professional indemnity insurance? This cover, as a lot of eager insurance websites will tell you, is designed to protect professionals and their businesses if clients (or a third party) make a claim against them because they believe that they have suffered loss due to non-performance, breach of contract or professional negligence, or all of the above. Insurance cover is up to a maximum limit outlined in the policy and, presumably, based on the losses for similar types of professional activity in the past and the likelihood or probability of a claim being made against a particular type of professional. Such policies are standard tools of legal protection for architects, engineers, business consultants, insurance brokers (ironic?), solicitors, accountants and independent financial advisers. Pritchard points out that even the Met Office have a professional indemnity self-insurance fund in case a forecaster fails to predict a flood that results in the loss of life (how things have moved on since Michael Fish!).
Transferring this type of policy into the academic realm is not unusual; several of my colleagues have such policies when they undertake consultancy work, and a lot of universities' commercial branches offer such cover. A key question to ask is whether the nature of the information being provided is the same for all these professions, or whether scientific information is of a different type. Is it the information that is the same, or is it the intended use of that information that requires the provider to have legal protection? If I were an engineer advising on a building project, the audience – the investors, the builders, etc. – employs me to ensure that the result is satisfactory: the building stays up (simplistic, but you get the idea). There is a definite, time-limited and clearly defined outcome that my advice is meant to help achieve. Is this the case for scientific advice about the possibility of a hazardous event? Is the outcome clearly defined, or is there variability in the expectations of the audience and those of the information provider? Aren’t experts in both cases offering their best ‘guesses’ given their expertise and standards in specific areas?
The development and (by implication from Pritchard) the almost essential nature of legal protection for people giving advice tell us a lot about current attitudes or beliefs about science and prediction. Pritchard quotes David Spiegelhalter, Professor of Public Understanding of Risk at the University of Cambridge, as stating:

“At that point you start feeling exposed given the increasingly litigious society, and that's an awful shame…. It would be terrible if we started practising defensive science and the only statements we made were bland things that never actually drew one conclusion or another. But of course if scientists are worried, that's what will happen."

The belief that science can offer absolute statements concerning prediction underlies the issue at L’Aquila. Despite the careful nature of the scientific deliberations, the press conference communicated a level of certainty at odds with the understanding of seismic activity and with the understanding of the nature of risk in seismology. The belief that seismic events are predictable in terms of absolute time and location is at odds with what is achievable in seismology. By extension, this view assumes that scientists understand the event and how it is caused, and that understanding causation leads to accurate prediction. This ignores the level and nature of understanding in science. Scientists build up a model, a simplification of reality, in order to understand events, the data that they collect. This model is modified as more information is produced, but it is never perfect. The parts of the model are linked to one another by the causes and processes scientists believe are important, but these can be modified or even totally discarded as more events, more information, are added. So it is feasible to understand, broadly, how seismic events occur without being able to translate this understanding into a fully functional and precise model of reality that can predict exactly when and where an earthquake will occur.

If scientists are held legally to account for the inexact nature of the scientific method then there are major problems with any scientist wanting to provide any information or advice to any organisation.

Communicating the inexact nature of our understanding of reality, however, is another issue. If the public and organisations want accuracy in predictions that scientists know is impossible, then the ‘defensive science’ noted by Spiegelhalter will become the norm in any communication. Bland statements of the risk of an event will be provided and, to avoid blame, scientists and their associated civil servants will always err on the side of caution, i.e. state a risk level beyond the level they would state to colleagues. Even this type of risk communication carries its own risks – stating it will rain in the south of England on a bank holiday could deter visitors to the seaside, and when that happens couldn’t businesses in coastal resorts sue or provide their own information (Bournemouth launches own weather site - http://news.bbc.co.uk/1/hi/england/dorset/8695103.stm)? If the reports conflict, then who should the public believe?

Bland science implies communication that scientists perceive to be of least risk to them personally. This could vary from person to person and from institution to institution so the level of ‘risk’ deemed acceptable to communicate as ‘real’ to the public will begin to vary.

There is no easy answer to this issue, and whilst there isn’t one, legal protection sounds a reasonable way to go if you want to make your scientific knowledge socially relevant. It may, however, be worth thinking about the ideas scientists try to transmit as messages. Three simple questions then spring to mind: what is the transmitter, what is the message and what is the audience? The scientist (transmitter) will have their own agenda, language and views on the nature of the message. The message itself will be communicated in a specific form along specific channels, all of which can alter its original meaning or even shape its meaning. Likewise, the audience is not a blank, passive set of receivers – they have their own views and agendas and will interpret the message as such. More time spent understanding how the scientific message is communicated may help to ensure that the message is interpreted by the audience(s) in the way the scientist intended.

Wednesday, August 22, 2012

Global Death Toll From Landslides

A recent paper by Dave Petley in the journal Geology takes data from across the globe to quantify and map the spatial variation in deaths caused by landslides. The data cover non-seismic landslides between 2004 and 2010: over 32,000 deaths from over 2,500 landslides. A number of ‘hotspots’ are identified, including Indonesia, the southern and eastern coastal regions of China and central China (Sichuan Basin). The paper is useful in identifying recent patterns in deaths and so, by implication, landslide risk. Dave Petley suggests that:


"Areas with a combination of high relief, intense rainfall, and a high population density are most likely to experience high numbers of fatal landslides"

For information on the paper go to the BBC report or to Dave's landslide blog.

Wednesday, July 25, 2012

Institute of Hazard, Risk and Resilience at the University of Durham

An extremely useful website is that of the Institute of Hazard, Risk and Resilience at the University of Durham. They have just published their first online magazine, Hazard Risk Resilience, which outlines some key aspects of their research and is well worth a look (as is their blog, now linked at the side of my blog). In addition, the site contains podcasts on aspects of hazards.


An important research project for the Institute is the Leverhulme-funded project on ‘Tipping Points’. Put simply, ‘tipping points’ refer to a critical point, usually in time, when everything changes at the same time. This idea has been used in describing and trying to explain things as diverse as the collapse of financial markets and switches in climate. The term ‘tipping point’ (actually ‘tip point’ in the study) was first used in sociology in 1957 by Morton Grodzins to describe the ‘white flight’ of white populations from neighbourhoods in Chicago after a threshold number of black people moved into the neighbourhood. Up to a certain number nothing happened; then suddenly it was as if a large portion of the white population decided to act in unison, and they moved. That this action was not the result of co-ordinated action on the part of the white population suggested that some interesting sociological processes were at work. (Interestingly, I don’t know if the reverse happens or if research has been conducted into the behaviour of non-white populations and their response to changing neighbourhood dynamics.) Since about 2000 the use of the term tipping point has grown rapidly in the academic literature, a lot of the use being put down to the publication in 2000 of ‘The Tipping Point: How Little Things Can Make a Big Difference’ by the journalist Malcolm Gladwell (who says academics don’t read populist books!).
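The mechanism Grodzins described is easy to caricature in code. In the toy model below every number is invented: each original household tolerates newcomers up to a personal threshold, each departure frees a house for a newcomer, and so each departure raises the newcomer fraction for those who remain.

```python
N = 100
thresholds = [0.05 + 0.45 * i / N for i in range(N)]  # each household's tolerance
incomers = 10                                         # initial newcomer households
stayers = list(thresholds)

while True:
    frac = incomers / (incomers + len(stayers))
    leavers = [t for t in stayers if t < frac]
    if not leavers:
        break
    stayers = [t for t in stayers if t >= frac]
    incomers += len(leavers)  # vacated houses are taken by newcomers
    print(f"newcomer fraction {frac:.2f} -> {len(leavers)} households leave")
```

Below the tip point nothing happens; just past it, each round of departures triggers the next and the whole neighbourhood flips.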

Research suggests that the metaphor of a ‘tipping point’ is a useful one for getting across a lot of the complex and complicated processes and changes that occur in socio-economic, political and physical systems. One focus of research in the project is trying to assess whether this metaphor actually describes, quantitatively or qualitatively or both, real properties of systems. Another focus is concerned with exploring how the metaphor becomes an important aspect of the phenomena being researched, even taking on the character of an agent in the phenomena itself. Importantly, the project also considers what it means to live in a world where ‘tipping points’ abound and how important anticipatory understanding is for coping with that world.


Saturday, July 21, 2012

Media and Hazards: Orphaned Disasters

An interesting blog published in 2010, ‘Orphaned Disasters: On Utilising the Media to Understand the Social and Physical Impact of Disasters’, posted by KJ Garbutt, looks at how the media views, prioritises and ignores disasters, producing what the author calls ‘orphaned disasters’. It has some interesting points to make. The blog has a link to Masters research undertaken at the University of Durham, on which the blog is based.




Monday, July 9, 2012

UK Real-time Flood Alerts Online - Using Information in Novel Ways

A BBC report on 6th July (http://www.bbc.co.uk/news/technology-18740402) informs its readers of the online launch of a real-time flood alerts map developed by Shoothill, a Shrewsbury-based company, which uses data from the Environment Agency's network of monitoring sites. Users can zoom into the map and see flood alerts and warnings issued by the Environment Agency within the previous 15 minutes.


The site is worth a visit but it does beg the question, particularly as the unseasonable weather continues in Britain and elsewhere – what does this company add to the existing EA site that makes it more useful? The EA flood warning front page (http://www.environment-agency.gov.uk/homeandleisure/floods/31618.aspx) shows a map of Britain that you can click on by region, after which text information on flood warnings, including locations, is provided. Clicking further through the individual warning locations provides more detailed information. The Shoothill site provides the same information if you click on the symbol on the map.

The answer seems to be that the Shoothill site provides the information visually linked to a map. Is this such an advance? It seems to be, and it indicates a key component of using the Web – the concept of mash-ups. Amazon and Google take a similar view of the flexibility of information in their Associates programmes – increasing revenues by allowing specialists to access databases and the facilities to purchase goods through links to Amazon and Google sites.

For Shoothill, the data is provided by the EA but the use to which it is put, and the value added by that novel use, is provided by Shoothill. Locating the flood warnings on a map may seem obvious, but it takes specialist skills and time to do this, particularly in being able to update the information in real time. Shoothill uses the existing information in an innovative way, adding value to the data in terms of how people can use and interpret it. Such innovation would not be possible without access to that information. This may seem like an odd view of data and information but, within the Web environment, the value of information does not necessarily lie in keeping it the private and exclusive property of one company or organization. The value of information can be released or expanded by allowing others to access it and to use it in a manner that may not have been envisaged by the information generators. Both parties can gain, as Amazon and Google have already figured out!
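As a flavour of how little code a basic mash-up needs, here is a minimal sketch. The endpoint and field names follow the Environment Agency's later flood-monitoring API as I understand it, but treat both as assumptions to be checked against current documentation; Shoothill's actual implementation is not public, so this is emphatically not their method.

```python
import requests

# Assumed endpoint: the EA's flood-monitoring feed of current warnings
URL = "https://environment.data.gov.uk/flood-monitoring/id/floods"
items = requests.get(URL, timeout=10).json().get("items", [])

# The 'value added' step: filter and reshape the raw feed for a new audience,
# here keeping only the more severe warnings (lower severityLevel = more severe)
severe = [(w.get("severity"), w.get("description")) for w in items
          if w.get("severityLevel", 4) <= 2]
for severity, place in severe:
    print(f"{severity}: {place}")
```

Plotting each warning's coordinates on a zoomable map is then the only extra step separating this from a crude version of the Shoothill idea.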

Tuesday, April 3, 2012

The Two-Tier Haddon Matrix

An interesting extension and alternative to the Haddon Matrix is suggested by Mazumdar et al. (2007) (http://www.ciop.pl/21107). They are concerned with aiding the understanding and prevention of operational hazards at a large construction site. The standard Haddon Matrix could be used firstly for analysing a hazard or disaster and, secondly, for identifying how to prevent it – a two-tier structure. In the first matrix, the pre-event consists of risk build-up, followed by the event itself and then the consequences, whilst in the second matrix there would be pre-event risk reduction, event prevention and consequence minimization.




To help understand both these matrices, they also suggest that ‘fish-bone’ diagrams might help to identify and put into context specific actions and behaviours, to help understand both how the event happens and how it might be prevented or at least its impact minimized. In some ways this is similar to following a scenario through the Swiss-cheese model outlined in an earlier blog. The higher up the main arrow an action sits, the earlier it occurs in the build-up to an event or in the event and post-event sequence of actions. Early prevention stops the sequence of events occurring in the first place.

Each of the points made in the fish-bone diagrams and in the matrices can be assigned a reference code that relates that point to a specific event or action. So A1, for example, could be the initial decision of a person not to follow a particular minor safety procedure; A2 is then the event that results from this; whilst C1 could be the supervisory environment that permits such lax practices. This breakdown of events and actions for the pre-, during- and post-event phases can be carried out along with the associated preventative measures in the second tier of the matrix that would stop these events occurring.


Using this reference code they then build up a cybernetic analysis of the problem (see their paper for the worked example). Leaving aside the mathematical analysis of the relationships involved in linking the events/actions together, they do provide an alternative way to look at an accident or hazard. The important point is that they identify positive and negative feedback loops in the accident or hazard, the nodes, and are able to link these loops together to form the overall accident or hazard and its outcomes. Using this sort of diagram it is possible to identify how interconnected certain events or actions are, which events or actions provide bridges between feedback loops, and which nodes in the network would be most effective to tackle in terms of disrupting, or easiest to control, the occurrence of the event or hazard.
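The reference codes lend themselves to exactly this kind of network treatment. Below is a minimal sketch, with invented codes and links and the networkx library assumed, showing how feedback loops and heavily connected nodes can be pulled out automatically rather than by eye.

```python
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("C1", "A1"),  # lax supervision -> safety procedure ignored
    ("A1", "A2"),  # procedure ignored -> resulting incident
    ("A2", "C1"),  # near-misses normalised -> supervision stays lax (feedback)
    ("A2", "B1"),  # resulting incident -> escalation
])

print("feedback loops:", list(nx.simple_cycles(G)))  # e.g. [['C1', 'A1', 'A2']]
# The most connected codes are candidate points for disrupting the loop
print("most connected:", sorted(G.degree, key=lambda kv: -kv[1])[:2])
```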



Friday, September 10, 2010

BP Oil Spill: Content of the Accident Investigation Report

In my previous blog I looked at the context of the report, in this blog I want to look at the content of the report (report available at http://www.bp.com/sectiongenericarticle.do?categoryId=9034902&contentId=7064891). The investigation identified eight key causes that combined to produce the incident. The eight are:
  • The annulus cement barrier did not isolate the hydrocarbons
  • The shoe track barriers did not isolate the hydrocarbons
  • The negative-pressure test was accepted although well integrity had not been established
  • Influx was not recognised until hydrocarbons were in the riser
  • Well control response actions failed to regain control of the well
  • Diversion to the mud gas separator resulted in gas venting onto the rig
  • The fire and gas system did not prevent hydrocarbon ignition
  • The blowout preventer (BOP) emergency mode did not seal the well
For each of these causes some articles assign blame to BP and its contractors (e.g. BBC report - http://www.bbc.co.uk/news/world-us-canada-11230757, Guardian article - http://www.guardian.co.uk/environment/blog/2010/sep/08/bp-oil-spill-report-deepwater-horizon-blame-game). But how did the team investigate the cause within their TOR?

Appendix I of the report outlines the method used: fault tree analysis. Fault tree analysis (FTA) is a standard method of analysing technical failures of systems using Boolean logic to combine a series of lower-level or previous events. Originally developed in 1962 to analyse ICBM launch control systems (http://en.wikipedia.org/wiki/Fault_tree_analysis), the analysis starts with the undesired event at the top of the tree, then breaks the possible causes down into subsystems and assesses how these prior causes or initiators could arise. The analysis relies upon experts being able to identify how subsystems and their components fail and how these failures can build up to produce the top event, the undesired event (Figure 1). Once identified, each subsystem can be analysed to assess if it was likely to be the source of failure in the cascade that results in the top event. Failure of a lower subsystem can be prevented from producing a cascade of failures to the top event if some intervening subsystem does not fail. The number of possible ways failure can occur increases as the number of subsystems increases. As the number of subsystems reduces towards the top event, providing fail-safe systems becomes increasingly important, as there are fewer and fewer pathways to failure. The process of creating and analysing a fault tree is systematic and logical and, where information is available, probabilities can even be assigned to specific events within the branches of the tree, so that the likelihood of an incident can be calculated.


Figure 1 Illustration of Fault Tree Analysis
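The Boolean-gate logic itself is easy to sketch. The fragment below is generic FTA rather than the report's actual trees: the event names echo the causes listed above, but the probabilities are invented and the basic events are assumed independent, which a real analysis could not simply assume.

```python
def AND(*probs):
    """All inputs must fail for the gate output to fail."""
    out = 1.0
    for p in probs:
        out *= p
    return out

def OR(*probs):
    """Any single input failing is enough for the gate output to fail."""
    out = 1.0
    for p in probs:
        out *= (1 - p)
    return 1 - out

# Illustrative basic events with invented probabilities
cement_fails     = 0.01
shoe_track_fails = 0.01
test_misread     = 0.05
influx_missed    = 0.02

barriers_breached = AND(cement_fails, shoe_track_fails)  # both physical barriers fail
detection_fails   = OR(test_misread, influx_missed)      # either check fails
top_event         = AND(barriers_breached, detection_fails)
print(f"P(top event) = {top_event:.2e}")
```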

Within Appendix I there are four fault trees outlined, based on the four critical factors identified by the investigation team: well integrity; hydrocarbons entering the well undetected and loss of well control; hydrocarbons igniting on Deepwater Horizon; and the blowout preventer not sealing the well. These four fault trees are the focus of the investigation and are the context in which all evidence is collected and evaluated. The team assigned each box as a possible contributing factor to be investigated, designating, where possible, whether the box represented a ‘possible immediate cause’ or a ‘possible system cause’. Simplistically, ‘possible immediate cause’ can be equated with mechanical or technical failure, whilst ‘possible system cause’ can be equated with failure of communication, human mistakes of interpretation, and procedures. In addition, within each box there was either a reference to a specific section of the report for further discussion, a statement that evidence ruled out that cause, or a statement that the evidence was inconclusive for that cause.


Figure 2 Illustration of branches of fault tree associated with well integrity

Figure 2 illustrates a subsection of the fault tree for well integrity. This subsection of the FTA shows that more details are available in the appropriate section of the full report, but for this branch of the fault tree, all the possible causes can be ruled out based on the evidence collected. Figure 3 shows the end branches for a section of the fault tree and in this case the interaction between ‘immediate possible cause’ and ‘possible system causes’ illustrates that it is not a simple answer of either mechanical or system failure but more likely to be a complicated combination of both as you analyse the branches. Both these figures are not chosen to point to the most important cause but rather to illustrate the reasoning behind the conclusions and recommendations of the investigation team.



Figure 3 Illustration of end branches of fault tree showing possible immediate and possible system causes

The investigation team used the Swiss-cheese model to illustrate how the four critical factors and eight causes were related (Figure 4). The barriers are the defensive physical and operational barriers that were meant to prevent an incident. Although the figure makes the key relationships easier to understand, it does not show the intricate web of relations that tied all the actants, physical and human, together in the complex system that produced the event. The figure does not show the web behind the barriers, nor how the barriers are defined and set up in the first place. As I said in an earlier blog, experience tends to influence what is seen as important for operation and for prevention; a new incident can alter this perception and so alter what is regarded as important for different barriers, and may even identify new barriers to consider in new environments or contexts. Many of the recommendations made are aimed at improving the links and flow of information between the human actants in the system, to ensure that information derived about the physical actants, such as well pressure, is interpreted in a consistent and appropriate manner and that it is clear what actions should be taken and when. Likewise, the investigation highlighted the need to improve information flows about the state of these physical actants, such as the condition of critical components in the yellow and blue control pods for the BOP, so that they are maintained at the standard required for them to operate correctly.

Figure 4 Illustration of Swiss cheese model of hazards analysis based on Deepwater Horizon report


BP Oil Spill: Accident Investigation Report

BP released the report of its internal investigation team on the Deepwater Horizon accident on 8th September 2010 (http://www.bp.com/sectiongenericarticle.do?categoryId=9034902&contentId=7064891). Media coverage of the report has made much of the alleged attempts to divert blame for the accident onto other companies: the contractors involved in the operation and maintenance of the oil rig (e.g. Who’s blamed by BP for the Deepwater horizon oil spill - http://www.bbc.co.uk/news/world-us-canada-11230757, BP oil spill report: the Deepwater horizon blame game - http://www.guardian.co.uk/environment/blog/2010/sep/08/bp-oil-spill-report-deepwater-horizon-blame-game, BP oil spill: US reaction to the BP report - http://www.telegraph.co.uk/finance/newsbysector/energy/oilandgas/7990442/BP-oil-spill-US-reaction-to-the-BP-report.html). Robert Peston, the BBC’s business editor, even dubbed BP as standing for ‘Blame Placing’ (http://www.bbc.co.uk/blogs/thereporters/robertpeston/2010/09/bp_stands_for_blame_placing.html). This blog looks at the report in context; a second blog will deal with the findings themselves.

The report itself is at pains to point out its own limitations. The second paragraph of the executive summary, for example, states:
“In preparing this report, the investigation team did not evaluate evidence against legal standards, including but not limited to standards regarding causation, liability, intent and the admissibility of evidence in court or other proceedings.”
Deepwater Horizon Accident Investigation Report: Executive Summary, 2010, p.2.

The report notes it had to work with the information available to it and draw interpretations from sometimes contradictory, unclear and uncorroborated evidence using the ‘best judgement’ of the team, but from which others might draw different conclusions. The report even finishes with a section on what the team could not analyse. So is the report a PR exercise, an attempt to deflect blame, or a genuine attempt to provide some rapid, informative answers to the questions about what caused a major environmental disaster?

The report needs to be considered in the light of how industries operate in the contemporary economic environment. I am not concerned with legal definitions of responsibility, nor do I intend to discuss these or get into such a debate, as I am sure that such heated deliberations will ensue once money comes to the fore. Robert Peston’s blog is useful in illustrating the ‘hollowing out’ aspect of modern large companies such as BP. Companies no longer do everything; contracting out aspects of their industry that they are either not good at or that other companies can do better or more cheaply has become common practice. BP may be an oil company, but it does not undertake every aspect of the oil industry in house.

Robert Peston’s blog provides a good analogy of a dodgy chicken tikka masala bought from a supermarket. If you are ill after eating the meal, do you blame the supermarket or its contracted manufacturers? He states that most people would hold the supermarket accountable, although the contracted company may have had the sloppy hygiene standards that produced the dodgy meal. In his blog he does point out that BP were the named party on the relevant oil lease and so were assumed to exercise sufficient oversight.

My view is that the example is a little too simplistic to grasp the complexity of relationships that define a modern business enterprise. Imagine instead that you want to get to work every day to do what you are good at. You are not good at driving, nor do you want the expense of owning a car, so you contract out both, hiring a driver and leasing a car. You specify that you need a driver who can take orders and a car that is reasonable for your status. You tell the driver you leave at 08:10 and must be at work at 08:30. Everything seems to run smoothly: the driver is well turned out, the car is comfortable and you get to work on time. One day there is an accident as the car overturns taking a corner – who is to blame? It may seem simple: the driver is to blame, he was driving – he is the person immediately, obviously involved in the accident, its cause. BUT you specified the time; he has to drive to ensure you get there on time. Is it the pressure you put him under that caused the accident? Further investigation points to some mechanical problems with the brakes. Not enough on its own to cause the accident, but a possible contributing cause. The car is maintained by the leasing company, who are good at leasing but not at maintenance, so they contract that out. But you specified only a standard maintenance contract; you didn't anticipate undue wear on the brakes because you don't drive, so you don't know how different driving styles affect brake wear. The subcontractor states it is nothing to do with them, as they maintained the car to the standard specified. Where does the cause lie? With mechanical problems, with your communication with your contractors, with your ability to specify exactly what you require, or with your understanding of the context?

I hope you can start to see the problem. Such a complex web of relationships requires careful and thoughtful planning and overseeing. Relations and specifications need to be established carefully and maintained. Importantly, you may not realise there is a problem with the relations or specifications until there is a problem. The problem itself highlights the errors, by which time it is too late. This does not absolve you of blame; it just shows how difficult it is to pin down exactly who or what is the cause. Causation and blame may be different things entirely.

It is within this context of devolved tasks that the investigation team undertook the report. Central to this report, in fact any report, are the terms of reference, TOR, found in Appendix A of the report. The scope of the report is defined as finding the facts surrounding the uncontrolled release of hydrocarbons and efforts to contain that release aboard the Transocean drillship Deepwater Horizon. More specifically, the team were to determine the actual physical conditions, controls and operational regime related to the incident in order to understand a) the sequence of events, b) the reasons for the initial release, c) the reasons for the fire, and d) the efforts to control flow at the initial event. As well as a timeline for the event itself, the team were also tasked to describe the event and identify critical factors, both immediate causes and system causes. As with any TOR, the terms are narrower than you might want if you were trying to understand the event in its totality and, as is common with such an event, the key focus is on the technical and procedural. The team were not tasked to apportion blame within their TOR; they are merely seen as reporting ‘the facts’. Clearly, ‘the facts’, as in people’s actions and recollections, depend on what they are told and upon who tells them, and on what hidden agendas each person might have. Instruments and equipment, where available, tell another set of stories which may at first seem more objective, but once different experts begin to interpret the information they may become almost as ambiguous as the recollections of fallible humans.

The focus on the initial release and the events leading up to the explosion of necessity spotlights the actions of individuals in the decision-making at that time. Despite this, a number of issues concerning equipment, maintenance and instructions are highlighted as requiring improvement, suggesting that systemic factors may be more important. In other words, the communication and relations between companies are as much at the heart of the event as the faulty decisions made at the time.

Interestingly, the investigation team had five specific terms of reference associated with administration, including the sanctioning of all activities by a team leader, the requirement of a BP person at each interview, and the stipulation that no questions or tasks be put to BP contractors without BP approval. The impact of such administrative arrangements on the nature or scope of the questions asked is not discussed. How are these administrative requirements to be interpreted? As a standard implementation of policy in such investigations, as a check on the team adhering to the TOR, or as ensuring the TOR were clarified to the team when required? Your interpretation may depend on the degree of belief or trust you have in the internal report in the first place.

The complexity of the task of assigning causation and blame is highlighted by the team in the Executive Summary:
'The team did not identify any single action or inaction that caused this accident. Rather, a complex and interlinked series of mechanical failures, human judgments, engineering design, operational implementation and team interfaces came together to allow the initiation and escalation of the accident. Multiple companies, work teams and circumstances were involved over time.'
Deepwater Horizon Accident Investigation Report: Executive Summary, 2010, p.5.

But why produce and release the internal report to the public now? There are other reports in the pipeline, not least the official report into the incident that will presumably form the basis for blame, responsibility and, one would assume, compensation claims. BP may be trying to show themselves as a responsible company, but there is also the possibility that they are putting the report out there as a marker, an anchor for further reports. Whatever the status of the BP internal report, it is now known and available; it provides information and interpretations that any other report will be compared to. BP have provided an anchor or a starting point for expectations. Other reports will need to refer to it, to agree or disagree with it, to confirm or reject its findings and assertions. BP might not have defined the agenda for the debate over responsibility that will develop, but they have defined the starting points and details that all other reports will have to cover; so not a bad start to agenda setting.

Friday, September 3, 2010

Haddon Matrix and Hazardous Events

Looking at hazards in different ways, through different conceptual frameworks, is always useful as it tends to make you think about things, however slightly, in a different way. A framework often used in injury prevention, road accident research and public health is the Haddon Matrix. This was devised by William Haddon back in the 1970s for use in road traffic accidents. The basic matrix is divided into 12 cells. The rows are defined by the temporal aspect of the event – pre, during and post – whilst the columns are defined as ‘host’ (you could rethink this as ‘the individual’), ‘equipment’ and two for the environment: one ‘physical’, one ‘social’. The idea is to fill in each of the cells with key aspects that will, or did, influence the hazardous event. Effectively you are playing out different scenarios and filling in the cells depending on what factors you see as significant in each scenario. The framework forces you to deal systematically with the nature of the hazard and how it might play out in reality.


The example provided is for road traffic accidents, but the basis can be translated to other types of hazard. In the crash, the condition of the individual before the crash may be important for the reasons in the matrix. Each individual will have different characteristics that could be important, and each can be included as appropriate. Similarly, different aspects of the equipment will be important depending on the nature of the crash, and so these factors may not be clear until after the event. The environmental factors seem more diffuse, providing a context that, for certain types of individual behaviour and certain equipment failings, produces an environment conducive to a hazardous event. Importantly, despite the description and division of the event into these separate cells, the contents of each cell depend upon the relationships between the host, equipment and environment. For example, the social norms that permit DUI would not be important had the host not been drinking and without a seatbelt. The poorly designed fuel tank only becomes significant when the drunk driver crashes, and so on.
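For those who like to see the structure explicitly, here is a minimal sketch of the 12 cells as a simple data structure, filled with a few illustrative road-crash factors of the kind just described; the factors are examples, not a complete matrix.

```python
PHASES = ("pre-event", "event", "post-event")
COLUMNS = ("host", "equipment", "physical environment", "social environment")

# One list of candidate factors per cell
matrix = {(phase, col): [] for phase in PHASES for col in COLUMNS}
matrix[("pre-event", "host")].append("driver fatigue, alcohol")
matrix[("pre-event", "equipment")].append("worn brakes, bald tyres")
matrix[("pre-event", "social environment")].append("norms tolerating drink-driving")
matrix[("event", "equipment")].append("airbag fails to deploy")
matrix[("post-event", "physical environment")].append("distance to trauma care")

for (phase, col), factors in matrix.items():
    if factors:
        print(f"{phase:<10} | {col:<20} | {'; '.join(factors)}")
```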



This framework does have its limitations. The recognition of important factors can be so wide-ranging as to be useless in planning if extreme scenarios, with infinitesimal probabilities of occurring, are considered. On the other hand, it may not be until the event happens that it becomes clear what factors are important. The matrix will probably be of most use when similar hazardous events are being considered, as similar events would be expected to have roughly similar important factors. The matrix can also be used to identify where particular factors are not relevant. In a pile-up on a foggy motorway, for example, the detailed life history of the individual in the second car in the crash may not have any significance for their survival; it is the general physical conditions that are of over-riding significance. Equipment factors, such as airbag installation and age of car, may have an impact, however. In other words, the matrix might be useful to explore the topographies of different hazards or disasters; in exploring the nature or shape of the hazard, what factors dominate that landscape and which are incidental ‘bumps’ on the terrain (please excuse the landscape metaphor, but I am a physical geographer!)

Something useful might be gained by overlaying the matrix with Reason's Swiss cheese model outlined in an earlier blog. The matrix framework helps to identify the factors that might be important at each stage; the Swiss cheese identifies whether a particular trajectory of factors lines up to produce a disaster. The matrix helps identify the possibles; the Swiss cheese, whether these possibles are important in combination. In the case of the BP oil spill, for example, the Haddon Matrix could be used to identify key pre-, during- and post-disaster factors, such as the alleged failure in safety procedures and lack of disaster planning. The trajectory arrow of the Swiss cheese model can then be used to assess if one failure affects the next layer, if one failure or factor lines up with another to produce the cascade of errors that results in a disaster.
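As a crude sketch of the overlay, suppose the matrix has flagged one candidate failure per phase and each becomes a Swiss-cheese layer; the trajectory only completes if a 'hole' is open in every layer. The probabilities below are invented and the layers are assumed independent, which the preceding paragraphs suggest is exactly what cannot be taken for granted.

```python
layers = {
    "pre-event (planning)":  0.10,  # P(hole open): lack of disaster planning
    "event (procedures)":    0.05,  # safety procedure failure
    "post-event (response)": 0.20,  # slow containment response
}

p_trajectory = 1.0
for layer, p_hole in layers.items():
    p_trajectory *= p_hole

print(f"P(failures line up across all layers) = {p_trajectory:.4f}")  # 0.0010
```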

Some potentially useful books for assessment of hazards of injury are:

Injury Prevention in Children by David Stone (2011)



Injury Control: A Guide to Research and Program Evaluation by Rivara et al. (editors) (2009)




Injury Epidemiology: Research and Control Strategies by Leon Robertson (2007)
 



HAZARDS AND RISK

Risk is a tricky thing to pin down and, as with most things in hazards analysis, open to a wide range of interpretations. A useful website for discovering just how open to debate this term is can be found at John Adams's site (http://john-adams.co.uk/). We all encounter risk every day. The financial markets have just collapsed under the weight of risks. We drive along the motorway aware of the risk posed by other drivers (at least I hope everyone else does as I do!). We weigh up the risk to our health, the length of our life, of another drink, another cigarette, another burger or pie. Or do we?

Risk can seem such an easy thing to define. You can work out the probability of something occurring: the probability of dying from smoking a specific number of cigarettes per day, the probability of a specific amount of alcohol per day giving you cancer, the probability of contracting cancer given a specific level of exposure to radiation. The trouble is, people often act as if they don’t know these probabilities exist.


Risk can be defined accurately, mathematically and scientifically using statistical analysis. Risk can be defined as the chance of a particular defined hazard or event occurring. If you know the frequency of occurrence of a particular level of flow in a river, then you can work out the probability in any one year of a flow of a given magnitude. Leaving aside problems of how long the record of flows needs to be to be representative, how well extreme events are represented in that record and many other factors, the key point is that, in theory, risk can be calculated from such records. Risk can be given a number: a fixed value that informs people what they should do. But why should risk bother you? Risk only becomes important because you feel you might have something to lose. Risk can only be defined in relation to loss, so only within a context of fear or loss. It can be expressed as a simple equation:


RISK = [Hazard (probability) x Loss (expected)]/ Preparedness (loss mitigation)


You can see how each bit of the equation could be given a number. The hazard term comes from scientific analysis of the geophysical nature of the hazard, or rather the probability of the hazardous event. Loss can be calculated as the amount of money you would need to replace what you could lose if the event happened. Preparedness is more tricky, but could be how much you pay to insure against your loss from that hazardous event. But is this all risk really is?
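To see how the numbers might combine, here is a worked sketch with every figure invented: a '100-year' flood for the hazard, a notional replacement cost for the loss, and an insurance premium standing in as a crude proxy for preparedness.

```python
# Hazard: a '100-year' flood has an annual exceedance probability of 1/100
return_period_years = 100
hazard = 1 / return_period_years              # 0.01 per year

expected_loss = 250_000   # replacement cost of what could be lost
preparedness  = 500       # annual insurance premium as a mitigation proxy

risk = (hazard * expected_loss) / preparedness
print(f"risk score = {risk:.1f}")             # 5.0, a dimensionless index

# The same hazard viewed over a 30-year mortgage:
p_30yr = 1 - (1 - hazard) ** 30
print(f"P(at least one event in 30 years) = {p_30yr:.0%}")  # about 26%
```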

There are other ways of looking at risk.
  • Risk can be defined as being a real thing, out there and so subject to scientific and mathematical analysis and calculations that are common across experts.
  • Risk can be seen as a cultural and social phenomenon created by the society we live in and so subject to change as that society changes
  • Risk can be defined legally as a responsibility or a failure of expected conduct
  • Risk can be defined psychologically as a set of behaviours and understandings about the world
  • Risk can be defined within the humanities as an emotional phenomena and as a story or narrative

    Each of these different definitions illuminates different aspects of risk and may ring true with individuals in different circumstances. When watching news reports about floods in Pakistan, for example, I am seeing risk as a story or narrative dictated by the media and its beliefs about how I expect the disaster to unfold. Never underestimate the tight constraints of such storylines in affecting how we see things. Risk of injury on a building site could be viewed through the lens of legal definitions of risk and responsibility. The reactions of individuals to flooding and flood risk could be viewed through the lens of psychology. Some people believe in the risk and insure, others don’t and save their money – are the first group risk averse and the second risk takers, or is it more complicated than that?

    There is another way of defining risk, either singularly or in combination.

    • Real: the calculation approach as above plus objective below
    • Objective: the risk is real, a thing and it is out there for us to study and quantify
    • Observed: Risk we can measure given our particular view of the world (and given it is real and objective)
    • Subjective: Risk is about mental states of individuals who are only human and so plagued by fear, worry, uncertainty and doubt
    • Perceived: subjective estimate of risk by individual or group

    I would argue that all risk is perceived and that risk tends to be defined by the judgements of people, singularly and in groups, based on their application of some knowledge or information about the uncertainty involved, where this knowledge or information is objective, observed or subjective. When we believe or perceive the risk to be generated by some real, physical phenomenon, then we can measure it and calculate risk. This does not mean others will share our view of the world as objective, nor our view of risk as something objective.

    What this means is that the perception and belief of risk varies from individual to individual, from group to group, from place to place and even from event to event. Trying to model or generalize about the actions of individuals in the face of risk is difficult but in future blogs I hope to present some models and general ideas about how people have tackled this complicated problem of understanding how people perceive and react to risk.

Hazards: The Complexity Approach

I was at a conference on water and risk at the start of the year. A very distinguished professor had delivered an extremely interesting talk on a key concept he had taken decades to get across to policy makers involved in development. As the presentations went on he became increasingly concerned and agitated that researchers should realise that they had to get their ideas across to policy makers who were not well versed in either the details of academic debate or the intricate nature of conceptual frameworks. He questioned a final year postgraduate about what the major new concept in hazard and risk was. Without hesitation the postgraduate replied ‘complexity’. The distinguished professor paused, audibly drew in a long breath and said ‘God help us!’

Complexity is, as the name implies, neither the easiest concept to get across nor the easiest one to illustrate. Part of this fogginess is because the concept is still evolving within hazard analysis. Fixed and clear definitions of what it is and how to use it are still in their infancy and still subject to intense academic debate (although a useful discussion of the concept is given in Smith and Petley, Environmental Hazards, 2009, Routledge). Different researchers from different fields converge on a particular disaster and each applies their own view and meaning of complexity to the analysis of that disaster. So what follows is a partial interpretation of complexity thinking and hazards, but one that I hope will nonetheless provide a flavour of how a new concept is starting to mesh with and enhance hazards analysis.

Complexity theory is borrowed, as most geographical concepts are, this time from physics and mathematics, where it evolved from a detailed, equation-based theory into something that even geographers could begin to understand. The central idea is that a system of components operating together produces some output. This may not sound that dramatic, but it is the type of output that is a little unexpected. Traditionally, it has tended to be assumed that you can understand something better if you pull it apart and study each component one by one, individually. Once you have a detailed knowledge of the components then you have a detailed knowledge of the system. Simplify the system to understand it. This is a highly reductionist view of reality and of how you go about studying it. To understand a car you dismantle it and study each component in great detail, then put the parts back together, and you understand the car. Even with my limited mechanical knowledge, I can see this will not work! Complexity is a brake on this view of simplifying reality to study it.

Complexity recognises that real systems are complicated and intricate networks of components acting together in a variety of ways. Simply studying one component, or even a small group, does little to help us understand how the system really works. It is the interactions, the relations, driving the system that produce the emergent behaviour we observe and try to study. In complexity theory the bits of the system, the actual components, are still vital, as without them the system would not exist; but to understand the system, to grasp how it works, it is the interactions, the relations and their changes that must be understood. From these interactions there do tend to emerge some predictable forms of overall system behaviour. Sometimes, however, change the relations and the output can alter in unexpected and unpredictable ways. In this view, hazards and disasters occur not necessarily because of one factor but through the combination and complex interaction of a number of factors.
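
To make this concrete, here is a minimal sketch, assuming a toy failure-cascade rule, of how identical components wired differently produce different system behaviour. The component names and the wiring are entirely hypothetical.

```python
# Minimal sketch: the same four components, wired two different ways.
# A failure spreads along the connections; the outcome depends on the
# relations, not on the components themselves. All names are invented.

def cascade(edges, start):
    """Propagate a failure from 'start' along directed connections."""
    failed, frontier = {start}, [start]
    while frontier:
        node = frontier.pop()
        for a, b in edges:
            if a == node and b not in failed:
                failed.add(b)
                frontier.append(b)
    return failed

# Identical components, different relations:
tightly_coupled = [("pump", "valve"), ("valve", "sensor"), ("sensor", "alarm")]
loosely_coupled = [("pump", "valve"), ("sensor", "alarm")]

print(cascade(tightly_coupled, "pump"))  # all four components fail
print(cascade(loosely_coupled, "pump"))  # failure contained to two
```

The components never change; only the relations do, and that alone determines whether a single failure stays local or becomes a system-wide event.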

My earlier blog on the BP oil spill and the Swiss cheese model of hazards could be seen as an illustration of complexity in action. In this model, it is the interaction between specific ‘holes’ that results in the incident occurring; without this interaction there would be no incident, no explosion, no oil spill. This model is, however, only one means by which hazards can be understood. My earlier blog on the ash cloud again focuses on interactions, this time amongst a group of actants, to start to form an understanding of how the system evolved and how the hazard itself became defined.
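
A toy version of the Swiss cheese idea can be written in a few lines of Python. The named layers and hole positions below are invented; the point is only that an incident trajectory exists where, and only where, weaknesses align across every layer.

```python
# Toy Swiss cheese model: each defensive layer has 'holes' (weaknesses)
# at certain positions. An incident occurs only where a hazard trajectory
# finds a hole in every layer. Layers and hole positions are invented.

layers = {
    "design":     {2, 5},
    "procedures": {5, 7},
    "monitoring": {1, 5, 9},
}

def incident_trajectories(layers):
    """Positions at which the holes in every layer line up."""
    return set.intersection(*layers.values())

print(incident_trajectories(layers))  # {5}: the one path all defences miss
```

Shift any one layer’s holes and the intersection can vanish, which is exactly why the interaction between weaknesses, rather than any single weakness, defines the incident.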

The complexity approach is outlined, albeit briefly, in Smith and Petley’s Environmental Hazards (2009, Routledge), mentioned above.

Hazards and Vulnerability

Media reports and images are full of vulnerable people being struck by disasters. Film of families being rescued by inflatable boat in Pakistan has been a common staple of recent news reports. When Hurricane Katrina smashed into New Orleans, it appeared the most vulnerable people in a developed country were being targeted by the disaster. It seems so clear, but what do we actually mean by vulnerable? Leading on from this question is another important one: if we can define vulnerability, does this help us take steps to ensure these people are not affected by such hazards and disasters?
In a previous blog (Floods in Pakistan: Vulnerability) I began to discuss the complex nature of any definition of vulnerability and illustrated some of the issues using this ongoing disaster. If you want a simple definition then vulnerability can be defined as the potential for loss of life or property in the face of environmental hazards or environmental disasters (or indeed any hazard or disaster). Loss susceptibility is another term often used in relation to vulnerability. Other definitions include vulnerability as a threat to which people are exposed; vulnerability as the degree to which a system acts adversely to a hazard (whatever adverse might mean?!); differential risk for different social classes; the interaction between risk and preparedness; the inability to take effective measures; and the capacity of a group to anticipate, cope with, resist and recover from the impact of a natural disaster. There are others, and anyone interested in the range of definitions used should have a look at Susan Cutter’s book, Hazards, Vulnerability and Environmental Justice (2006, Earthscan). A key point to bear in mind is that both the physical and human environments can be vulnerable. Physical systems can be fragile and susceptible to impacts as much as human systems. Outlining how these can be studied together will be the subject of a future blog. This blog will focus on social vulnerability, the vulnerability of the human part of the equation, rather than physical vulnerability.
Some other terms borrowed from ecology also tend to be used when researching vulnerability. Adaptation refers to the ability of the actants in the socio-ecological system to find strategies to adapt to the hazard or disaster. Resistance is the ability of the actants to resist the impact of the hazard or disaster. Resilience is the ability of the system to absorb, self-organise, learn and adapt to the hazard or disaster. A useful resource for vulnerability can be found at the web pages of Neil Adger (http://www.uea.ac.uk/env/people/adgerwn/adger.htm) and at the Resilience Alliance website (http://www.resalliance.org/1.php), a site looking at research into the resilience of socio-ecological systems and sustainability. As with most things borrowed, once you change the context the meaning changes as well, so the application and use of these terms does not necessarily match their original, potentially more limited, definitions in ecological research.
An important aspect of vulnerability is that it evolves; it changes as the nature of the disaster or hazard unfolds and as the people who are vulnerable respond and react to their situations. This also highlights the importance of scale for defining vulnerability. What scale is appropriate? The individual can be viewed as an important unit, but the individual usually operates within the context of a family or household, so is this a more appropriate unit for analysing vulnerability and resilience? What about larger entities such as communities and governments? As you change the unit of analysis, would you expect the different units to have the same type of vulnerability, the same ability to resist or the same characteristics of resilience? Once these different spatial entities interact, such as in the provision of aid by the government to individuals, does this cross-scalar interaction affect vulnerability and resilience? In other words, what seems like a simple thing is very complex to unravel in detail.
At heart vulnerability is about the differential ability or power to access resources by individuals and groups in society. To escape a flood you need the power or ability to get out of the area. You need a car, you need early warning, you need a friendly policeman to wave you through and protect you from the other people trying to escape on foot. These material things require resources and access to them at the appropriate time. There are static and dynamic aspects to this access to resources. The static aspects of vulnerability might be capable of identification before a disaster strikes. At the simplest level, mapping socioeconomic groups gives an indication of the availability of funds to gain access to resources. Likewise, mapping similar census data such as lone-parent numbers or age (the elderly and the young are less able to escape floods, for example) could also indicate the vulnerability of a place. A useful site that discusses such mapping, and has developed a specific means of measuring it, the Social Vulnerability Index, can be found at the Hazards and Vulnerability Research Institute at the University of South Carolina (http://webra.cas.sc.edu/hvri/), of which Susan Cutter is the Director.
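
As a caricature of this static, mapping side, the sketch below standardises a handful of invented census-style indicators across three hypothetical wards and sums them into a crude area score. This is just the general z-score-and-sum idea, not Cutter’s actual SoVI methodology.

```python
from statistics import mean, stdev

# Crude area-level vulnerability scoring: standardise each census-style
# indicator across areas (z-scores) and sum per area. The wards,
# indicators and figures are all invented.

areas = {
    # ward: (% low income, % lone-parent households, % aged over 65)
    "ward_a": (35.0, 12.0, 22.0),
    "ward_b": (10.0,  4.0, 15.0),
    "ward_c": (25.0,  9.0, 30.0),
}

def z_scores(values):
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

indicators = list(zip(*areas.values()))        # one tuple per indicator
standardised = [z_scores(ind) for ind in indicators]
scores = {ward: sum(z[i] for z in standardised)
          for i, ward in enumerate(areas)}

for ward, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(ward, round(score, 2))  # higher score = more vulnerable on paper
```

Even this caricature shows the limits of the static view: the scores rank places, not people, and say nothing about how access to resources shifts once a disaster is actually under way.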
There is also a dynamic aspect to vulnerability: the manner in which relationships are organised and the manner in which they change through normal times and then during and after a disaster. Such flows could include the transport infrastructure, a key aspect that appears to have failed during this disaster and which has dramatically affected the ability of the institution of government to maintain an effective relationship with vulnerable groups. At a local level, however, is the transport infrastructure that remains intact sufficient for the local population to move to safety and then initiate the community-based activities that represent resilience at that level? Importantly, this dynamic aspect is concerned with pathways and relations, both physical, between locations and places, and social and emotional, between peoples and between individuals and organisations. From the above it is clear that trying to understand vulnerability also means trying to understand its geography: how it varies in space and time and how people succumb to, adapt to or try to overcome this geography.
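
The dynamic, relational side can be sketched in the same spirit: treat the transport network as a graph and ask whether a settlement can still reach safety once the flood closes particular links. The network and place names below are entirely hypothetical.

```python
# Hypothetical transport network as a simple directed graph: can the
# village still reach the town, or somewhere safe, once the flood
# closes particular links? All place names are invented.

roads = {
    "village": ["bridge", "track"],
    "bridge":  ["town"],
    "track":   ["hills"],
    "town":    [],
    "hills":   [],
}

def reachable(network, start, goal, closed=frozenset()):
    """Breadth-first search that ignores closed nodes."""
    seen, queue = {start}, [start]
    while queue:
        node = queue.pop(0)
        if node == goal:
            return True
        for nxt in network.get(node, []):
            if nxt not in seen and nxt not in closed:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(reachable(roads, "village", "town"))                     # True
print(reachable(roads, "village", "town", closed={"bridge"}))  # False
print(reachable(roads, "village", "hills", closed={"bridge"})) # True: local escape remains
```

Closing a single link can sever the relationship between government and the vulnerable while leaving local, community-scale pathways intact, which is exactly the cross-scalar point made above.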

Two useful books on vulnerability are:

Measuring Vulnerability to Natural Hazards: Towards Disaster Resilient Societies by the United Nations University (2007)

Hazards, Vulnerability and Environmental Justice by Susan Cutter (2006)