Friday, March 8, 2013
Haddon Matrix and ‘Black Swans’
Tuesday, February 5, 2013
Antifragility and actor networks
Thursday, November 1, 2012
Mapping Hurricane Sandy
Integrating data sources and providing maps of information that are of use to a range of users is another, more difficult task. One of the key players in online mapping has been Google Interactive Maps (http://google.org/crisismap/sandy-2012). The map for Hurricane Sandy is produced under the umbrella of Google Crisis Response (http://www.google.org/crisisresponse/), a Google project that collaborates with NGOs, government agencies and commercial organisations to provide information on things such as storm paths and emergency facilities. Integrating these data sources to provide spatially located information of use to responders, locals affected by the hazard, interested news readers and authorities involves a clear understanding of the requirements of each target audience and clear planning of how the information is presented to each of them. It is worth having a look at the interactive maps to assess for yourself whether this complex task has been achieved. In particular, look at the type of information provided about particular resources, the nature of the resources mapped, the scale or level at which this information is displayed, and which part of the audience you believe this information will be of use to (why you think so is the next question).
It is also useful to compare this structured information with the less structured flow of information from Twitter (http://www.guardian.co.uk/news/datablog/2012/oct/31/twitter-sandy-flooding?INTCMP=SRCH). This analysis by Mark Graham, Adham Tamer, Ning Wang and Scott Hale looked at the use of the terms 'flood' and 'flooding' in tweets relating to Hurricane Sandy (see also the blog at http://www.zerogeography.net/2012/10/data-shadows-of-hurricane.html). Although they were initially assessing whether there was a difference between English-language and Spanish-language tweets about the storm, their analysis pointed out that tweets were not that useful at providing information about the storm at a spatial resolution finer than a county (although it isn't clear to me, at least, whether they were mapping the location of the tweets or the contents of the tweets; in some cases the latter might provide more detailed spatial information on flooding and its impact, but would require extraction and interpretation from the tweet itself). This coarser spatial resolution and the unfiltered, personal, subjective and unco-ordinated nature of this information source mean that it is more difficult to quickly translate into information that is 'useable' by other audiences. Mark Graham is a researcher at the Oxford Internet Institute (http://www.oii.ox.ac.uk/) and his webpage (http://www.oii.ox.ac.uk/people/?id=165) and blog are definitely worth a look for anyone interested in mapping and in internet and mobile technologies.
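As a rough sketch of the kind of keyword filtering and county-level aggregation this sort of analysis implies, the snippet below counts flood-related tweets by county. It is only an illustration: the file sandy_tweets.csv and its 'text' and 'county' columns are hypothetical stand-ins for however the geocoded tweets are actually stored, not the dataset Graham and colleagues used.

```python
import csv
from collections import Counter

# Hypothetical input: a CSV of geocoded tweets with columns
# 'text' and 'county' (the analysis discussed above worked at county level).
KEYWORDS = ("flood", "flooding")

counts = Counter()
with open("sandy_tweets.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        text = row["text"].lower()
        # Count a tweet once if it mentions either keyword.
        if any(k in text for k in KEYWORDS):
            counts[row["county"]] += 1

# Counties mentioning flooding most often - the coarsest usable resolution
# according to the analysis discussed above.
for county, n in counts.most_common(10):
    print(f"{county}: {n} flood-related tweets")
```

Even this toy version shows why the resolution problem bites: the aggregation unit is whatever location field comes attached to the tweet, not the place described in its text.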
Monday, October 29, 2012
L’Aquila and Legal Protection for Scientists
Transferring this type of policy into the academic realm is not unusual; several of my colleagues have such policies when they undertake consultancy work, and a lot of universities' commercial branches offer such cover. A key question that should be asked is whether the nature of the information being provided is the same for all these professions, or whether scientific information is of a different type. Is it the information that is the same, or is it the intended use of that information that requires the provider to have legal protection? If I were an engineer advising on a building project, the audience – the investors, the builders, etc. – employ me to ensure that the result is satisfactory: the building stays up (simplistic, but you get the idea). There is a definite, time-limited and clearly defined outcome that my advice is meant to help achieve. Is this the case for scientific advice about the possibility of a hazardous event? Is the outcome clearly defined, or is there variability between the expectations of the audience and those of the information provider? Aren’t experts in both cases offering their best ‘guesses’ given their expertise and standards in specific areas?
The development and (by implication from Pritchard) the almost essential nature of legal protection for people giving advice tell us a lot about current attitudes and beliefs about science and prediction. Pritchard quotes David Spiegelhalter, Professor of Public Understanding of Risk at the University of Cambridge, as stating:
“At that point you start feeling exposed given the increasingly litigious society, and that's an awful shame…. It would be terrible if we started practising defensive science and the only statements we made were bland things that never actually drew one conclusion or another. But of course if scientists are worried, that's what will happen."
The belief that science can offer absolute statements concerning prediction underlies the issue at L’Aquila. Despite the careful nature of the scientific deliberations, the press conference communicated a level of certainty at odds with the understanding of seismic activity and with the understanding of the nature of risk in seismology. The belief that seismic events are predictable in terms of absolute time and location is at odds with what is achievable in seismology. By extension, this view assumes that scientists understand the event and understand how it is caused, and that understanding causation therefore leads to accurate prediction. This ignores the level and nature of understanding in science. Scientists build up a model, a simplification of reality, in order to understand the events and the data that they collect. This model is modified as more information is produced, but it is never perfect. The parts of the model are linked to one another by the causes and processes the scientists believe are important, but these can be modified or even totally discarded as more events, more information, are added. So it is feasible to understand, broadly, how seismic events occur without being able to translate this understanding into a fully functional and precise model of reality that can predict exactly when and where an earthquake will occur.
If scientists are held legally to account for the inexact nature of the scientific method, then there are major problems for any scientist wanting to provide information or advice to any organisation.
Communicating the inexact nature of our understanding of reality, however, is another issue. If the public and organisations want an accuracy in predictions that scientists know is impossible, then the ‘defensive science’ noted by Spiegelhalter will become the norm in any communication. Bland statements of the risk of an event will be provided and, to avoid blame, scientists and their associated civil servants will always err on the side of caution, i.e. state a risk level beyond the level they would state to colleagues. Even this type of risk communication carries its own risks – stating that it will rain in the south of England on a bank holiday could deter visitors to the seaside, and when that happens couldn’t businesses in coastal resorts sue, or provide their own information (Bournemouth launches own weather site – http://news.bbc.co.uk/1/hi/england/dorset/8695103.stm)? If the reports conflict, then who should the public believe?
Bland science implies communication that scientists perceive to be of least risk to them personally. This could vary from person to person and from institution to institution, so the level of ‘risk’ deemed acceptable to communicate as ‘real’ to the public will begin to vary.
There is no easy answer to this issue, and whilst there isn’t, legal protection sounds a reasonable way to go if you want to make your scientific knowledge socially relevant. It may, however, be worth thinking about the ideas scientists try to transmit as messages. Three simple questions then spring to mind: what is the transmitter, what is the message, and what is the audience? The scientist (transmitter) will have their own agenda, language and views on the nature of the message. The message itself will be communicated in a specific form along specific channels, all of which can alter its original meaning or even shape its meaning. Likewise, the audience is not a blank, passive set of receivers – they have their own views and agendas and will interpret the message accordingly. More time spent understanding how the scientific message is communicated may help to ensure that the message is interpreted by the audience(s) in the way the scientist intended.
Wednesday, August 22, 2012
Global Death Toll From Landslides
"Areas with a combination of high relief, intense rainfall, and a high population density are most likely to experience high numbers of fatal landslides"
For information on the paper go to the BBC report or to Dave's landslide blog.
Wednesday, July 25, 2012
Institute of Hazard, Risk and Resilience at the University of Durham
An important research project for the Institute is the Leverhulme-funded project on ‘Tipping Points’. Put simply, ‘tipping points’ refer to a critical point, usually in time, when everything changes at the same time. This idea has been used in describing and trying to explain things as diverse as the collapse of financial markets and switches in climate. The term ‘tipping point’ (actually ‘tip point’ in the study) was first used in sociology in 1957 by Morton Grodzins to describe the ‘white flight’ of white populations from neighbourhoods in Chicago after a threshold number of black people moved into the neighbourhood. Up to a certain number nothing happened; then suddenly it was as if a large portion of the white population had decided to act in unison, and they moved. That this was not the result of co-ordinated action on the part of the white population suggested that some interesting sociological processes were at work. (Interestingly, I don’t know if the reverse happens, or if research has been conducted into the behaviour of non-white populations and their response to changing neighbourhood dynamics.) Since about 2000 the use of the term tipping point has grown rapidly in the academic literature, a lot of the use being put down to the publication in 2000 of ‘The Tipping Point: How Little Things Can Make a Big Difference’ by the journalist Malcolm Gladwell (who says academics don’t read populist books!).
Tipping Point by Malcolm Gladwell
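As a toy illustration of the threshold behaviour Grodzins described, the sketch below gives each household its own 'tip point' and shows how individually modest thresholds can produce a sudden collective shift; all the numbers are invented for illustration.

```python
import random

# A toy threshold model: each household leaves once the share of
# 'incomers' exceeds its personal tolerance. Tolerances and the range of
# in-migration shares are entirely hypothetical illustrative values.
random.seed(1)
N = 100
tolerances = [random.uniform(0.2, 0.4) for _ in range(N)]  # individual tip points

for incomer_share in [i / 100 for i in range(0, 101, 5)]:
    movers = sum(1 for t in tolerances if incomer_share > t)
    print(f"incomer share {incomer_share:4.2f} -> {movers:3d}/{N} households move")

# The count stays at zero for a while, then jumps over a narrow band of
# shares: individually reasonable thresholds produce an apparently
# coordinated, sudden collective change - the tipping point.
```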
Saturday, July 21, 2012
Media and Hazards: Orphaned Disasters
Monday, July 9, 2012
UK Real-time Flood Alerts Online - Using Information in Novel Ways
The site is worth a visit, but it does raise the question, particularly as the unseasonable weather continues in Britain and elsewhere – what does this company add to the existing EA site that makes it more useful? The EA flood warning front page (http://www.environment-agency.gov.uk/homeandleisure/floods/31618.aspx) shows a map of Britain that you can click on by region; text information on flood warnings, including locations, is then provided. Clicking further through the individual warning locations provides more detailed information. The Shoothill site provides the same information if you click on the symbol on the map.
The answer seems to be that the Shoothill site provides the information visually linked to a map. Is this such an advance? It seems to be, and it indicates a key component of using the Web – the concept of mash-ups. Amazon and Google take a similar view of the flexibility of information in their Associates programmes – increasing revenues by allowing specialists to access databases and the facilities to purchase goods through links to Amazon and Google sites.
For Shoothill, the data is provided by the EA, but the use to which it is put, and the value added by that novel use, is provided by Shoothill. Locating the flood warnings on a map may seem obvious, but it takes specialist skills and time to do this, particularly in being able to update the information in real time. Shoothill uses the existing information in an innovative way, adding value to the data in terms of how people can use and interpret it. Such innovation would not be possible without access to that information. This may seem like an odd view of data and information, but within the Web environment the value of information does not necessarily lie in keeping it the private and exclusive property of one company or organisation. The value of information can be released or expanded by allowing others to access it and to use it in a manner that may not have been envisaged by the information generators. Both parties can gain, as Amazon and Google have already figured out!
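A minimal sketch of the mash-up idea follows: take someone else's warning feed and re-publish it in a map-ready form. The feed URL and field names below are placeholders, not the actual EA or Shoothill interfaces.

```python
import json
import urllib.request

# Placeholder URL and field names: substitute whatever machine-readable
# feed the EA actually exposes for flood warnings.
FEED_URL = "https://example.org/ea-flood-warnings.json"

with urllib.request.urlopen(FEED_URL) as resp:
    warnings = json.load(resp)["items"]

# Re-publish the warnings as GeoJSON - the 'mash-up' step: the data stays
# the EA's, the map-ready presentation is the added value.
features = [
    {
        "type": "Feature",
        "geometry": {"type": "Point",
                     "coordinates": [w["lon"], w["lat"]]},
        "properties": {"area": w["area"], "severity": w["severity"]},
    }
    for w in warnings
]

with open("flood_warnings.geojson", "w", encoding="utf-8") as f:
    json.dump({"type": "FeatureCollection", "features": features}, f)

# The resulting file can be dropped onto any web map without touching the
# underlying EA data.
```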
Tuesday, April 3, 2012
The Two-Tier Haddon Matrix
To help make sense of both these matrices, they also suggest that ‘fish-bone’ diagrams can be used to identify and put into context specific actions and behaviours, showing both how the event happens and how it might be prevented, or at least its impact minimised. In some ways this is similar to following a scenario through the Swiss cheese model outlined in an earlier blog. The higher up the main arrow an action sits, the earlier it occurs in the build-up to an event or in the event and post-event sequence of actions. Early prevention stops the sequence of events occurring in the first place.
Each of the points made in the fish-bone diagrams and in the matrices can be assigned a reference code that relates that point to a specific event or action. So A1, for example, could be the initial decision of a person not to follow a particular minor safety procedure, A2 the event that results from this, whilst C1 could be the supervisory environment that permits such lax practices. This breakdown of events and actions for the pre-, during and post-event phases can be carried out along with the associated preventative measures, in the second tier of the matrix, that would stop these events occurring.
Using this reference code they then build up a cybernetic analysis of the problem (see their paper for the worked example). Leaving aside the mathematical analysis of the relationships involved in linking the events and actions together, they do provide an alternative way to look at an accident or hazard. The important point is that they identify positive and negative feedback loops in the accident or hazard, the nodes, and are able to link these loops together to form the overall accident or hazard and its outcomes. Using this sort of diagram it is possible to identify how interconnected certain events or actions are, which events or actions provide bridges between feedback loops, and which nodes in the network it would be most effective to tackle in order to disrupt, or easiest to control to prevent, the occurrence of the event or hazard.
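A small sketch of this network view is given below, using the reference codes from the paragraph above (A1, A2, C1) plus a couple of invented codes and invented links; it uses the networkx library to pull out the feedback loops and the nodes whose removal would break the network apart.

```python
import networkx as nx

# Hypothetical event/action network using the reference-code idea above:
# A1 = ignored minor safety procedure, A2 = resulting event,
# C1 = supervisory environment permitting lax practice (B1, B2 invented).
G = nx.DiGraph()
G.add_edges_from([
    ("C1", "A1"),   # lax supervision permits the short-cut
    ("A1", "A2"),   # short-cut leads to the event
    ("A2", "C1"),   # event is normalised, reinforcing lax supervision (feedback loop)
    ("A2", "B1"),   # event escalates to a larger failure
    ("B1", "B2"),
])

# Feedback loops in the accident network.
print("loops:", list(nx.simple_cycles(G)))

# Nodes that bridge otherwise separate parts of the network: candidates
# for the most effective points at which to disrupt the sequence.
print("cut points:", list(nx.articulation_points(G.to_undirected())))
```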
Friday, September 10, 2010
BP Oil Spill: Content of the Accident Investigation Report
- The annulus cement barrier did not isolate the hydrocarbons
- The shoe track barriers did not isolate the hydrocarbons
- The negative-pressure test was accepted although well integrity had not been established
- Influx was not recognised until hydrocarbons were in the riser
- Well control response actions failed to regain control of the well
- Diversion to the mud gas separator resulted in gas venting onto the rig
- The fire and gas system did not prevent hydrocarbon ignition
- The blowout preventer (BOP) emergency mode did not seal the well




Figure 4 Illustration of Swiss cheese model of hazards analysis based on Deepwater Horizon report
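As a back-of-the-envelope illustration of the Swiss cheese idea applied to the eight findings listed above, the sketch below treats each finding as a barrier that must fail for the accident trajectory to pass; the failure probabilities are invented, and the independence assumption is a gross simplification.

```python
# The eight barriers listed above, treated as successive 'slices' in the
# Swiss cheese model: the accident trajectory only passes if every slice
# has a hole, i.e. every barrier fails.
barriers = [
    "annulus cement", "shoe track", "negative-pressure test",
    "influx recognition", "well control response",
    "mud gas separator diversion", "fire and gas system",
    "BOP emergency mode",
]

# Invented failure probabilities, purely to illustrate the arithmetic,
# and (unrealistically) assumed to be independent of one another.
p_fail = [0.05, 0.05, 0.1, 0.1, 0.2, 0.3, 0.2, 0.1]

p_accident = 1.0
for name, p in zip(barriers, p_fail):
    p_accident *= p
    print(f"{name:30s} fails with p={p:.2f}  cumulative p={p_accident:.2e}")

# Each layer on its own is leaky but tolerable; the disaster needs the
# holes to line up, which is why the combined probability is very small
# yet clearly not zero.
```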
Official publications associated with the disaster are:
The US Fish and Wildlife Service have produced this publication:
Whilst other books that explore the spill and its legacy and legal aspects include:
BP Oil Spill: Accident Investigation Report
The report itself is at pains to point out its own limitations. The second paragraph of the executive summary, for example, states:
“In preparing this report, the investigation team did not evaluate evidence against legal standards, including but not limited to standards regarding causation, liability, intent and the admissibility of evidence in court or other proceedings.”
Deepwater Horizon Accident Investigation Report: Executive Summary, 2010, p.2.
The report notes it had to work with the information available to it and draw interpretations from sometimes contradictory, unclear and uncorroborated evidence, using the ‘best judgement’ of the team, but from which others might draw different conclusions. The report even finishes with a section on what the team could not analyse. So is the report a PR exercise, an attempt to deflect blame, or a genuine attempt to provide some rapid, informative answers to the questions about what caused a major environmental disaster?
The report needs to be considered in the light of how industries operate in the contemporary economic environment. I am not concerned here with legal definitions of responsibility, nor do I intend to discuss these or get into such a debate, as I am sure that heated deliberations will ensue once money comes to the fore. Robert Peston’s blog is useful in illustrating the ‘hollowing out’ aspect of modern large companies such as BP. Companies no longer do everything; contracting out aspects of their industry that they are either not good at, or that other companies can do better or more cheaply, has become common practice. BP may be an oil company, but it does not undertake every aspect of the oil industry in house.
Robert Peston’s blog provides a good analogy of a dodgy chicken tikka masala bought from a supermarket. If you are ill after eating the meal, do you blame the supermarket or its contracted manufacturers? He states that most people would hold the supermarket accountable, although it may have been the contracted company whose sloppy hygiene standards produced the dodgy meal. In his blog he does point out that BP were the named party on the relevant oil lease and so were assumed to exercise sufficient oversight.
My view is that the example is a little too simplistic to grasp the complexity of relationships that define a modern business enterprise. Imagine instead that you want to get to work every day to do what you are good at. You are not good at driving, nor do you want the expense of owning a car, so you contract out both: hiring a driver and leasing a car. You specify that you need a driver who can take orders and a car that is reasonable for your status. You tell the driver you leave at 08:10 and must be at work at 08:30. Everything seems to run smoothly: the driver is well turned out, the car is comfortable and you get to work on time. One day there is an accident as the car overturns taking a corner – who is to blame? It may seem simple: the driver is to blame, he was driving – he is the person immediately and obviously involved in the accident, its cause. BUT you specified the time; he has to drive to ensure you get there on time. Is it the pressure you put him under that caused the accident? Further investigation points to some mechanical problems with the brakes. Not enough on its own to cause the accident, but a possible contributing cause. The car is maintained by the leasing company, who are good at leasing but not at maintenance, so they contract that out. But you specified only a standard maintenance contract; you didn't anticipate undue wear on the brakes, because you don't drive and so don't know how different driving styles affect brake wear. The subcontractor states it is nothing to do with them as they maintained the car to the standard specified. Where does the cause lie? With the mechanical problems, with your communication with your contractors, with your ability to specify exactly what you require, or with your understanding of the context?
I hope you can start to see the problem. Such a complex web of relationships requires careful and thoughtful planning and overseeing. Relations and specifications need to be established carefully and maintained. Importantly, you may not realise there is a problem with the relations or specifications until there is a problem. The problem itself highlights the errors, by which time it is too late. This does not absolve you of blame; it just shows how difficult it is to pin down exactly who or what is the cause. Causation and blame may be different things entirely.
It is within this context of devolved tasks that the investigation team undertook the report. Central to this report, in fact to any report, are the terms of reference (TOR), found in Appendix A of the report. The scope of the report is defined as finding facts surrounding the uncontrolled release of hydrocarbons and efforts to contain that release aboard the Transocean drillship Deepwater Horizon. More specifically, the team will determine the actual physical conditions, controls and operational regime related to the incident in order to understand a) the sequence of events, b) the reasons for the initial release, c) the reasons for the fire, and d) efforts to control flow at the initial event. As well as a timeline for the event itself, the team were also tasked to describe the event and identify critical factors, both immediate causes and system causes. As with any TOR, the terms are narrower than you might want if you were trying to understand the event in its totality and, as is common with such an event, the key focus is on the technical and procedural. The team are not tasked to apportion blame within their TOR; they are merely seen as reporting ‘the facts’. Clearly, ‘the facts’, as in people’s actions and recollections, depend on what the team are told and who tells them, and on what hidden agendas each person might have. Instruments and equipment, where available, tell another set of stories which may at first seem more objective, but once different experts begin to interpret the information they may become almost as ambiguous as the recollections of fallible humans.
The focus on the initial release and the events leading up to the explosion of necessity spotlights the actions of individuals in the decision-making at that time. Despite this, a number of issues concerning equipment, maintenance and instructions are highlighted as requiring improvement, suggesting that systemic factors may be more important. In other words, the communication and relations between companies are as much at the heart of the event as the faulty decisions made at the time.
Interestingly, the investigation team had five specific terms of reference associated with administration, including the sanctioning of all activities by a team leader, the requirement of a BP person at each interview, and restrictions on putting questions or tasks to BP contractors without BP approval. The impact of such administrative arrangements on the nature or scope of the questions asked is not discussed. How are these administrative requirements to be interpreted? As a standard implementation of policy in such investigations, as a check on the team adhering to the TOR, or as ensuring the TOR were clarified to the team when required? Your interpretation may depend on the degree of belief or trust you have in the internal report in the first place.
The complexity of the task of assigning causation and blame is highlighted by the team in the Executive Summary:
'The team did not identify any single action or inaction that caused this accident. Rather, a complex and interlinked series of mechanical failures, human judgments, engineering design, operational implementation and team interfaces came together to allow the initiation and escalation of the accident. Multiple companies, work teams and circumstances were involved over time.'
Deepwater Horizon Accident Investigation Report: Executive Summary, 2010, p.5.
But why produce and release the internal report to the public now? There are other reports in the pipeline, not least the official report into the incident that will presumably form the basis for blame, responsibility and, one would assume, compensation claims. BP may be trying to show themselves as a responsible company, but there is also the possibility that they are putting the report out there as a marker, an anchor for further reports. Whatever the status of the BP internal report, it is now known and available; it provides information and interpretations that any other report will be compared to. BP have provided an anchor, a starting point for expectations. Other reports will need to refer to it, to agree or disagree with it, to confirm or reject its findings and assertions. BP might not have defined the agenda for the debate over responsibility that will develop, but they have defined the starting points and details that all other reports will have to cover; so not a bad start to agenda setting.
Friday, September 3, 2010
Haddon Matrix and Hazardous Events

The example provided is for road traffic accidents, but the basis can be translated to other types of hazard. In a crash, the condition of the individual before the crash may be important for the reasons given in the matrix. Each individual will have different characteristics that could be important, and each can be included as appropriate. Similarly, different aspects of the equipment will be important depending on the nature of the crash, and so these factors may not be clear until after the event. The environmental factors seem to be more diffuse and provide a context that, for certain types of individual behaviour and certain equipment failings, produces an environment conducive to a hazardous event. Importantly, despite the description and division of the event into these separate cells, the contents of each cell depend upon the relationships between the host, the equipment and the environment. For example, the social norms that permit driving under the influence would not be important had the host not been drinking and had they been wearing a seatbelt. The poorly designed fuel tanks only become significant when the drunk driver crashes, and so on.
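One way to make the matrix concrete is to encode it as a small data structure and fill in cells as factors are identified; the sketch below does this for the road traffic example, with illustrative entries only (those drawn from the discussion above plus a few invented ones).

```python
# An illustrative encoding of the road-traffic Haddon matrix discussed
# above; cell entries are examples, not a complete matrix.
haddon = {
    "pre-event": {
        "host":        ["driver impaired by alcohol"],
        "equipment":   ["worn brakes, no pre-trip check"],          # invented example
        "environment": ["social norms that tolerate drink-driving"],
    },
    "event": {
        "host":        ["no seatbelt worn"],
        "equipment":   ["poorly designed fuel tank"],
        "environment": ["unforgiving roadside objects"],            # invented example
    },
    "post-event": {
        "host":        ["pre-existing health problems slow recovery"],   # invented example
        "equipment":   ["no automatic crash notification"],              # invented example
        "environment": ["slow emergency response in a rural area"],      # invented example
    },
}

for phase, factors in haddon.items():
    for factor, entries in factors.items():
        for entry in entries:
            print(f"{phase:10s} | {factor:11s} | {entry}")
```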

This framework does have its limitations. The recognition of important factors can be so wide-ranging as to be useless in planning if extreme scenarios, with infinitesimal probabilities of occurring, are considered. On the other hand, it may not be until the event happens that it becomes clear which factors are important. The matrix will probably be of most use when similar hazardous events are being considered, as similar events would be expected to have roughly similar important factors. The matrix can also be used to identify where particular factors are not relevant. In a pile-up on a foggy motorway, for example, the detailed life history of the individual in the second car in the crash may not have any significance for their survival; it is the general physical conditions that are of over-riding significance. Equipment factors, such as airbag installation or the age of the car, may have an impact, however. In other words, the matrix might be useful to explore the topographies of different hazards or disasters; in exploring the nature or shape of the hazard, what factors dominate that landscape, and which are incidental ‘bumps’ on the terrain (please excuse the landscape metaphor, but I am a physical geographer!).
Something useful might be gained by overlaying the matrix with the Swiss cheese model of Reason outlined in an earlier blog. The matrix framework helps to identify the factors that might be important at each stage; the Swiss cheese model identifies whether a particular trajectory of factors lines up to produce a disaster. The matrix helps identify the possibles; the Swiss cheese model, whether these possibles are important in combination. In the case of the BP oil spill, for example, the Haddon matrix could be used to identify key pre-, during and post-disaster factors, such as the alleged failure in safety procedures and lack of disaster planning. The trajectory arrow of the Swiss cheese model can then be used to assess whether this one failure affects the next layer, whether one failure or factor then lines up with another to produce the cascade of errors that results in a disaster.
Some potentially useful books for assessment of hazards of injury are:
Injury Prevention in Children by David Stone (2011)
Injury Control: A Guide to Research and Program Evaluation by Rivara et al. (editors) (2009)
Injury Epidemiology: Research and control strategies by Leon Robertson (2007)
HAZARDS AND RISK
Risk can be defined accurately, mathematically and scientifically using statistical analysis. Risk can be defined as the chance of a particular defined hazard or event occurring. If you know the frequency of occurrence of a particular level of flow in a river, then you can work out the probability in any one year that there will be a flow of a given magnitude. Leaving aside problems of how long the record of flows needs to be to be representative, how well extreme events are represented in that record, and many other factors, the key point is that, in theory, risk can be calculated from such records. Risk can be given a number: a fixed value that informs people what they should do. But why should risk bother you? Risk only becomes important because you feel you might have something to lose. Risk can only be defined in relation to loss, so only within a context of fear or loss. It can be expressed as a simple equation:
RISK = [Hazard (probability) x Loss (expected)]/ Preparedness (loss mitigation)
You can see how each part of the equation could be given a number. The hazard comes from scientific analysis of the geophysical nature of the hazard, or rather the probability of the hazardous event. Loss can be calculated as the amount of money you would need to replace what you could lose if the event happened. Preparedness is trickier, but it might be how much you can pay to insure against your loss from that hazardous event. But is this all risk really is?
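A tiny worked example of the equation, with invented numbers, might look like this (the preparedness index in particular is an assumption, since the equation leaves its units open):

```python
# Invented numbers, purely to show the arithmetic of the equation above.
hazard_probability = 0.01        # e.g. a flow with a 1-in-100 chance in any year
expected_loss = 200_000          # replacement cost in pounds if the event happens
preparedness = 2.0               # dimensionless index: higher = better mitigated

risk = (hazard_probability * expected_loss) / preparedness
print(f"risk score: {risk:.0f}")  # 1000 with these numbers
```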
There are other ways of looking at risk.
- Risk can be defined as being a real thing, out there and so subject to scientific and mathematical analysis and calculations that are common across experts.
- Risk can be seen as a cultural and social phenomenon created by the society we live in and so subject to change as that society changes
- Risk can be defined legally as a responsibility or a failure of expected conduct
- Risk can be defined psychologically as a set of behaviours and understandings about the world
- Risk can be defined within the humanities as an emotional phenomenon and as a story or narrative
Each of these different definitions illuminates different aspects of risk and may ring true with individuals in different circumstances. When watching news reports about floods in Pakistan, for example, I am seeing risk as a story or narrative dictated by the media and its beliefs about how I expect the disaster to unfold. Never underestimate the tight constraints of such storylines in affecting how we see things. Risk of injury on a building site could be viewed through the lens of legal definitions of risk and responsibility. The reactions of individuals to flooding and flood risk could be viewed through the lens of psychology. Some people believe in the risk and insure; others don’t and save their money – are the first group risk-averse and the second risk-takers, or is it more complicated than that?
There is another way of defining risk, either singularly or in combination.
- Real: the calculation approach as above plus objective below
- Objective: the risk is real, a thing and it is out there for us to study and quantify
- Observed: Risk we can measure given our particular view of the world (and given it is real and objective)
- Subjective: Risk is about mental states of individuals who are only human and so plagued by fear, worry, uncertainty and doubt
- Perceived: subjective estimate of risk by individual or group
I would argue that all risk is perceived, and that risk tends to be defined by the judgements of people, singly and in groups, based on their application of some knowledge or information about the uncertainty involved, whether that knowledge or information is objective, observed or subjective. When we believe or perceive the risk to be generated by some real, physical phenomenon, then we can measure it and calculate risk. This does not mean others will share our view of the world as objective, nor our view of risk as something objective.
What this means is that the perception and belief of risk varies from individual to individual, from group to group, from place to place and even from event to event. Trying to model or generalize about the actions of individuals in the face of risk is difficult but in future blogs I hope to present some models and general ideas about how people have tackled this complicated problem of understanding how people perceive and react to risk.
Hazards: The Complexity Approach
Complexity is, as the name implies, neither the easiest concept to get across nor the easiest one to illustrate. Part of this fogginess is because the concept is still evolving within hazard analysis. Fixed and clear definitions of what it is and how to use it are still in their infancy and still subject to intense academic debate (although a useful discussion of the concept is given in Smith and Petley, Environmental Hazards, 2009, Routledge). Different researchers from different fields converge on a particular disaster, and each applies their own view and meaning of complexity to the analysis of that disaster. So what follows is a partial interpretation of complexity thinking and hazards, but one that I hope will nonetheless provide a flavour of how a new concept is starting to mesh with and enhance hazards analysis.
Complexity theory is borrowed, as most geographical concepts are, this time from physics and mathematics, where it evolved from a detailed, equation-based theory into something that even geographers could begin to understand. The central idea is that a system of components operating together produces some output. This may not sound that dramatic, but it is the type of output that is a little unexpected. Traditionally, it has tended to be assumed that you can understand something better if you start to pull it apart and study each component one by one, individually. Once you have a detailed knowledge of the components then you have a detailed knowledge of the system. Simplify the system to understand it. This is a highly reductionist view of reality and of how you go about studying it. To understand a car you dismantle it and study each component in great detail, then put the parts back together and you understand the car. Even with my limited mechanical knowledge, I can see this will not work! Complexity is a brake on this view of simplifying reality in order to study it.
Complexity recognises that real systems are complicated and intricate networks of components acting together in a variety of ways. Simply studying one component, or even a small group, does nothing to help us understand how the system really works. It is the interactions, the relations, which drive the system and produce the emergent behaviour that we observe and try to study. In complexity theory the bits of the system, the actual components, are still vital, as without them the system would not exist; but to understand the system, to grasp how it works, it is the interactions, the relations and their changes that are vital to understand. From these interactions there does tend to emerge some predictable overall system behaviour. Sometimes, however, changing the relations can alter the output in unexpected and unpredictable ways. In this view of hazards, hazards and disasters occur not necessarily because of one factor but through the combination and complex interaction of a number of factors.
My earlier blog on the BP oil spill and the Swiss cheese model of hazards could be seen as an illustration of complexity in action. In this model, it is the interaction between specific ‘holes’ that results in the incident occurring; without this interaction there would be no incident, no explosion, no oil spill. This model is only one way in which hazards can be understood, however. My earlier blog on the ash cloud again focuses on interactions, this time amongst a group of actants, to start to form an understanding of how the system evolved and how the hazard itself became defined.
The complexity approach is outlined, albeit briefly, in Smith and Petley's book below.
Hazards and Vulnerability
Two useful books on vulnerability are:
Measuring Vulnerability to Natural Hazards: Towards Disaster Resilient Societies by the United Nations University (2007)