Sunday, February 24, 2013

Electric Cars: A Matter of Managing Spatial Scales


The UK government has recently announced that it will fund up to 75% of the costs of installing charging points for electric vehicles in garages and driveways (http://www.bbc.co.uk/news/uk-politics-21503532). The estimated cost of installing a power point capable of charging two cars is about £10,000, with local authorities expected to contribute £2,500 towards this cost. The report says that the government estimates it will cost between £1,000 and £1,500 to install power points in the garages and driveways of drivers with off-street parking. Rapid chargers will cost about £45,000 each. The government believes that covering 75% of costs is an appropriate level of incentive for individual drivers and local authorities to invest in such technology.

This may seem like a great idea for improving everyone’s environmental quality but, as the Commons Transport Select Committee has already asked, is this the best way to use government funds? The incentive only works if people buy electric cars, can afford the additional costs of installing such power points, and if the electricity generating system can cope with the recharging load (a big surge in domestic power demand overnight might not be a good idea). Installing power points may be an answer, but it is not convincing that the right question is being asked.

The question that should be asked is: why aren’t people buying electric cars? Ron Adner used the EV (electric vehicle) in his recent book ‘The Wide Lens’ as an example of having to undertake ecosystem-style thinking about innovations and their economic development or acceptance. Reducing the issues of EVs to one key component, Adner argues that the need to buy an expensive, cumbersome, slow-to-recharge and soon obsolete battery is an important impediment to purchase. Adner uses the example of Better Place to illustrate how rethinking the ecosystem can result in a novel solution to the battery issue. Better Place envisages a system where the battery is replaced when it runs low on power through a network of battery replacement stations. The operation is a quick swap of a discharged battery for a charged one, with the car driver having as much ownership over the battery as a driver does over the petrol in a petrol station (http://www.betterplace.com/). This system transfers ownership of the battery from the driver to the battery exchange company. Better Place can then deal with problems of obsolete batteries and charging requirements in bulk, with all the benefits that brings.

The scheme was launched in Israel and Denmark, which the firm believed would be ideal test sites as their relatively small size meant that a network of battery replacement stations could be established at relatively low initial investment cost. Unfortunately, extension of this novel way of thinking about EVs has not been a success in the US or Australia, and the company has had to pull out of these countries (http://wheels.blogs.nytimes.com/2013/02/06/better-place-proponent-of-e-v-battery-swapping-pulls-out-of-u-s-and-australia/). This does not necessarily mean that the idea is wrong, just that all parts of the ecosystem need to be in place before successful acceptance can be achieved. The business model relies upon all potential actors in the network or ecosystem agreeing to run the battery replacement system, as each actor benefits from participation in the network. Central to this set of relationships is the involvement of major car manufacturers who sign up to making electric cars compatible with the robotic battery replacement stations. Only Renault had agreed to this. Without this key set of actors in place, the network or ecosystem had no chance of success.

The relative success of the battery-changing strategy in Israel and Denmark and its failure in the US and Australia highlight the need to think about the ecosystem approach advocated by Adner and the importance of scale issues within it. Establishing a network of battery-changing stations requires investment, but without this network the concept and practice of battery-changing would not catch on. The practice is only advantageous if there is a demand for and supply of electric cars, which in turn depends upon the ‘solution’ of the battery issue. Use of an electric car is a very personal issue, with the decision to buy or not located in the individual and their specific context. Just this simplified description crosses and defines a range of scales, all of which need to be aligned to enable the network or ecosystem to work. If none of these actors across the scales in this simple network can see an advantage to themselves in taking the plunge into electric cars then there is no way the system will even develop.

The micro-scale of the individual needs to be explored, and barriers to adoption of the electric car clearly stated and translated into ecosystem or network terms. Likewise, the scale of the individual firm operating a battery-changing centre needs to be understood and linked to the other actors so that it is to their advantage to adopt the new technology. Car manufacturers operate at a global, macro scale, but with supposed sensitivity to local contexts, and are driven by economic needs at these scales. Add the complexity provided by the competitive, established network of petrol stations, cars and owners, all forming an aggressive ecosystem at all the same scales into which the electric car network is trying to insert itself, and you have an idea of the complex cross-scale issues that need to be addressed. Maybe the government should spend funds on trying to resolve how to manage these multiple spatial scales of actors and networks to produce an economically viable, self-sustaining ecosystem for the electric car, rather than putting the responsibility on individual car owners to respond in the way the government wishes to a few incentives at a single scale.

 

 

Beijing Pollution: Continuing Highs


In January and February of this year the air quality in Beijing and north-eastern China was reported to have deteriorated to the extent that the smog was so thick it was visible from space (http://www.nydailynews.com/news/world/china-pollution-bad-visible-space-article-1.1253838). The rise in pollution has reportedly caused increases in hospital admissions for respiratory problems and, according to the report above, even resulted in an official recognition of the problem, with the Chinese Ministry of Environmental Protection stating that the haze covering Chinese cities extended over 500,000 square miles. The problem has been officially identified as being caused by unregulated industries, vehicle emissions and cheap gasoline.

The identification of the ‘causes’ is interesting. The focus is upon the actions of individuals with respect to vehicle emissions and the use of cheap gasoline. This implies that the cause is a matter of individual responsibility. Placing causation at the feet of individuals means that there is justification for taking action against individuals for not taking the steps the authorities deem vital to reduce pollution. The focus on unregulated industries implies that regulated industries are not contributing to the pollution level. Again, responsibility and fault are placed onto the individuals who run firms that do not conform to state regulations. Politicians, according to the report, even closed these firms for 48 hours as well as urging individuals to stay off the road. The implication is that the pollution is an inevitable outcome of ‘development’ or ‘progress’ with the individualistic bent of capitalism. The pollution is as predictable a result of economic progress as the dark satanic mills of nineteenth-century Manchester were of progress in Britain. By implication, the more measured and responsible activities of the state have no role in producing this smog. Despite the state setting the economic regulatory climate as well as enforcing regulations relevant to pollution production, the role of the state is relegated to a backseat in the internal pollution narrative that is emerging from these reports.

The pollution does, however, have a flip-side in the new China – the hazard is an economic opportunity for the few. Anti-pollution domes, with pollutant-free interior atmospheres, have been jointly developed by a Shenzhen-based manufacturer of outdoor enclosures and a California-based company (UVDI) that specialises in air filtration and disinfection systems (http://wallstnews.blogspot.co.uk/p/asia-edge.html#!/p/asia-edge.html). Combining these existing technologies makes it possible to create a pollutant-free environment within which outdoor activities can continue. Additionally, face masks, ranging from high-tech neoprene masks to strips of cloth, are increasingly being sold to try to prevent inhalation of pollutants. Within homes, air filtration units are being employed to ensure a clean supply of domestic air. This, however, also means that there is increasingly a social, or even a class, aspect to this hazard. The emerging middle class in China can afford to buy these new ‘must-have’ accessories to sustain urban life. The poor are left to cope using tatters of cloth across their mouths and noses to filter their domestic air. How long before your income determines your ability to survive extreme pollution episodes?

One enterprising individual has even taken to selling cans of fresh air to hassled urbanites for 80 cents a can. To be fair, Chen Guangbiao, although promoting himself, says that the sale of the fresh air cans is a tactic to push ‘mayors, county chiefs and heads of big companies’ not just to pursue economic goals but also to consider the impact of their actions on the future.

 

Tuesday, February 5, 2013

Lisbon trams, local self-organization and national planning framework


A recent holiday in Portugal started out with a few days in Lisbon aboard one of the city’s key tourist attractions, the Lisbon trams. With a 24-hour ticket in hand we hopped on the Number 28, a tram that did a circuit of central Lisbon taking in the bohemian Alfama district and the city waterfront before heading out to Belem and its famous monastery (Belem is also the alleged birthplace of the Portuguese breakfast staple, the pastel de nata – a flaky pastry custard tart, delicious with a milky coffee). We hopped onto the tram just after the morning rush hour and it was soon clear that these ageing, wooden trams were still a major means of transport for the local population through the narrow and winding streets of the city. Clearances of less than two feet either side of the tram in some streets made for an interesting journey, particularly when a white van blocked the tramlines for 15 minutes, snarling up three trams and half a dozen cars and producing a cacophony of tram bells and car horns.
 
 
White van blocking tram

The narrow trams are small by any standards of modern public transport: capacity is 20 seated and 38 standing, so 58 people in total. What was fascinating was how this crush of people dealt with the key problem of getting off the tram. I may be wrong, and would be happy to be corrected by any Portuguese out there, but as an outsider it seemed to me that the locals had developed an effective way to resolve this problem. Anyone who has been on the London tube will recognise the problem of having to fight your way to your exit in rush hour past a press of immobile bodies. The Portuguese, young and old, all progressively moved from one end of the tram to the other as their stop was approached. An old man took his seat at the start of his journey. A couple of stops later, he moved from his comfortable leather seat to a seat further up the tram, or even happily stood in the middle of the tram, as his stop neared. As he moved, others whose stop was further down the route took his place in the vacated seats. A few stops later he had moved along the tram to the wooden well at the back, ready to disembark. Everyone followed this pattern, seats permitting, without anyone asking for or co-ordinating the action. In other words, a locally derived system of self-organized behaviour had evolved to resolve a simple problem.
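For the curious, this kind of local rule can be played with in a few lines of code. The toy sketch below is my own illustration in Python, not a study of Lisbon passengers: each rider follows a single instruction (once your stop is near, drift one place toward the exit end whenever the place ahead is free), and an ordered flow emerges without any co-ordination. The destinations and thresholds are made up.

```python
# A toy sketch (my own illustration, not a study of Lisbon passengers) of the
# local rule described above. Position 0 is the exit end of the tram; each
# passenger is represented by their destination stop.

def step(tram, current_stop):
    tram = [p for p in tram if p != current_stop]        # riders disembark
    for i in range(1, len(tram)):
        # If your stop is within two stops and the rider ahead leaves later,
        # swap one place toward the exit. No global co-ordination needed.
        if tram[i] - current_stop <= 2 and tram[i] < tram[i - 1]:
            tram[i - 1], tram[i] = tram[i], tram[i - 1]
    return tram

tram = [5, 2, 9, 3, 7]   # hypothetical destination stops, unordered at boarding
for stop in range(1, 10):
    tram = step(tram, stop)
    print(f"after stop {stop}: {tram}")
```

Running it shows each rider reaching the exit end before their stop, purely through the local drift rule.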

 

The Lisbon transport system may have provided the context, but the method of ensuring that all passengers boarded and left the tram with a minimum of fuss was locally developed (unless someone knows otherwise!). Trial-and-error solutions rapidly converged onto a context-specific but context-useful solution. Would this work on the tube? Unlikely: more people, different context, different culture, wider vehicles and so on. The key point is that within a specific context the local actors had developed a solution to a problem imposed by that context. Taking a modern ‘bendy’ tram to Belem a day later, it was clear that the system used on the small trams was still in effect, but in a dissipated form. The multiple exits and the wider tram meant it was not as important for such a system to operate to ensure that people made it off the tram at their stop.

 

So what has this observation got to do with environmental geography? The recent national planning framework will become a key instrument for trying to conserve and alter the environment. Imposition of a context limits actions and, planners often assume, limits them in a way that will produce a particular result. Current UK planning law has been developed to enable and enhance development. As noted in previous blogs, the policy document highlights and, potentially, enhances the ability of local authorities and developers to encourage housing development. This context and focus could have a dramatic impact upon the environment and its use within the UK, as it appears to provide planners and developers with a key instrument to alter the environment to fit their vision. The planning document, however, also has provision for the environment and, although it appears to be of lesser importance, does provide for ‘local’ or ‘neighbourhood’ actions, albeit limited by the context of local authorities’ plans.

 

This provision suggests that local actors can still affect policy, but these actions may not be through the expected behaviours designed for in planning policy. In other words, local self-organization may produce locally or context-specific actions and responses to the new planning context that had not been planned for by the authors of the planning policy. Local actors, left to their own devices, will operate within this context and can produce a self-organized set of behaviours unpredictable by those imposing the context. These local actors may use the new context to develop alliances and behaviours that prevent the vision of the planners and developers being fulfilled. The problem is that these behaviours cannot be predicted in advance – they emerge from the constraints of the context. This means that the range of local solutions to environmental issues cannot necessarily be planned for. As long as local trial and error in responses occurs, novel local solutions to environmental issues will emerge. This will always be a problem with trying to shoehorn planning regulations to cover every context. The flexibility needed to ensure that developers and planners have scope to try to achieve their vision also potentially permits local resistance in a diverse and unpredictable myriad of forms.

 

A word of caution is needed, though, for anyone trying to plan away this local flexibility to enable the restricted view of the planners to prevail. If local flexibility is not permitted then it may be that a more radical and context-changing solution will force itself to the fore. If these old Lisbon streets were bulldozed away, for example, then that would massively alter the transport context. Small, locally derived organizational rules or actions can be a solution to a problem, but then again so can massive disruptions and removal of the context within which these solutions were developed. But then again you cannot predict the outcome of these massive changes either!

Antifragility and actor networks


Nassim Taleb’s recent book on antifragility may have some implications for understanding actor networks. Taleb suggests a triad of system types: fragile, robust or resilient, and antifragile. Fragile systems collapse under stressful events, robust or resilient systems remain relatively unaffected by stressful events, whilst antifragile systems respond positively, strengthening under stressful events. Within hazards analysis such a distinction might be very helpful in separating out communities that are vulnerable to hazards, those that hold their own and those that thrive in adversity.

 

Actor network theory, as outlined in an earlier blog, is a very useful method for mapping actants (human and non-human), their relationships and how these relationships operate in changing contexts. Leaving aside the complicated and sometimes competing definitions and deep conceptual issues of this approach, there is much in the simple drawing of nodes and relations that could help in identifying the basis of antifragile behaviour, as illustrated in the simple network below.

 

Actants in the network may try to align and co-ordinate the network of relations to produce the outcome that they desire. Supermarkets put pressure on farmers to produce vegetables for them, controlling the prices asked for vegetables, the transport available for vegetables and even finances by tying farmers into specific contracts. In other words, the supermarkets are key actants who have extended their co-ordination and alignment of the network in such a manner as to virtually control how it operates. But is this network fragile or not?
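A network like this can be roughed out in a few lines of code for anyone who wants to experiment. The sketch below is my own illustration in Python using the networkx library; the actants and relations are hypothetical stand-ins for the supermarket example above, and degree centrality is only a crude first proxy for which actant aligns the network.

```python
# A minimal sketch (hypothetical actants, not the blog's original figure) of
# mapping an actant network: nodes are actants (human and non-human), edges
# carry the relation that aligns them.
import networkx as nx

G = nx.DiGraph()
G.add_edge("supermarket", "farmer", relation="price-setting contract")
G.add_edge("supermarket", "haulier", relation="transport contract")
G.add_edge("supermarket", "bank", relation="tied finance")
G.add_edge("farmer", "vegetables", relation="production")
G.add_edge("haulier", "vegetables", relation="distribution")

# Degree centrality as a crude proxy for which actant co-ordinates the network
for actant, score in sorted(nx.degree_centrality(G).items(),
                            key=lambda kv: -kv[1]):
    print(f"{actant}: {score:.2f}")
```

Here the supermarket comes out as the most connected actant, echoing its role in aligning the network; whether that alignment makes the network fragile or antifragile is, as the next paragraph argues, only revealed under stress.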

 

Questions of fragility and antifragility can be answered only when the network is stressed, only when an event causes disruption. The nature of such events will vary with the nature of the network; event characteristics that cause network disruption will always be context dependent. This means that it may not be possible beforehand to predict the fragility or otherwise of a network. It is only under stress that parts of that network may buckle or may develop novel means of relieving, or even using, the stress to strengthen the network. Likewise, events can propagate through the network in a variety of ways, so although one event may point to a stress point or relationship in the network, once that stressed node is 'fixed', the next, seemingly similar event may pick out and illuminate another, different stress point.

 

Antifragile behaviour can result if an actant can exploit the stress within the network to ensure that their vision or goals for the network are more likely to be achieved after the disruptive event. This realignment or co-ordination of the network could result from taking over the function of other nodes, or from exploiting a relationship that enables an actant to embed more deeply the relationships it needs to achieve its ends, as in the figure. A particularly bad year for crops due to drought, for example, could provide an opportunity for farmers who invested in irrigation methods to dictate prices to major suppliers or to cheaply buy up the land of farmers who did not invest in irrigation. The relationships and nodes existed before the disruptive event, but the multiple impacts (or the multiple manners in which the event plays out in the network) open up a range of opportunities for antifragile actants. It is important to note that antifragility is only definable in relation to the disruptive event (or the multiple manifestations of that event). Similarly, antifragility is only noticed if actants exploit the disruption to improve or enhance their own position and power (expressed through alignment and co-ordination of the network).

 

It is through actions within the network of relationships that antifragility and fragility are expressed. It may be possible to begin to identify some general properties of actants and relationships that may enable antifragility, but it is only through the expression of these properties in actants’ actions during and after a disruptive event that such properties will be identified as important. Future blogs will begin to characterise such network-based properties by exploring the response of actants to specific disruptive events.

 

Tuesday, January 15, 2013

Fragility and Urban Planning

The recent book Antifragile by Nassim Nicholas Taleb (2012) provides an interesting analysis of the concepts of fragility, robustness and antifragility. Antifragile entities tend to improve, even thrive, under the stress of extreme events. Of interest in this blog entry, however, are some of the comments made concerning extreme events and the transfer of fragility, and the implications of these for the ‘relaxation’ or ‘simplification’ of planning (depending on your viewpoint) outlined in the ASC Report published last year and increasingly being acted upon.


Taleb’s comment about ‘the worst case scenario’ being based on what we already know is an important one. The tables on pages 28 and 29 of the ASC report, as noted in a previous blog, provide estimates of the stock at risk based on modelling of flood risk that extrapolates existing data. This provides information about risk based on current information about extreme events. These events are known, but as Taleb points out, it is the events beyond these that will become the ‘new’ extreme events to be incorporated into the next set of predictive models. Planning for the worst-case scenario that we know of or can model (unless way into the extreme tails of the modelled distribution) is likely to mean that the next, new ‘extreme’ event will be of a higher magnitude than the last extreme event.
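A quick numerical illustration of the point (my own sketch with made-up numbers, not the ASC's modelling): if annual flood magnitudes are drawn from a heavy-tailed distribution, the observed 'worst case' keeps being overtaken by new records, so any plan calibrated to the current record will eventually be exceeded.

```python
# A minimal sketch (illustrative only) of why the observed "worst case" is
# routinely exceeded: the running maximum of heavy-tailed draws keeps being
# overtaken by new records.
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical annual flood magnitudes from a heavy-tailed (Pareto) distribution
magnitudes = rng.pareto(a=1.5, size=200) + 1.0

record = 0.0
for year, m in enumerate(magnitudes, start=1):
    if m > record:
        print(f"year {year:3d}: new 'worst case' = {m:.1f}")
        record = m
```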

The locking-in of long-term investment into development in areas already protected by flood defences could be seen as increasing the fragility of development. As ‘new’, higher-magnitude extreme events emerge, these will force the continued investment of funds into protecting the increasingly vulnerable developments behind these flood defences. The focusing of development in these supposedly protected areas increases the potential impact of these new extreme events, as there is increasingly more development capable of being destroyed. The ASC Report states that for every £1 spent on flood defences there is an expected reduction in long-term costs of flood damage of £8. Does this cost-benefit ratio hold once an extreme event breaches the defences? Does the concentration of such value behind a single defensive barrier mean that there is more value to lose in one single event, even if the long-term average seems to be reasonable? Is the magnitude of the losses from a single event so great as to render discussions of averages irrelevant?
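To make the worry concrete, here is a worked sketch (the £1:£8 ratio is the ASC's figure; the defence spend and the value behind the defence are hypothetical numbers of my own) showing how a healthy long-run cost-benefit ratio can coexist with a ruinous single event once value is concentrated behind one barrier.

```python
# A worked sketch (illustrative numbers, not from the ASC report) of how a
# favourable long-run cost-benefit ratio can coexist with a ruinous single
# event once defences concentrate value in one place.
spend = 1_000_000                    # hypothetical £1m spent on flood defences
avoided = 8 * spend                  # ASC figure: £8 avoided per £1 spent
value_behind_defence = 500_000_000   # hypothetical concentrated development

# The expected long-run accounting looks healthy...
print(f"long-run avoided damage: £{avoided:,}")

# ...but a single breach exposes everything at once
breach_loss = value_behind_defence
print(f"loss if one extreme event breaches the defence: £{breach_loss:,}")
print(f"single-event loss vs long-run benefit: {breach_loss / avoided:.0f}x")
```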
It could be argued that focusing development in these protected areas is a responsible approach to the uncertainty of future environmental change, but the question needs to be asked: for whom is it a responsible approach? Who is at hazard from this development, and to whom is the potential fragility of the location effectively transferred? Simplifying the situation dramatically, it could be argued that builders and local authorities have a ready-made infrastructure to use in these areas and can quickly build new housing, so they benefit from such concentration. They also have defences against flooding in place that they can point to as providing protection for these developments (and can improve over time as the cost-benefit analysis dictates). House buyers may have little choice but to buy where new housing is built. The moral hazard becomes theirs. Fragility is transferred from the builder and the authorities to the individual, to the householder. The focus after any event is on the individual and their lack of insurance, their misunderstanding of the risk, their need to recover, the need for the community to rally together. The fragility inherent in the system, transferred to the home-owner, is not even thought about as being a historically constructed thing that involved the builders and the authorities. Is this a similar view to that taken by the banks after the banking crisis? Taleb makes the comment that Roman engineers were often made to sleep under the bridges they built so that they had some ‘skin in the game’ (another favourite Taleb phrase). If the bridge failed, the Roman engineers suffered for it; they had something to lose. Is it an idea we should extend to development and all the actors in it?

Thursday, November 1, 2012

Mapping Hurricane Sandy

I am sure it is not news to a lot of people that the path, destruction and emergency facilities for Hurricane Sandy have been mapped. The BBC provides a map of damage and emergency facilities (http://www.bbc.co.uk/news/world-us-canada-20154479). Likewise, other news organisations provide similar information to readers, but these maps tend to be fairly static in providing a single set of information. Although you can zoom and change scale on the maps, they have a relatively low level of interactivity. These maps also rely on information being provided to the organisation from another source.

Integrating data sources and providing maps of information that is of use to a range of users is another, more difficult task. One of the key players in online mapping has been Google Interactive Maps (http://google.org/crisismap/sandy-2012). The map for Hurricane Sandy is produced under the umbrella organisation of Google Crisis Response (http://www.google.org/crisisresponse/), a Google project that collaborates with NGOs, government agencies and commercial organisations to provide information on things such as storm paths and emergency facilities. Integrating these data sources to provide spatially located information of use to responders, locals affected by the hazard, interested news readers and authorities involves a clear understanding of the requirements of each target audience and clear planning of the nature of information presentation for each of these. It is worth having a look at the interactive maps to assess for yourself whether this complex task has been achieved. In particular, look at the type of information provided about particular resources, the nature of the resources mapped, the scale or level at which this information is displayed, and the part of the audience you believe this information will be of use to (why you think so is the next question).

It is also useful to compare this structured information with the less structured flow of information from Twitter (http://www.guardian.co.uk/news/datablog/2012/oct/31/twitter-sandy-flooding?INTCMP=SRCH). This analysis by Mark Graham, Adham Tamer, Ning Wang and Scott Hale looked at the use of the terms 'flood' and 'flooding' in tweets in relation to Hurricane Sandy (see also the blog at http://www.zerogeography.net/2012/10/data-shadows-of-hurricane.html). Although they were initially assessing whether there was a difference between English-language and Spanish-language tweets about the storm, their analysis pointed out that tweets were not that useful at providing information about the storm at a spatial resolution smaller than a county (although it isn't clear, to me at least, whether they were mapping the location of the tweets or the contents of the tweets; in some cases the latter might provide more detailed spatial information on flooding and its impact, but would require extraction and interpretation from the tweet itself). This lower spatial resolution and the unfiltered, personal, subjective and unco-ordinated nature of this information source mean that it is more difficult to quickly translate into information that is 'useable' by other audiences. Mark Graham is a researcher at the Oxford Internet Institute (http://www.oii.ox.ac.uk/) and his webpage (http://www.oii.ox.ac.uk/people/?id=165) and blog are definitely worth a look for anyone interested in mapping and internet and mobile technologies.
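The basic operation behind such an analysis is simple to sketch. The snippet below is my own illustration in Python, not the authors' code: geotagged tweets (here, invented records of text and county) are kept if they mention flooding and then counted per county, which is exactly the coarse spatial resolution the study found tweets could support.

```python
# A minimal sketch (hypothetical records, not the authors' code) of a keyword
# filter and county-level aggregation of geotagged tweets.
from collections import Counter

# Hypothetical (text, county) pairs standing in for geocoded tweets
tweets = [
    ("flooding on our street again", "Kings"),
    ("subway flooded, staying home", "New York"),
    ("windy but dry here", "Bergen"),
    ("flood water up to the porch", "Kings"),
]

keywords = ("flood", "flooding")
counts = Counter(
    county for text, county in tweets
    if any(k in text.lower() for k in keywords)
)
print(counts)  # Counter({'Kings': 2, 'New York': 1})
```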

 

Monday, October 29, 2012

Using St Paul’s Erosion Data to Predict Future Stone Decay in Central London

In a recent blog I mentioned a research project just completed on a 30-year remeasurement of stone decay on St Paul’s Cathedral in central London. A second paper looks at how this data might be used to model decay into the future (http://www.sciencedirect.com/science/article/pii/S1352231012007145 - you need to have an account to get access to the full paper in Atmospheric Environment). Modelling erosion rates into the future tends to use relationships derived from erosion data for small (50x50x10mm) stone tablets exposed in different environmental conditions. Using such data and regression analysis, a statistical relationship can be derived between stone loss and changing environmental conditions. These relationships are often referred to as dose-response functions.
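In essence, a dose-response function is just a fitted regression of loss against pollutant dose. The sketch below shows the idea in Python with invented tablet data (the SO2 concentrations and loss rates are illustrative, not the paper's measurements):

```python
# A minimal sketch (illustrative data, not the paper's) of deriving a
# dose-response function: regress annual stone surface loss against
# sulphur dioxide concentration for exposed test tablets.
import numpy as np

# Hypothetical tablet data: SO2 concentration (ug/m3) and loss (microns/year)
so2 = np.array([10, 25, 40, 60, 80, 100], dtype=float)
loss = np.array([4.1, 6.0, 8.2, 10.9, 13.5, 16.2])

slope, intercept = np.polyfit(so2, loss, 1)
print(f"loss ~ {intercept:.2f} + {slope:.3f} * SO2  (microns per year)")
```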


Two equations stand out: the Lipfert and the Tidblad et al. dose-response functions. Using these two equations for the decades 1980-2010, they predict erosion rates of 15 and 12 microns per year, as opposed to the measured losses on St Paul’s of 49 and 35 microns per year. The ratio between the measured and the dose-response erosion rates varies from 3.33 in the decade 1980-1990 to 2.75 in the decade 2000-2010, so it is fairly consistent. The difference between the two measures of decay may result from differences in what they are actually measuring. The dose-response functions use small stone tablets, exposed vertically in polluted environments. The weight loss of these tablets is measured and then converted to a loss across the whole surface of the tablet. The micro-erosion meter sites measure the loss of height of a number of points across the same surface on a decadal time scale. Both measures are changes in height, but derived in different ways. What is important is that both methods indicate the same patterns of change in relation to declining sulphur dioxide levels. Both measures of erosion show a decline, and both show it in the same direction and, by and large, in proportion to each other. Interestingly, when the dose-response functions are used to work out erosion on the cathedral since it was built, the long-term erosion rate (as measured by lead fin heights relative to the stone surface) is only 2.5 times greater than that predicted by the dose-response functions. This is a similar ratio, more or less, to those indicated over the last three decades.

The St Paul’s data does not imply that dose-response functions do not work – if anything it confirms the patterns of decay they indicate – but the St Paul’s data does suggest that using these dose-response functions to model decay into the future may require a correction factor, equivalent to the ratio of about 2.5-2.75, to convert the predicted losses to those that will be found on St Paul’s Cathedral.
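As a worked illustration using the figures quoted above (the pairing of predictions to measured rates and the choice of 2.75 within the suggested 2.5-2.75 range are my own, for the sketch only):

```python
# A worked sketch applying a correction factor to the dose-response
# predictions quoted above for St Paul's Cathedral.
predicted = {"Lipfert": 15.0, "Tidblad et al.": 12.0}   # microns per year
measured = (49.0, 35.0)                                  # microns per year

correction = 2.75  # within the 2.5-2.75 range the ratios suggest
for name, rate in predicted.items():
    print(f"{name}: {rate} -> corrected {rate * correction:.1f} microns/year")
# Corrected rates (41.2 and 33.0) sit much closer to the measured 49 and 35.
```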