The recession has led to 10,000 excess suicides

Suicide appears to be the ultimate individual act and yet, as Durkheim suggested, it can be driven by forces at the societal level.  Now research by Aaron Reeves, a post-doctoral researcher and adjunct lecturer in the Department of Sociology at the University of Oxford, demonstrates that the recession has led to 10,000 ‘excess’ suicides across the US and Europe.

Antidepressants appear to provide little protection. However, countries with government policies that support people to stay in work, or that offer the hope of another job or a better future, do have lower suicide rates.

Listen here to Aaron Reeves talking to Raj Persaud about this important research.

Featured photo by Sergey.

All at Sea: Soldiers and Slackers in the Writing of Geoff Dyer

This podcast is part of our Geoff Dyer series – a series of recordings from a conference dedicated to Dyer’s work held at Birkbeck, University of London. It features Dr Bianca Leggett, Teaching Fellow in British Studies at Harlaxton College, and is presented by Jo Barratt.

This year marks 25 years since the publication of Geoff Dyer’s first novel, The Colour of Memory. Geoff is a multi-award-winning writer who has written four novels and is also known for his essays. He has been described by the New York Times as ‘one of our greatest living critics’.

The Colour of Memory series was recorded for Pod Academy at Birkbeck, University of London, at a conference dedicated to Dyer’s work.

This podcast is a talk given by Bianca Leggett of Harlaxton College, University of Evansville, on Dyer’s latest book, Another Great Day at Sea: Life Aboard the USS George H.W. Bush.

Photo by Chris Boland: www.chrisboland.com

Click here for the other podcasts in the series

What colour was the 1990s?

Counting Backwards: a quarter-century of The Colour of Memory

What colour was the 1990s?

This podcast is part of our Geoff Dyer series – a series of recordings from a conference dedicated to Dyer’s work held at Birkbeck, University of London. It features Dr Morgan Daniels of Queen Mary College, University of London, and is presented by Jo Barratt.

This year marks 25 years since the publication of Geoff Dyer’s first novel, The Colour of Memory. Geoff is a multi-award-winning writer who has written four novels and is also known for his essays. He has been described by the New York Times as ‘one of our greatest living critics’.

The Colour of Memory series was recorded for Pod Academy at Birkbeck, University of London, at a conference dedicated to Dyer’s work.

In this podcast, Morgan Daniels steps slightly away from directly discussing the author’s work to consider the fascinating proposition: ‘What colour was the 1990s?’

Photo by Chris Boland: www.chrisboland.com

Click below for the other podcasts in the series:

All at Sea: Soldiers and Slackers in the Writing of Geoff Dyer

Counting Backwards: a quarter-century of The Colour of Memory

Counting Backwards: a quarter-century of The Colour of Memory

This podcast is part of our Geoff Dyer series – a series of recordings from a conference dedicated to Dyer’s work held at Birkbeck, University of London. It features Dr Joe Brooker, Reader in Modern Literature at Birkbeck, and is presented by Jo Barratt.

This year marks 25 years since the publication of Geoff Dyer’s first novel, The Colour of Memory. Geoff is a multi-award-winning writer who has written four novels and is also known for his essays. He has been described by the New York Times as ‘one of our greatest living critics’.

The Colour of Memory series was recorded for Pod Academy at Birkbeck, University of London, at a conference dedicated to Dyer’s work.

In this podcast, Joe Brooker of Birkbeck, University of London, looks back at The Colour of Memory.

Photo by Chris Boland: www.chrisboland.com

Click below for the other podcasts in the series, produced and presented by Jo Barratt.

What colour was the 1990s?

All at Sea: Soldiers and Slackers in the Writing of Geoff Dyer

Photo: @AlexJohnWill

Big Brother is watching us…

UK citizens are under unprecedented, and increasing, levels of surveillance. The government has just rushed through the Data Retention and Investigatory Powers Act 2014 (DRIP) in less than three days. Should we be concerned? Definitely yes, says Marianne Franklin, Professor of Global Media and Politics at Goldsmiths, University of London.
This post first appeared on The Conversation.

The UK is one of the most CCTV-saturated countries in the world. Being watched and monitored is an everyday reality on British streets, allegedly increasing from one camera for every 14 people in 2008 to one for every 11 people in 2013.

In other parts of the world, the spread of CCTV cameras and the data they collect is a matter of intense public debate. Just look at Germany, where services such as Google Street View are under serious scrutiny. But in the UK, the march of electronic surveillance is greeted as the obvious solution to crime – despite plenty of evidence to the contrary.

That is in real life, on the ground. But what about online?

You’re always being watched, everywhere

Just before the summer break, the Coalition government (with the tacit support of the Labour Party) pushed through the Data Retention and Investigatory Powers Act 2014 (DRIP) in less than three days.

What this new legislation effectively does is extend to the online environment the already highly questionable levels of surveillance that we have become inured to in public. The difference is that while the “data” collected are not televisual images but “communications data”, they can nonetheless tell a snooper a lot about us – where we are at any point in time, who we contact and where our contacts are.


DRIP does this by legalising what critics of this bill have called “a degree of surveillance of a person of interest that totalitarian regimes, infamous for the extent and depth of their surveillance, could only have dreamt of”.

False sense of emergency

The reason behind this outcry is the powers being granted to public authorities to access, or gain access to, our communications data at home and to require off-shore service providers to hand over this information. That is worrying enough for national and international watchdogs.

The outcry was also stirred by the way the Bill was rushed through parliament just before the summer recess – under the argument that its passage was a matter of emergency – then overshadowed by coverage of the cabinet reshuffle, which fully engulfed the day’s news cycle.

But the emergency Cameron and Clegg spoke of wasn’t a cabal of suspected terrorists, or goofy Twitter users “plotting” online and being mistaken for the real thing.

No, the “emergency” was the need to respond to a ruling by the European Court of Justice that criticised precisely the disproportionate levels of mass online surveillance that the DRIP law allows. It pointed out that such a degree of interception and snooping violates Articles 7 and 8 of the EU’s Charter of Fundamental Rights.

Of course, there is no reason to expect the British government to listen to the European Court of Justice – or, for that matter, to the international community. After all, this government has already made clear its position on European Union membership and the ECHR, continuing to flex its diplomatic muscles by insisting on doing it “our way”.

There is no more telling example than the government’s refusal to take full responsibility for the British intelligence service’s active participation in the NSA online surveillance programs.

Blatant abuse

The British government is complicit in the undermining of our fundamental freedoms and human rights online. It has accordingly borne the brunt of criticism from high-level officials, such as the UN High Commissioner for Human Rights, Navi Pillay.

Undeterred, the UK media and prominent politicians (bar notable exceptions) have justified the data surveillance ambitions of the British intelligence establishment and those of other US allies under the Five Eyes program with reference to that old chestnut: national security.

DRIP, a “thoroughly confusing piece of law, highly dangerous to privacy and a blatant abuse of democratic process”, as the founder of Privacy International, Simon Davies, put it, has effectively confirmed that the PRISM affair was hardly an anomaly.

Whose security?

The use of the national security argument as an excuse for riding roughshod over fundamental freedoms enshrined in law underscores that the British political establishment, which voted for this law, has lost its moral compass.

The lack of public debate in the UK also underscores that many politicians, like most of us, are just not adequately clued up about how our digital imaginations do leave traces, and that these traces deserve respect and due process under the law.

The passing of DRIP is a cynical misuse of the democratic process that has implications for all of us in our online private lives. It is a piece of legislation that undermines bona fide efforts from intergovernmental organisations and civil society networks to stop the steady, and now rapid, erosion of our rights online.

Whatever the justification, in a world where more and more of what we do, how we think and interact, and where we live our lives is happening online, or at the intersection of the online and offline, DRIP basically provides the government with carte blanche to access our personal communications data without due cause, due process, or adequate protection of our fundamental rights.

What concerns me right now is that outside the Twittersphere and blogosphere, there is a lack of sustained public debate in the mainstream media about this legislation and its precursor last year, the Communications Data Bill or “snoopers’ charter”.

This debate needs to be had, and in public. This is not the brave new world I want to live in. The data collection, retention, and surveillance possibilities offered by information and communication technologies should not give any state authority, or private service provider for that matter, the right to do with our data as it sees fit.

We, ordinary internet users of UK and of the world, need to unite against this misguided piece of legislation – and the brazen misuse of the democratic process that allowed it in.

Monsoon rains down by 37%

With each monsoon season India waits with bated breath for forecasts from the India Meteorological Department and other international forecasting agencies. This year’s forecast suggested a weakened monsoon, and sure enough for five weeks the monsoon has failed to provide the deluge that is expected.

This post by Andrew Turner, Lecturer in Monsoon Systems at the University of Reading, first appeared on The Conversation website on 21 July 2014.

For India, the monsoon rains typically last from June to September and contribute a whopping 80% of the annual rainfall total. Indian society is therefore finely tuned to the monsoon for its agriculture, industry and water supply for drinking and sanitation. If spread evenly over the whole country, the total rainfall during summer amounts to around 850mm. This year has seen a substantial deficit so far, currently standing at about 37% below normal and close to the large deficit experienced in 2009, which was, like 2002 before it, a year of substantial drought, bringing reduced crop yields and hitting the country’s whole economy.
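
To get a feel for the scale of that shortfall, here is a rough back-of-the-envelope calculation (mine, using only the figures above, and not a forecast): if the summer total of around 850mm makes up 80% of the annual total, the annual average is roughly $850 / 0.8 \approx 1{,}060$mm; and if the current 37% deficit were to persist for the whole season, the summer total would fall to about

\[
850\,\mathrm{mm} \times (1 - 0.37) \approx 536\,\mathrm{mm},
\]

a figure in the same territory as the drought years of 2002 and 2009 mentioned above.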

Now, in mid-July, the forecast looks set to improve. The monsoon’s advance northwards across the country has been particularly slow, leading to a lack of water for agriculture and prolonged heatwave conditions – in Delhi a week or so ago I experienced temperatures near 40°C due to the absence of rain. In some regions the lack of rain has forced farmers to plant alternative crops that require less water, and authorities have diverted irrigation supplies to drinking water, exacerbating farmers’ problems.

Anatomy of the monsoon

The monsoons are the biggest manifestation of the effects of the annual seasonal cycle on the planet’s weather. During spring and summer, the difference between the rapid warming of the land surface and the slower warming of the nearby ocean generates a tropospheric temperature gradient – a strong gradient in air temperature from north to south of the equator, seen in South Asia most strongly over northern India and the Tibetan Plateau. This temperature gradient stretches far up into the atmosphere, forming a difference in pressure that runs from high pressure over the southern Indian Ocean to low pressure over India. The result of this pressure gradient is the seasonal winds we know as the monsoon, which carry moisture to supply the monsoon rains across Asia.
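
For readers who want the textbook relation behind this mechanism (a standard result of atmospheric dynamics, not a formula from the article itself): air is accelerated from high towards low pressure in proportion to the horizontal pressure gradient,

\[
\frac{D\mathbf{u}}{Dt} \approx -\frac{1}{\rho}\,\nabla_h p,
\]

where $\mathbf{u}$ is the horizontal wind, $\rho$ the air density and $p$ the pressure. The summer low over India and the high over the southern Indian Ocean therefore drive the moist seasonal flow towards the subcontinent (in reality the winds are also shaped by the Coriolis force and surface friction).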

The onset of the monsoon rains typically comes at the beginning of June, with the weather front stretching from the southwest Indian state of Kerala across the ocean to cover the states in the far northeast of India. For Indian society, and especially farmers, knowing about any variation in the intensity and duration of the monsoon and when it will start is vital. The progression of the monsoon across the country normally takes around six weeks, reaching the border of India and Pakistan by around mid-July. In September, the monsoon withdraws in the opposite direction, and as a result northwest regions experience a much shorter monsoon season and consequently greater pressure on water resources.

Change is coming

So why has this been happening? While a full study won’t be carried out until after the season, the deficit is likely to be related to El Niño – a warming of the central-to-east Pacific Ocean along the equator that happens every few years, changing seasonal weather patterns in many parts of the world, particularly around the Indian and Pacific Ocean regions.

For India, El Niño is generally associated with monsoon drought. The remote interaction with the monsoon (known as teleconnection) is caused by a disruption to the normal trade winds in the Pacific and Indian Oceans, known as the Walker Circulation after Sir Gilbert Walker, a British meteorologist in India who sought to predict when the monsoon would fail.

During El Niño, rising air and enhanced rainfall sit over the warm ocean surface much further east than usual, rather than over Indonesia. But what goes up must come down, and these shifts in the circulation lead to descending air over India, which reduces the strength of the monsoon. Research has also established that El Niño can delay the monsoon’s onset, shortening the duration of rains over India.

A major concern is that the monsoon will be changed by global warming. However, all the indications from our climate models are that the Indian monsoon will continue to supply the region with strong seasonal rainfall. In fact, most suggest that greater concentrations of atmospheric carbon dioxide will bring more, rather than less, rain. So far, so good – but the monsoon’s rains are not a statistical average spread equally across each day and each location. Model simulations also suggest that tropical rainfall will tend to be heavier when it occurs, with potentially longer dry periods between rain events. Both of these factors have important implications for water resources, as well as for crop damage and increased flooding.

With El Niño conditions forecast to grow in the Pacific throughout the rest of 2014, the full impact on this summer’s monsoon will depend on whether the forecast comes true and on where exactly El Niño occurs. What we can’t yet say with any certainty is how El Niño’s link to and effect on the monsoon will change under warmer future climate conditions – we only know that greater extremes of variability are likely, and a more variable monsoon may be a problem.

Photo by Ragesh Ev: Monsoon ride… a family enjoying a journey along a flooded road, shot in the small village of Thazhathangadi near Kottayam (CC BY-NC-SA 2.0)

Pre-school play – the longer the better for children’s development

When are children “ready” for school? There is much debate about when the transition between play-based pre-school and the start of “formal” schooling should begin. The trend in the UK primary school curriculum over recent decades has been towards an earlier start to formal instruction, and an erosion of learning through play.

But the evidence from international comparisons and psychological research of young children’s development all points to the advantages of a later start to formal instruction, particularly in relation to literacy.

This post by David Whitebread, Senior Lecturer in Psychology & Education at the University of Cambridge, first appeared on The Conversation website.

Among the earliest in Europe

Children in England are admitted into reception classes in primary schools at age four; in many cases, if their birthdays are in the summer months, when they have only just turned four. This is in stark contrast to the vast majority of other European countries, many of which currently enjoy higher levels of educational achievement. In Europe, the most common school starting age is six, and even seven in some cases such as Finland.

Chart: school starting ages across Europe. Source: European Commission, EURYDICE and EUROSTAT, 2013. *Although education is not compulsory until six in Ireland, approx. 40% of four-year-olds and almost all five-year-olds are in publicly-funded primary schools.

From the moment children in England enter the reception class, the pressure is on for them to learn to read, write and do formal written maths. In many schools, children are identified as “behind” with reading before they would even have started school in many other countries. Now the government is introducing tests for four-year-olds soon after starting school.

There is no research evidence to support claims from government that “earlier is better”. By contrast, a considerable body of evidence clearly indicates the crucial importance of play in young children’s development, the value of an extended period of playful learning before the start of formal schooling, and the damaging consequences of starting the formal learning of literacy and numeracy too young.

Importance of play

A range of anthropological studies of children’s play in hunter-gatherer societies, and other evolutionary psychology studies of play in the young of mammals, have identified play as an adaptation which evolved in early human social groups, enabling humans to become powerful learners and problem-solvers.

Some neuroscientists’ research has supported this view of play as a central mechanism in learning. One book by Sergio and Vivien Pellis reviewed many other studies to show that playful activity leads to synaptic growth, particularly in the frontal cortex – the part of the brain responsible for all the uniquely human, higher mental functions.

A range of experimental psychology studies, including my own work, have consistently demonstrated the superior learning and motivation arising from playful as opposed to instructional approaches to learning in children.

There are two crucial processes which underpin this relationship. First, playful activity has been shown to support children’s early development of representational skills, which is fundamental to language use. One 2006 study by US academics James Christie and Kathleen Roskos reviewed evidence that a playful approach to language learning offers the most powerful support for the early development of phonological and literacy skills.

Second, through all kinds of physical, social and constructional play, such as building with blocks or making models with household junk, children develop their skills of intellectual and emotional “self-regulation”. This helps them develop awareness of their own mental processes – skills that have been clearly demonstrated to be the key predictors of educational achievement and a range of other positive life outcomes.

Longer-term impacts

Within educational research, a number of longitudinal studies have provided evidence of long-term outcomes of play-based learning. A 2002 US study by Rebecca Marcon, for example, demonstrated that by the end of their sixth year in school, children whose pre-school model had been academically-directed achieved significantly lower marks in comparison to children who had attended child-initiated, play-based pre-school programmes.

A number of other studies have specifically addressed the issue of the length of pre-school play-based experience and the age at which children begin to be formally taught the skills of literacy and numeracy. In a 2004 longitudinal study of 3,000 children funded by the education department itself, Oxford’s Kathy Sylva and colleagues showed that an extended period of high-quality, play-based pre-school education made a significant difference to academic learning and well-being through the primary school years. They found a particular advantage for children from disadvantaged backgrounds.

Studies in New Zealand comparing children who began formal literacy instruction at age five or age seven have shown that by the age of 11 there was no difference in reading ability level between the two groups. But the children who started at five developed less positive attitudes to reading, and showed poorer text comprehension than those children who had started later.

This evidence, directly addressing the consequences of the introduction of early formal schooling, combined with the evidence on the positive impact of extended playful experiences, raises important questions about the current direction of travel of early childhood education policy in England.

There is an equally substantial body of evidence concerning the worrying increase in stress and mental health problems among children in England and other countries where early childhood education is being increasingly formalised. It suggests there are strong links between these problems and a loss of playful experiences and increased achievement pressures. In the interests of children’s educational achievements and their emotional well-being, the UK government should take this evidence seriously.

Photo of child playing by theodoritsis

Budget airlines take on the transatlantic route

The north Atlantic is one of the most lucrative and highly competitive airline markets in the world. Since the late 1940s numerous airlines have attempted, with varying degrees of success, to operate profitable commercial services on routes between Europe and North America.

A number of carriers have sought, unsuccessfully, to operate these services on a low-cost or “no-frills” basis. The latest to attempt such transatlantic services is Norwegian, which launched routes earlier this month from London Gatwick to New York, Los Angeles and Fort Lauderdale in Florida.

Can they make a go of it where others, like Freddie Laker, have failed?  Are the circumstances now different enough to make budget flights across the Atlantic a realistic proposition for the budget airlines?

This post, by Lucy Budd and Professor Stephen Ison of Loughborough University, first appeared on The Conversation on 12 July 2014.

The low-cost model

As most people will be aware, low-cost airlines like Ryanair and easyJet differ from full-service operators by minimising costs and only providing what is necessary for a safe and efficient flight. This includes flying a single type of aircraft to cut the costs of purchasing, maintenance, training and operations; offering a single economy class cabin; and flying frequent short-haul services, often between cheaper and less congested secondary or regional airports.

Low-cost airlines perform fast turnarounds (often under 25 minutes), carry high passenger loads and maximise the amount of time their aircraft spend in the air. They also focus on generating ancillary revenue by charging for items such as food and drink, hold baggage and priority boarding.

But can the model be extended to long-haul? Norwegian was established in 1993 and is now the third-largest low-cost operator in Europe (behind Ryanair and easyJet), carrying more than 20 million passengers a year. In May 2013, after receiving its first fuel-efficient, long-haul B787-8 Dreamliner aircraft, it began long-haul services from Oslo to New York and Bangkok. Flights from Copenhagen, Oslo and Stockholm to Fort Lauderdale followed, as did a summer-only service between Bergen and New York in May.

Then came London. With one-way fares to Los Angeles, New York and Fort Lauderdale priced from £199, £149 and £179 respectively, the new routes were heralded for offering increased competition, improving consumer choice and providing more affordable flights across the north Atlantic.

Previous transatlantic ambitions

It is worth recalling those who have been here before. In September 1977 Freddie Laker’s Skytrain began flying between Gatwick and New York for £59 one-way. It was forced to cease operations just five years later due to aggressive pricing by the established airlines.

In the late 1970s Texas-based Braniff sought to operate cheap flights between the US and Gatwick, while in 1983 US low-cost operator PEOPLExpress commenced transatlantic services between New York and London. In both cases, a lack of revenue management systems combined with rising fuel costs and tactical pricing by the incumbent full-service operators on both sides of the Atlantic meant they too ultimately failed.

More recent attempts by charter operators Zoom and FlyGlobespan were unsuccessful in the 2000s. They both attempted to adopt elements of the low-cost model on transatlantic services but were thwarted by rising fuel costs, competition and softening passenger demand.

So what has changed?

There are a number of factors which could make long-haul low-cost aviation sustainable nowadays. Aircraft such as the B787-8 are more fuel efficient than their predecessors and the internet significantly reduces distribution costs.

All the same, major hurdles remain. Longer flight times mean aircraft can only perform two flights a day as opposed to six, while crew have to spend nights away from home, which adds significantly to costs. Greater volumes of hold baggage increase turnaround times and aircraft may need to depart at antisocial hours of the morning since they are operating across multiple time zones, which can complicate scheduling. This may not be possible at airports with strict night-noise curfews.

Added to these are the usual challenges for airline businesses: environmental concerns and volatile oil prices, new security threats and the shifting balance of economic power towards the Middle East, India and China.

While there are many examples of short-haul low-cost operations around the world, there are relatively few long-haul equivalents. The only other example of a transatlantic low-cost service is the daily summer-only Toronto-St John’s-Dublin flight that was inaugurated in June by the Canadian carrier WestJet.

Interestingly, Norwegian has in common with WestJet an extensive network of short and medium-haul services that it can use to feed its transatlantic operation. It also appears to be “trading up” the transatlantic operation from the traditional no-frills approach towards a more conventional charter offer, since its B787-8s are configured with 32 premium economy and 259 standard economy class seats.

It is too early to say whether this will be enough to enable Norwegian to succeed. The long-term success of budget long-haul services will depend not just on attractive prices but on an airline’s ability to price discriminate, effectively manage its revenues and yields, and develop and maintain customer loyalty in unpredictable economic and political environments. If Norwegian overcomes all of the obstacles that have thwarted past attempts, it will be a major milestone in world aviation.

Close up, Barack Obama’s counter-terrorism looks a lot like George W Bush’s

In mid-2013, Barack Obama called for a winding down of the remnants of the “Forever War”. But even while making these calls, the US has maintained that terrorism poses a “continuing and imminent threat” – depriving the notion of imminence of any meaning, says Luca Trenta, teaching associate in the University of Nottingham’s Politics and International Relations Department.

This post first appeared on The Conversation website on 4 July 2014.


With the world focused on ISIS and Iraq, last month US Special Forces carried out a capture operation in Libya against Ahmed Abu Khattala, the suspected ringleader of the 2012 attacks in Benghazi. The US ambassador to the United Nations, Samantha Power, justified the raid as an action based on America’s “inherent right to self-defence” which was aimed at preventing armed attacks.

Power’s letter relies on a confusing mix of justifications, invoking both a state of “armed conflict” and the need to prevent future attacks. Significantly, the letter suggests that the Obama administration has maintained the notion of “continuing and imminent threat” that has driven the US counter-terrorism effort since Obama’s first term.

This deceptively simple notion implies that, given that the threat is always “imminent”, it is up to the decision-maker to decide when and if it is “imminent” enough. So the notion of imminence is transformed from something that has a meaning in terms of timing – where imminent means “immediate” – to something that depends on a decision-maker’s assessment and priorities; that is, a policy option.

Imminence and pre-emption

This transformation did not occur in a vacuum. Back in 2002, in the now-famous National Security Strategy, the Bush administration explicitly called for a redefinition of the temporal parameters of imminence. The shadowy nature of the threat posed by terrorists (and rogue states) required states to act pre-emptively. The strategy has correctly been interpreted as one of the key steps in preparing the ground for the 2003 war against Iraq.

More problematically, the Bush White House did not provide any clear framework for how imminence should be reinterpreted. The text of the strategy often confused “pre-emption” with “prevention”, and the brash rhetoric of the Bush years did not help in setting clear standards.

After his exit from the Bush administration, John Yoo – a former senior staffer in the attorney-general’s office – developed the administration’s approach, pushing the message that the US should approach imminence as a more permissive “decisional standard” (one which allows the decision-maker to make the judgement). In this standard, three criteria play a key role: the probability of an attack, the window of opportunity available to the decision-maker, and the magnitude of the possible harm if nothing were done.

As a general approach, Yoo drew parallels with the problem of women who suffered domestic violence in the US: “Rather than temporal imminence, the battered woman’s defence seeks to use past conduct – particularly escalating violence – to assess the probability that future harm is likely to occur.”

Obama – a new era?

The Obama administration came to office with the aim of reversing many of the Bush administration’s positions (and indeed overturned some of Yoo’s positions on the use of force in interrogation). But on the concept of imminence, Obama’s White House has demonstrated an understanding similar to (if not broader than) that of his predecessor.

In 2011, John Brennan, at the time White House counter-terrorism adviser, suggested that both the United States and its allies were moving towards a more flexible notion of “imminence”. In a throwback to the Bush doctrine, Brennan suggested that the reason for this shift was the shadowy and continuing nature of the terrorist threat and its capabilities.

The following year, the attorney-general, Eric Holder, confirmed that imminence incorporated issues such as windows of opportunity, possible future harm and possible future disasters. Within the Obama administration, as Daniel Klaidman has reported, the state department’s chief legal adviser Harold Koh has argued that terrorism poses a “continuing and imminent” threat that requires an “elongated” notion of imminence, similar to the one adopted in domestic cases involving “battered wives”. The parallel with Yoo’s approach could not be clearer.

Known unknowns abound

In January 2013, a leaked Department of Justice white paper confirmed that the administration had moved from “flexibility” to an extremely “decisional” interpretation of imminence.

The paper makes several points, but three are key here. First, targeting an operational leader is lawful when “an informed, high-level official of the US government has determined that the targeted individual poses an imminent threat”. It would be difficult to find a clearer statement that imminence now depends on an official’s determination. Second, this determination includes the existence of a “window of opportunity”, the possibility of reducing collateral damage, and the chance to head off future disaster. Third, in a Rumsfeldian turn, the paper states that the US government is allowed to strike when it has the opportunity, since it “may not be aware of all al-Qaida’s plots as they are developing and thus cannot be confident that none is about to occur”. Or, to paraphrase: the absence of evidence is not evidence of absence.

To be sure, the administration has relied on other legal manoeuvres, including a broad interpretation of the 2001 Authorisation for Use of Military Force (AUMF), to conduct its kill-or-capture operations. Still, as the secret memo released on May 24 makes clear, the expansion beyond recognition of the notion of imminence has permitted some of the more controversial operations, including the killing of US-born cleric Anwar al-Awlaki. That killing, in particular, makes clear how imminence no longer means immediate: more than a year passed between the July 2010 determination that al-Awlaki posed an imminent threat – and hence could be targeted – and the actual killing in September 2011.

In mid-2013, Barack Obama called for a winding down of the remnants of the “Forever War”. But even while making these calls, the US has maintained that terrorism poses a “continuing and imminent threat” – depriving the notion of imminence of any meaning. With the drums of war beating again around Iraq, and with continuous special forces operations, the Forever War seems destined to live up to its name.

The picture is of the Global War on Terrorism Medal