Last week I promised a response to the UBC Human Security Report. I have finally got through it, and have several responses. Since the response is mostly critical, I want to preface it by saying that I am not dismissing the report out of hand. It represents a very ambitious and original effort to tie together a lot of information that is usually available only in scattered fashion, and also to begin measuring several important things that nobody has measured systematically yet. In that sense, it is a major and meaningful piece of work. This has to be appreciated – along with the fact that hardly anybody gets anything big right the first time out. My criticisms have to do with the categories and methods they use, and also with what seems to be the absence of a framework that would let them interpret both the data they have and the data that are not available.
First, one big reservation: the title of the report would suggest that it is about “human security,” but every previous use I have seen of this term defines it much more broadly. I understand “human security” as an effort to shift the balance of understanding security away from the strategic concerns of governments and toward the everyday concerns of actual humans. The idea is that the security of the world derives in large measure from how secure people feel in their lives, and that secure people make the greatest contribution to peace. Think of Mary Kaldor's big question: would money and effort be better spent building armed borders around the “green zones” of the world, or contributing to the everyday security of all of those people in the “red zones” who are too often the unprotected victims of violence?
The authors argue, essentially, that they are already measuring quite a lot, and that to take into account all of the factors that might make humans feel secure would be impossible to manage (p. viii). Fair enough – but then what they have produced is not a report on “human security.” When they begin repeating hypotheses (p. 42) about how democracies are more peaceful than other states (hypotheses based on a long tradition of deriving definitions from a dependent variable), the narrowed definition begins to matter. Developments that took place later than the period covered by the research, like the wars in Iraq and Afghanistan, call the general hypothesis into question: interethnic and intercommunal violence were promoted, torture was reintroduced on an official level, and these things were done by states the authors would define as neither poor nor authoritarian.
The next big objection has to do with theory and methods. One of the first rules of research is that researchers have to recognise that the information that is available will not always tell them everything that they want to know. Also, information cannot think or talk, so it will not tell you what it means all by itself. This is why theoretical frameworks are necessary: they give guidelines as to how to interpret the things you know, and how to anticipate the things you do not know. No decent researcher has ever given “just the facts.”
The main claim of the report is that, although just about everyone believes otherwise, the world is a safer and more peaceful place than it has been in the last 20 to 25 years. They characterise the changes as a “radical improvement in global security” (p. 3). Surprised? The authors want you to be: they regard their report as a major challenge to the conventional wisdom. A word now about “conventional wisdom”: this is a delightful phrase which is usually deployed to suggest that what most people believe is not true, and it works because so many people regard the possibility of becoming “conventional” with contempt and dread. But there is another side to this – it might be that if a lot of people regard something as being true, that is because it is true. This is why “conventional wisdom” cannot really be challenged with a “conventional” label, but has to be challenged with persuasive evidence.
So what is their persuasive evidence? Their argument basically runs like this: 1) there are fewer wars than there used to be, 2) wars are less deadly than they used to be, and 3) there is a decline in the number of incidents of organized killing outside of war. The favored explanation the report gives for all of these is the end of the Cold War, with secondary attention given to the increased involvement of the UN in conflicts. Let's look at the three major points one by one.
On the number of wars – first of all, it is not entirely clear what is measured by counting the number (p. 17) of wars. It is not a general measure of risk. Second, the decline in the number of wars is a conclusion driven by a definition. They define a war as a conflict between two states in which there are 1000 or more “battle deaths” in a year (fewer battle deaths make a “conflict,” and if only one state is involved it is a “civil war”; nothing is said about how these distinctions are made or whether conflicts between nonmilitary organisations are counted). To give an idea of how misleading the number of wars is as an indicator of security, consider Rwanda in 1994: the number of battle deaths in the conflict between the Rwandan army and the Rwandan Patriotic Front does not pass the threshold of 1000 required to call it a war, while the report's categories have no way of counting the 800,000 or more victims of the most intensive genocide in recorded history. The figures the report interprets as a decline in war are entirely consistent with unsettling facts that are already well known: wars are less likely to be conducted between states than inside them, they are less likely to be conducted by soldiers than by private, semilegal or illegal armed forces (or, in internal conflicts, by police or semiformal “services”), and the doctrine of “force protection” makes “battle death” one of the least likely consequences of war.
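The definitional problem can be made concrete with a minimal sketch. This is my own illustration, not the report's actual coding procedure, and the Rwandan figures below are invented for the purpose: the point is only that a classifier fed nothing but battle deaths cannot see a genocide at all.

```python
# Sketch of a threshold-based coding rule of the kind the report appears to
# use. All names and figures are illustrative, not the report's actual data.

WAR_THRESHOLD = 1000  # battle deaths per year required to count as a "war"

def classify_conflict_year(battle_deaths: int) -> str:
    """Code a conflict-year using battle deaths alone (assumed rule)."""
    return "war" if battle_deaths >= WAR_THRESHOLD else "conflict"

# Rwanda 1994, with hypothetical numbers: battle deaths between the army and
# the RPF fall below the threshold, while genocide victims are not "battle
# deaths" and so never enter the classifier at all.
battle_deaths_1994 = 900         # hypothetical figure, below the threshold
genocide_victims_1994 = 800_000  # invisible to this measure

print(classify_conflict_year(battle_deaths_1994))  # prints "conflict"
```

Whatever the genocide toll, the output never changes, because the toll is not an input: the "decline in war" is built into what the function is allowed to look at.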
So on to their second point, that wars have become less deadly than they used to be. The report demonstrates what was already known: wars have become less deadly – to soldiers. Again, the conclusions trumpet a result which is not a finding but an artifact of methodology. The data demonstrating the effects of technologies that keep armies from coming into contact with one another are already well known (and the authors do not demonstrate even this persuasively, since they choose a data set which consistently offers lower estimates of fatalities than other data sets (p. 30)). Their observation that “non-state conflicts involved considerably fewer fatalities” (p. 21) is true only of “battle deaths.” On the issue of civilian victims, the authors perform a methodological trick. First they note (correctly) that data on civilian victims have not been systematically gathered in the past. They attempt to address this by commissioning a study of civilian victims in 2002 and 2003. Then they base a global claim about the decline of victimhood on figures which show a decline in the number of civilian victims between 2002 and 2003! This would be like concluding, because it rained yesterday and is not raining today, that the long-term incidence of rain is declining. Only longitudinal data can demonstrate that a trend exists; data comparing a small number of cases over an inconsequential stretch of time might demonstrate a blip, or suggest a theme for future research, but cannot possibly support a conclusion.
The section dealing with deaths not directly related to war (and deaths which are not “battle deaths” – which is the largest category!) is probably the single most frustrating section of the report. There are several points where they observe that evidence is either unreliable or not collected systematically, an observation which is both true and useful and which offers a guide to future research.
But there are also several points that demonstrate the changing nature of political violence (the use of child soldiers, violence carried out by proxies rather than by military forces, the deliberate targeting of civilians, the intentional production of displaced populations, the “strategic” use of sexual violence, the production of hunger and disease), but which are not incorporated into the general analysis because their theoretical framework has no place for them. People tend to fear their governments (p. 48), but the authors have no explanation why. So while they know that in contemporary wars armed forces will “avoid major military confrontations but frequently target civilians,” and that powerful states use “high-tech weaponry against far weaker opponents who have few or no allies” (p. 34), they have no way of addressing these developments. Despite evidence that in contemporary wars “battle deaths” constitute somewhere between less than 2% and at most 29% of all war deaths (p. 128), their model insists on using “battle deaths” as a measure of danger. These flaws in the report do not derive from the authors not knowing the facts. It is probably to their credit that the data which would negate their conclusions are right in the report. The flaws derive from their effort to describe human security in the dated and inappropriate terms of Realpolitik.
Add to this that there are several points (the number of victims of political violence, the number of displaced persons, the rate of violent crimes such as homicide and rape, the frequency of sexual violence in wars, the probability of “indirect” victimhood due to war conditions) on which the authors not only admit that there is a lack of reliable evidence, but explain the factors leading to a lack of evidence – and then go on to draw a conclusion! At one point (p. 7), they take underreporting to be evidence of a decline in human rights violations (!). A lack of evidence may be a problem to be observed, but can hardly be a basis for conclusions. The desire to argue that the world is a safer place seems to have been stronger than the support for the argument. To the authors' credit, though, at several points they promise that areas on which they have insufficient information (such as “indirect” war deaths, p. 126) will be a theme of the next report.
Generally, the evidence and conclusions seem to be at odds with one another, and this seems to be the result of trying to apply an old and weak theoretical framework to new and challenging conditions. The end of the Cold War has not meant that conflicts have ended; it means that they have changed. Accounting for those changes is probably the job of a generation rather than something that can be accomplished in a single report. The UBC team deserves a lot of credit for presenting what they know and initiating the dialogue. Better to consider their contribution as the first word rather than the last.