Counterinsurgency is sometimes referred to as armed social work; it could also be called armed social science.

Social scientists help the security forces understand the local population, its needs, interests, values, attitudes and culture. Their insights inform and structure legitimacy-building “hearts and minds” programs, government propaganda and strategic concessions.

Social scientists also map the local population, geographically and socially, identifying kinship structures and other social networks, lines of influence and authority, and points of tension and possible conflict. This information can be used to unearth the insurgent organization and its support network, target leaders for cooptation or elimination, and intervene strategically to reduce or amplify conflict.

For all of these reasons the US military has aggressively recruited social scientists to support counterinsurgency operations in the post-September 11, 2001 war on terror, starting in Iraq and Afghanistan, but predictably expanding to other theaters. These efforts have rightly produced sharp controversy within the disciplines, especially anthropology and geography.

Local police have used social scientists for much the same purposes for decades. Most notably, George Kelling’s team at Rutgers University helped police map New Jersey street gangs, their structures, social supports, economics and territory. They then advised the authorities on deployment, arrest and prosecution strategies, and on coordinating with churches and social services on gang abatement projects. In Salinas, California, the military did this work directly, loaning advisers from the Naval Postgraduate School to the Salinas Police Department. It is interesting that these programs did not generate moral crises.

The controversies in the academy have been fundamentally confused. To understand why, it is helpful to distinguish between three normative categories: general moral claims, politics and, balanced uncomfortably between the first two, professional ethics.

By general moral claims I mean considerations of good and bad, right and wrong, virtue and vice, which are thought to apply to everyone, in all circumstances, all of the time. For the purposes of this discussion it makes no difference whether these claims are Kantian, utilitarian, Aristotelian, Christian or unsystematized. It is not the content that is important, but the type of claim.

By politics I mean ideas about the way society ought to be structured, especially with regard to the distribution of power and resources, as well as attempts to enact those ideas, and simple power struggles.

By professional ethics I mean a set of obligations particular to some occupation, its activities, its institutions and its participants.

Obviously, there must be some relationship between these three spheres. Professional obligations ought not to contradict the demands of justice, for example. And efforts to see those obligations codified and enforced will largely take a political form.

Analytically, however, the distinction is real, and it is well to keep in mind what kind of principles are invoked by normative claims.

For instance, it may be that two Catholics share a moral outlook but disagree about the type of political system that best reflects that outlook. Simply insisting on the moral values will not move the conversation forward, then, since that is where they agree. If one wants to persuade the other of his point, the argument needs to take place at the level of politics—considerations of real-world effects, questions of strategy and assessments of the balance of forces.

Likewise, a pair of accountants may fundamentally disagree about politics and morality—this one a conservative Christian, that one a liberal utilitarian—and yet agree about the ethical standards of their profession. They both know it is wrong to cook the books.

The debate surrounding those social scientists who have participated in counterinsurgency has presented itself chiefly as a question of professional ethics. That is, it has invoked codes particular to disciplines like geography and anthropology to argue that it is unethical for social scientists to assist in the military’s efforts.

That argument found traction, and the American Anthropological Association, for one, formally adopted a position condemning the military’s efforts to recruit anthropologists for its Human Terrain System. Almost no anthropologists enlisted in the Human Terrain Teams, and that program was quietly decommissioned. So score one for the good guys.

Unfortunately, the arguments that produced this minor political victory are largely fallacious, and carry with them some other, bad, implications. The problem is that though “professional ethics” may be the most available arena for the debate, it is the wrong one. This approach suffers from being both too narrow and too broad.

It is too narrow precisely because the arguments are couched in terms of professional obligations. Hence, these concerns only constitute a problem for anthropologists or geographers, but not for all social scientists and not for all people.

That of course raises the question of why these special obligations would attach to those disciplines. One answer is that counterinsurgency applications of social science constitute a type of malpractice, a perversion of the profession destructive of its aims. The anthropologist David Price, for example, has argued that the military tends to flatten social theory and instrumentalize research findings. [1. Price observes, concerning the US Army’s Counterinsurgency Field Manual: “The Counterinsurgency Field Manual’s approach to anthropological theory was not selected because it ‘works’ or is intellectually cohesive: It was selected because it offers an engineering-friendly, false promise of ‘managing’ the complexities of culture as if increased sensitivities, greater knowledge, [and] panoptical legibility could be used in a linear fashion to engineer domination. It fits the military’s structural view of the world.” David Price, Weaponizing Anthropology: Social Science in Service of the Militarized State (Petrolia, CA: Counterpunch, 2011), p. 190.] That may well be true. But the argument is vulnerable to the objection that what is needed, then, is better anthropology and a smarter military—exactly what is advocated by Sarah Sewell, Dan Cox and other proponents of engagement.

The malpractice objection operates by analogy: Medical ethics are largely a protection against bad medicine. The error, however, is that military anthropology is not necessarily bad anthropology, though it may be. And, in any case, that is not the real impetus for the objections. It is a bit like trying to argue that it is bad for physicists to build bombs because it is bad physics, rather than because bombs are bad. Or that the problem with mathematicians working on NSA decryption programs is that it leads to bad math. Maybe it does, maybe it doesn’t—but isn’t that beside the point?

(As an aside, some military theorists describe counterinsurgency as “military malpractice,” with the implication that it is unethical and unprofessional for soldiers to participate in it. The problem, as they see it, is that their job is to kill people and if they spend a lot of time digging wells and drinking tea with the locals, their job becomes harder to do. That points to the problems with “malpractice” analogies and some of the weaknesses of the professional ethics framework.)

Another argument against military anthropology is that it makes all anthropology more difficult and dangerous by association, and therefore harms the discipline. This argument is somewhat analogous to the ethical demand that reporters protect confidential sources. It is only partly an obligation to the source; it is also an obligation to others in the profession, so that their sources will trust that similar commitments will be honored.

A third reason social scientists may be under special obligations is that they are exposed to greater moral risk. Anthropology and geography have particularly troubling histories of military involvement—including espionage during World War I, advising on Japanese internment during World War II, participation in civil engagement programs that were used to create assassination lists in Vietnam, and so on. To avoid similar missteps in the future, it makes sense to institutionalize additional safeguards.

These are all important considerations, whether or not they are decisive. And similar reasoning led the American Psychological Association (APA) to bar psychologists from participating in military interrogations. It was the right decision; however, it is disanalogous to the social science debates. An important aspect of psychology is what has been described as its “Hippocratic” character, which is part of what makes it, like medicine, not merely a science but a modality of care. Anthropology, sociology and geography simply do not have that same orientation. Moreover, by their very nature they cannot, and efforts to reshape the disciplines on the model of medicine could only result in an unsavory paternalism.

All of which brings us back to the problem of narrowness. The standards invoked in these arguments are specific to their particular disciplines. They do not travel, even within the social sciences, which makes one wonder how much force they can really have. If one can shed one’s professional obligations, or dodge an ethical prohibition, simply by moving down the hall, from the anthropology department to, say, criminology, police science or security studies, then surely one can do the same thing by transferring allegiance from one institution to another—for example, by leaving the university and joining the Marines.

Unfortunately, professional ethics can only go part of the way—and not the first part. The disanalogy with psychologists and torture is illustrative. The APA prohibition on certain kinds of interrogation comes in a context of a general prohibition on torture and acknowledges an additional responsibility for psychologists. That is, it is wrong for anyone to use torture, but it is especially wrong (for reasons particular to the profession) for psychologists to do so. The new restrictions apply specifically in contexts where other safeguards, like judicial review, are lacking.

The difference is that there is no generally recognized prohibition on collaborating with the US military. A case could be made, of course, based on the nature of the activity (e.g., atrocities), the nature of the conflict (e.g., wars of aggression) or the nature of the institution (whether on pacifist or anti-imperialist grounds). And it is just these concerns that animate the effort to break the ties between the academy and the military. But by making the issue one of professional ethics, the activist academics have bypassed these moral and political questions, which are logically prior.

The second problem, the one of broadness, is of a different nature. It concerns the formulation of the ethical demands themselves. In general, the standards have been articulated in terms of transparency, informed consent and not harming the interests of those one studies.

But this standard, too, does not travel. The field of criminology, for example, simply would not exist if it adopted that standard. Or, say to an economist, “first, do no harm,” and he will stare blankly for a long minute. Once he grasps the intended meaning, he will likely point out that there are winners and losers with every economic policy—or, if he is a Marxist, that there are competing, conflicting, irreconcilable class interests.

The problem of over-broadness is that the ethics being cited take a safeguard developed to protect particularly vulnerable people and generalize it to a principle of the discipline.

That puts social scientists who study powerful groups—the police, the military, stock traders—in something of an awkward position. Professional obligations may require that they protect these groups and their interests, while moral considerations and political commitments might require the opposite.

The mistake, again, is the attempt to address the issue without directly confronting the politics involved. Worse, by implication, it will tend to depoliticize the research. The demand that social science be harmless, when inherent conflicts of interest are at play, amounts to a demand that it be irrelevant. It provides the formal appearance of neutrality, but will tend to preserve the status quo. That is, if social scientists harm no one, their work will implicitly support the powerful.

The implications are twofold.

First, we social scientists should take the fight off campus. Rather than pursue these controversies intramurally, as debates within disciplines, we should treat them as the broad, crucial social questions that they are. There is more at stake than the soul of sociology. And in any case, the best way to guard the integrity of the disciplines is likely to take on the bigger questions first. We should use expertise and resources to discredit the military, condemn war and unmask imperialism—then to pursue a professional standard of non-collaboration.

Second—or, in fact, simultaneously—we should understand the danger that we pose, rather than making ourselves harmless and then imagining that we must be good because we have no claws. [2. Friedrich Nietzsche, Thus Spoke Zarathustra (trans. Walter Kaufmann) (New York: Penguin Books, 1978), p. 118.] Do not resist the “weaponization” of the disciplines; treat knowledge as a weapon and use it to fight for justice and equality. Use it to arm marginalized and powerless groups and aid them in their struggles. Rather than disarming the social sciences, we should use our arms responsibly. As with any weapon, the first rule of research should be “be careful where you point that thing.” [3. Geoffrey Boyce and Conor Cash, “Geography, Counterinsurgency and the ‘G-Bomb’: The Case of México Indígena,” in Kristian Williams, et al., eds., Life During Wartime: Resisting Counterinsurgency (Oakland, CA: AK Press, 2013), p. 254.] We should consider what we are aiming at and what effect our work is likely to have. But we should aim at having some effect. Knowledge is not neutral, and our ethics should not demand that it be. Responsibility means taking sides. As important, it means taking risks.

How to cite this article:

Kristian Williams, “Armed Social Science,” Middle East Report 279 (Summer 2016).

