STS Observatory

1/2 idea No. 16: Minimal intention historiography – Historiographical Experiment #2

By Jon Agar, on 30 July 2021

(I am sharing my possible research ideas, see my tweet here. Most of them remain only 1/2 or 1/4 ideas, so if any of them seem particularly promising or interesting let me know @jon_agar or jonathan.agar@ucl.ac.uk!)

History’s primary sources are, nearly always, created for intended purposes. Usually the topic of the historian is aligned to the intention of the author of the primary sources, even when critically interpreted. For example, when I wrote about Thatcher’s science policy I would pay close attention to what she wrote about science in her autobiography even as I critically analysed how she wrote about it and what she might have left out.

Some fields of history, primarily ones seeking to recover unprivileged voices, read against the grain. They take the primary sources written by and for the powerful and do the hard work of bringing to light the experience of the oppressed, marginalised, unpowerful.

What would a history be like that deliberately and systematically set out to minimise the influence of the intentions of the authors or makers of primary sources? History written only from the parts of primary sources that are shaped as little as possible by the authorial intention of the person wielding the pen or camera?

For example, imagine a collection of photographs of a city spanning a century. Most photographs have an intended subject. But they also have detail captured, accidentally, as background. If this collection were the primary source corpus for a historical study, how would the history of the city be different if only the accidental background evidence was used rather than the intended subjects? Would new subjects be recovered?

As you can see from this 1/2 idea, and others, I am intrigued by historiographical experiment. In this case an artificial constraint is imposed on historical method, and the result is compared to history written without that constraint. If it doesn’t reveal anything of interest then it can be considered a formal game. If it does – say in the unlikely event that a new historical subject, or even just an unexpected rearrangement of the usual hierarchies of historical subjects, emerges – then there is revelation.

 

1/2 idea No. 15: Traces on the Martian shore

By Jon Agar, on 30 July 2021

(I am sharing my possible research ideas, see my tweet here. Most of them remain only 1/2 or 1/4 ideas, so if any of them seem particularly promising or interesting let me know @jon_agar or jonathan.agar@ucl.ac.uk!)

1/2 idea No. 14: Extend IUCN to ecosystems

By Jon Agar, on 28 July 2021

(I am sharing my possible research ideas, see my tweet here. Most of them remain only 1/2 or 1/4 ideas, so if any of them seem particularly promising or interesting let me know @jon_agar or jonathan.agar@ucl.ac.uk!)

This project would be an extension of my interest in the history of the Red Lists. The IUCN – International Union for the Conservation of Nature – began issuing Red Data Books (in fact the first were files, and there’s a short blog piece by me here with pictures) in the 1960s. They identified the rarest organisms and gave a qualitative rating of their threatened status. Rarity was a matter of judgement.

But as concerns over extinction increased, and, in particular, the rarity of creatures became subjects of legal challenge because of issues of land use and trade in flora and fauna, so this qualitative rating began to crumble.

In the late 1980s, conservation scientists, notably Russell Lande and the late lamented Georgina Mace, offered new quantitative criteria for measuring levels of threat. It was a fascinating and largely successful bid to move from an appeal to expert judgement to an appeal to expertly-followed procedures of measurement in order to be able to speak, with authority, about threats to extinction.

It was a case study of how science could respond to the Sixth Extinction, and the politics of objectivity. I published a short version (7000 words) as ‘What counts as threatened? Science and the sixth extinction’ in Patrick Manning, Mat Savelli (eds.), Global Transformations in the Life Sciences, 1945–1980, University of Pittsburgh Press, 2018. Stable URL: https://www.jstor.org/stable/j.ctv1nthg6.18 . I have a long version (17000 words) with expanded examples and evidence here.

The Mace-Lande criteria addressed threats to species. They were taken up by the IUCN, within CITES, and even by Wikipedia (if you look at any species page on Wikipedia you will see, in the box on the right-hand side, a ‘conservation status’; that’s based on the Mace-Lande criteria as ratified by the IUCN; for example, on the African Bush Elephant page you will see EN – Endangered).

Since the quantitative approach worked, it has become the model for further projects to cement objective statements about threats to biodiversity. In the early 2010s, it was proposed that ecosystems should be red-listed.

The proposal therefore is to trace the history of this extension from species to ecosystems. How was it decided? Did it provoke a similarly intense debate? How was it used or challenged?

What counts as threatened? Science, objectivity and the Sixth Extinction

By Jon Agar, on 28 July 2021

What counts as threatened? Population biology, objectivity and the sixth extinction

 

(This is the long version (17300 words) with expanded examples and evidence. A much shorter version (7000 words) was published as ‘What counts as threatened? Science and the sixth extinction’ in Patrick Manning, Mat Savelli (eds.), Global Transformations in the Life Sciences, 1945–1980, University of Pittsburgh Press, 2018. Stable URL: https://www.jstor.org/stable/j.ctv1nthg6.18 )

(If you are interested in my research on this topic, or would like to quote or use this material, please contact me: Jon Agar, Department of Science and Technology Studies, University College London, Gower Street, London, WC1E 6BT. email: jonathan.agar@ucl.ac.uk )

 

This paper traces how quantitative science was mobilised in response to one of the greatest environmental crises of the modern world: the global, human-caused mass extinction of animals and plants. Specifically, in conservation programmes, as well as in the regulation of trade, it was essential to know what organisms were extinct, threatened, merely rare, or relatively safe from threat. In the second half of the twentieth century, this knowledge was codified in the form of lists written by organisations. While national (and other scale) lists were also generated, it is the lists of international organisations that concern us here. In particular, in the 1960s the International Union for the Conservation of Nature (IUCN) began compiling and publishing its Red Data Books of threatened species, and from the 1970s the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES), a treaty, began listing endangered species for which trade in products would be regulated. It is the argument of this paper that, first, in the teeth of opposition, the criteria for deciding the IUCN’s categories of threat were reformulated in the early 1990s on the basis of quantitative population biology. This reformulation was justified as being a more ‘objective’ approach. But critics disputed whether the approach was practical, necessary or indeed objective. Second, a historically contingent combination of the IUCN’s championing of the quantitative approach and Southern African countries’ desire to re-negotiate categorisation led to a redefinition of CITES criteria in a similar fashion. Considered as a whole, these episodes present rich case studies in the advantages and disadvantages of appealing to quantitative science to aid international governance.

While there is an immense literature on the topic from conservation biology, geography, and social and political science, historians, including historians of science, are only beginning to explore the subject of the sixth extinction[1], the Anthropocene era mass extinction of animals and plants.[2] Furthermore, the historiography of relevant expertises and disciplines, such as ecology[3] and population biology[4], has yet to address in detail the period after the 1970s. ‘Conservation biology’, which condensed as a new specialty in the mid-1980s, has yet to receive attention from professional historians of science.[5] Only as archives have been opened can historians bring their methods to bear. Another relevant set of secondary literature concerns categories. Much ink, by biologists and philosophers, has been spilt debating the question of what a “species” is or what “biodiversity” is.[6] Far less attention has been given to the question of how a “rare” or “endangered” species might be defined.[7] There is a parallel, fascinating sociological literature on standardisation, although one not focussed on biological categories of threat.[8] As a historian, my interest here is why, among a range of possible definitions, certain people in specific places at specific times were able to define and defend specific kinds of standard criteria.

More broadly, conservation has often focussed on geographical areas (the nature preservation movement leading to the first nature reserves in the late nineteenth century) or specific charismatic organisms (bison, sea turtles, tigers, elephants, rhinos) that have a popular familiarity. A distinctive and important trend in the twentieth century was the attempt to estimate and even list the level of endangerment of all species. In the mid-1930s the American Committee for International Wild Life Protection ‘began an ambitious project to create an authoritative inventory of the world’s vanishing species … an important model for later official lists of endangered species’.[9] In the United States, this trend, reflected in legislation such as the Endangered Species acts (1966 and 1973), set the scene for intense controversies as relatively obscure species and subspecies (the snail darter, the northern spotted owl) conflicted with economic interests (hydroelectric power and logging, respectively). Note two consequences of this combination of general listing and legislation: first, more data was produced as authorities (such as the Fish and Wildlife Service) were obliged to keep lists and scientists investigated individual cases; second, the results could be political dynamite. Both these features can be seen as we move to the international scene.[10]

 

Old criteria

Red Data Books, ‘a register of threatened wildlife that includes definitions of degrees of threat’, were proposed by Sir Peter Scott, the wildfowl conservationist, in 1963. The IUCN published the first two (on mammals and birds) in 1966, followed by one on reptiles and amphibians (1968).[11] These extraordinarily important books are now, ironically, themselves very rare in their original form. Because they were published as ‘looseleaf, ring-bound volumes: the idea was to keep them in a permanently modern condition by adding texts (“sheets”) on newly evaluated taxa at regular intervals, and by replacing older ones with updated accounts’, only revised editions exist.[12] Nevertheless, the Red Data concept spread, as lists were successively revised and national analogues produced. Furthermore, backed with the IUCN’s status, they have become regarded as authoritative statements of species at risk. As they became more embedded in conservation practice and politics, so more was at stake on the robustness of the knowledge they contained.

The criteria used in the original Red Data Books were qualitative, requiring the judgement of conservationists on the evidence (often incomplete) about the threat of extinction. For example, the 1969 revision of the criteria used in the mammals volume had a four-category, three-star system:

Category 1. ENDANGERED. In immediate danger of extinction: continued survival unlikely without the implementation of special protective measures.

Category 2. RARE. Not under immediate threat of extinction, but occurring in such small numbers and/or in such a restricted or specialised habitat that it could quickly disappear. Requires careful watching.

Category 3. DEPLETED…

Category 4. INDETERMINATE. Apparently in danger, but insufficient data currently available on which to base a reliable assessment of status. Needs further study.[13]

The stars – with ‘***’ the highest, meaning ‘Critically endangered’ – provided a ranking, largely within Category 1. Pink sheets also alerted the reader to a critically endangered species. A further set of symbols indicated modifiers or extra information, such as ‘(a)’ marking a full species, and ‘P’ meaning legally protected.[14] Running these labels together produced a ‘status category’ for each organism included. The thylacine, for example, was ‘1(a)***P’, to the compiler’s best information, a critically endangered full species that had legal protection (in fact, of course, almost certainly already extinct). The rusty numbat, in contrast, was a ‘4(a)P’, a protected species about which further study was needed. Each creature included also had a unique code number.
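To make the label scheme concrete, here is a small illustrative sketch (in Python) of how such a composite status category might be assembled. The function and argument names are mine, and only the modifiers mentioned above are included; this is not a reconstruction of the IUCN’s full notation.

```python
# Illustrative sketch only: composes a 1969-style Red Data Book "status category"
# string from the elements described above. The category numbers, stars and the
# '(a)'/'P' modifiers come from the examples in the text; the function and
# argument names are invented for illustration.

def status_category(category: int, stars: int = 0, full_species: bool = False,
                    legally_protected: bool = False) -> str:
    label = str(category)        # e.g. 1 = Endangered, 4 = Indeterminate
    if full_species:
        label += "(a)"           # '(a)' marked a full species
    label += "*" * stars         # '***' = critically endangered
    if legally_protected:
        label += "P"             # 'P' = legally protected
    return label

# The two examples from the text:
print(status_category(1, stars=3, full_species=True, legally_protected=True))  # 1(a)***P (thylacine)
print(status_category(4, full_species=True, legally_protected=True))           # 4(a)P (rusty numbat)
```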

Look carefully at Category 1. Notice that the compiler (or a contact more directly familiar with the knowledge about the species or subspecies in question) would have to make two calls: first about the immediacy of the danger of extinction and, second, about survival under the implementation of protection. Both are qualitative, and offer no guidance. How immediate, for example, is immediate enough to qualify? How unlikely does survival have to be to count? Likewise, in Category 2 there is no rule about how small a population or how restricted or specialised a habitat had to be for a creature to be included.

For a time these ambiguities did not seem to matter. The trust in the expert’s call was sufficient for the inclusion of species and subspecies to pass unchallenged, in almost all cases. The Red Data Books became, in the hands of readers, many things: useful checklists, pointers to further bibliography, symbols of catastrophic environmental damage, and sober but also inspirational reading for conservationists.[15]

However, by the 1980s, resolve had grown amongst conservation biologists to reform the Red Data Book categories. An important, but initially inconclusive, gathering was a symposium on ‘The Road to Extinction’ held by the IUCN’s Species Survival Commission in Madrid in November 1984. To inform discussion, Paul Munton collated and reviewed the vast variety of categories that had been used in assessments of threat from the seventeenth to the twentieth centuries. Reading through 151 lists of threatened species, Munton found 57 different categories.[16] ‘Rarity’, he decided, was the ‘most problematic’.[17] The better ones, thought Munton, particularly those of the IUCN Red Book 1966 and the US Endangered Species acts of 1966 and 1973, were ‘serial’ and defined criteria in relation to extinction as a reference point. Indeed he proposed a years-to-extinction measure as most suitable. In summary:

The consequent complexity of developing and using a categorization system has led to much muddled thinking in defining threatened categories. They have been confused with the threat itself; with the parameters that may be used to measure threat, such as rarity and distribution; with indicators of value of a species as reflected in current thinking; and with lack of knowledge of a species. All these aspects of status have a place in a Red Data Book, but their real relationship to a degree of threat and to the status of living organisms needs to be more clearly set out. Such a restructuring of information should be in a form that encourages clear thinking and promotes the best decisions on conservation action, using the limited resources available to conservation bodies.[18]

But what should the form be that encouraged clear thinking? There were two main questions that had to be answered: who were lists for? And should the criteria be qualitative or quantitative? Both were asked in Madrid. Sidney Holt, the veteran cetacean expert, offered a way of thinking about the first question.[19] Red Data Books and similar lists had three uses, and therefore there were three reasons to categorise, but also three distinctive users’ needs that had to be reflected in the form and definitions of the criteria:

The first is to convey a perception to the lay public and for this categories need to be few and clearly defined. The second is for the drafting and, eventually, the implementation, of laws, regulations, international agreements and the like; categorization is intrinsic to most of these. The third is by professionals in this conservation field, for purposes of programme development and especially for setting “priorities”, when funds and facilities are limited – as they invariably are.[20]

So the ideal criteria, according to Holt and with others’ assent, would be few in number, clear, useful to regulation, capable of identifying priorities for conservation practitioners, and affordable to produce and use.

But should they contain numbers?  W.A. Fuller, summing up the Madrid symposium, thought the mood was against quantification: ‘there was a general feeling that numbers alone are of limited value in establishing categories’.[21] Several participants, including Holt and Munton, had urged ‘caution against using numbers to define categories, because publishing a number may give a false impression of precision’. However, he also noted ‘counter arguments’:

First, the entire scientific enterprise is based on quantification – counts and measurements. Second, maintenance of genetic diversity is a function of effective population size (Ne). Third, extinction itself is unambiguously defined by the number 0. As conservationists make more use of scientific principles, such as population genetics of scarcity and the theory of island biogeography, they must be willing to devote the effort and funds necessary to obtain the required numerical base.

Fuller floated a quantitative ‘early warning system’ based on the logarithm of population number, a sort of extinction Richter scale. It would be simple, quantitative (although not in a sophisticated way) and meaningful to publics, lawyers and conservationists alike. It would not be delayed by lack of detail about species, since just an informed estimate of population was required. ‘We should be able to ring the first alarm bell’, Fuller said, reminding colleagues of the urgency of the situation, ‘without waiting for “hard” data’.
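Fuller’s scale is only sketched in outline above, but a minimal illustration of what a logarithmic early-warning level might look like follows. The use of base-10 logarithms, the function name and the example figures are my assumptions for illustration, not Fuller’s published scheme.

```python
import math

# Illustrative only: a logarithmic "early warning" level of the kind Fuller floated.
# The base-10 logarithm, the +1 offset and the example numbers are assumptions.

def warning_level(estimated_population: float) -> int:
    """Order-of-magnitude warning level; 0 means extinct (extinction is the number 0)."""
    if estimated_population <= 0:
        return 0
    return int(math.floor(math.log10(estimated_population))) + 1

# Only a rough informed estimate of population is needed, not "hard" census data.
for n in (0, 80, 4000, 250000):
    print(n, "->", warning_level(n))
```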

Finally, it was an implicit aim that the criteria would work for all species, whether they were animal or plant, or vertebrate or invertebrate. The system had to be equally applicable to the Transcaspian urial, an Australian spring snail, and a Szechuan cycad. However, while the proceedings of the Madrid symposium were published, the movement to rewrite the definitions of threat and endangerment stalled, although the conclusions would inform the ultimately successful project discussed below. The IUCN’s attention in the 1980s mostly focussed on the consequences of the World Conservation Strategy, the international commitment to a target of sustainable trade, aligned to development, that had been agreed in 1980.[22]

 

New criteria

The project started in April 1989, when George Rabb’s Species Survival Commission (SSC) of the International Union for the Conservation of Nature, following a meeting at Kew, invited Georgina Mace to ‘undertake the important task of preparing a concept discussion document on the Red Data Book categories with a view to their reformulation’. In the view of Ulie Seal, chair of the Captive Breeding Specialist Group and SSC member:

the status categories of species is of fundamental importance since they provide the basis for inclusion in the [Red Data Books] and serve as a globally accepted signal for action to protect the listed species…  [Yet the] formulation of these concepts took place before the current developments in small population biology and population viability analysis and are in need of careful examination and reformulation.[23]

Martin Holdgate, director-general of IUCN, ‘warmly supported this initiative’, describing it as tackling an ‘issue of…fundamental importance’, and urged that the plan be reported to the Council meeting of IUCN in June. He also insisted that ‘when the RDB categories have been revised, it will be necessary for IUCN to publish the result … as a formal policy or position statement of the organization, which will mean Council approval of the document in question’.[24]

Mace had studied evolutionary ecology of small mammals under John Maynard Smith at Sussex and with John Eisenberg, on zoo inbreeding, at the Smithsonian Institution in the 1970s.[25] In the 1980s she developed theories and models of population viability. In 1988, Mace had published a paper in Science that critically surveyed the use of viable population analyses to incorporate a full range of extinction factors.[26] It was perhaps this intervention that marked her out as an independent and informed candidate for undertaking the review. When she was invited to lead the reformulation of the IUCN’s categories by applying the concepts of population biology in 1989 she was already at the Institute of Zoology at the Zoological Society of London, working at the interface of theory and practice. From 1991 she was a Pew Scholar in Conservation and Environment at the Institute of Zoology, London, where she continued to work until moving to Imperial College in 2006. ‘As one of the first generation of conservation biologists’, noted Nature in 2006, ‘Mace has not only contributed to the scientific development of the field, but actively translated science into effective policy’.

The SSC, after ‘lively discussion’, gave a clear steer to what Mace’s discussion paper should include: a summary of current categories, views about the purpose of a new IUCN category system, a review of alternatives, both in terms of systems and procedures, and a summary of preferences ‘that takes into account those factors that cannot be controlled by the IUCN, such as the political or financial implications of any system’.[27]

Mace sent a very early draft to Ulie Seal and Nate Flesness (executive director of ISIS – International Species Information System, based in Apple Valley, Minnesota), and responses to comments from this pair were included in the draft of ‘Assessing extinction threats’ of June 1989, the first to be circulated and to survive in the archives.[28] It is best seen as a combination of two literatures: it takes the criticisms of criteria expressed in Fitter and Fitter’s The Road to Extinction and offers a solution based on the quantitative, probabilistic biology of viable populations. From Holt in Road is taken the statement of the purpose of criteria: to raise public awareness, to inform legislation, and to assist specialists in conservation planning. Munton’s point, that systems for assessing threat and systems designed for setting priorities have been confused and should be separated, is endorsed. Therefore the Red Data Books should

simply provide an assessment of the likelihood that if current circumstances prevail the species will go extinct within a given period of time. This should be completely objective.

An ideal system, wrote Mace, would also be ‘essentially simple’, flexible in terms of the data required (there was very little data on most species and highly detailed data on only a few), and applicable to populations of different sizes; it would have clearly defined terms (following Munton, she called “rare” ‘confusing’) and quantifiable error estimates, and would incorporate a time scale. The June 1989 draft called for a two-tier approach. Level I would look like the old Red Data Book categories, but, crucially, would be ‘explicitly defined in population terms’. A Level II ‘would aim to specify for each species an estimate of mean persistence time (with error margins) or a probability of survival for some specified time interval’. This second level seems to have been a probabilistic version of Munton’s suggestion of a years-to-extinction number. Mace noted that it would need much more precise data to calculate. This Level II would soon be discarded.

So it is Level I that is most important, since it survived to influence world conservation. In the June 1989 draft it had three categories – “Extinct”, “Endangered” and “Vulnerable” – in line with CITES and the US Endangered Species legislation. An “Endangered” species was defined, probabilistically, as ‘those that are estimated to have at least a 10% chance of going extinct in the next 100 years if current conditions prevail’. But how would a ‘10% chance of going extinct in 100 years’ be calculated? Here appeal was made to five different, measurable factors.[29] A “Vulnerable” species was defined similarly but with a range of 1–10%.
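A minimal sketch of the Level I logic as described in this draft follows; the function name and the ‘Not listed’ fallback are mine, and the draft’s five measurable factors for estimating the probability are not modelled here.

```python
# Sketch of the June 1989 Level I categories; names and the "Not listed" fallback are mine.

def level_one_category(p_extinct_100yr: float, extant: bool = True) -> str:
    """Classify by estimated probability of extinction in the next 100 years,
    assuming current conditions prevail (how the estimate is produced is not modelled)."""
    if not extant:
        return "Extinct"
    if p_extinct_100yr >= 0.10:   # at least a 10% chance of extinction within 100 years
        return "Endangered"
    if p_extinct_100yr >= 0.01:   # a 1-10% chance
        return "Vulnerable"
    return "Not listed"

print(level_one_category(0.25))   # Endangered
print(level_one_category(0.05))   # Vulnerable
```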

The draft was circulated by Simon Stuart, head of the Species Survival Programme at IUCN, to an inner circle of twenty initial commentators[30], some of the IUCN secretariat and, after a small misunderstanding, Jane Thornback, director of the World Conservation Monitoring Centre, in Cambridge.[31] Some, such as the chief research scientist of Australia’s CSIRO, Graeme Caughley, were intensely sceptical about the application of population viability analysis (PVA), while others, such as Nate Flesness, supported it.[32] Hugh Quinn, curator of herpetology at Houston Zoological Gardens, felt that the ‘quantitative approach … is superb’.[33] A related, and recurrent, comment was that there was ‘insufficient population data for a scientific and objective analysis’.[34] Some went further: ‘I don’t believe it is possible to come up with an assessment which is “completely objective”’, wrote Jeffery McNeely, Chief Conservation Officer and leader of the IUCN’s biodiversity programme; this ‘may be an ideal worth aiming for, but we should recognise the inevitable subjective aspects of any assessment system’.[35] Simon Stuart hit back:

… status categories have no meaning whatsoever if they are not about extinction probabilities. … Georgina has done an excellent job at narrowing down the fundamental use of categories, and yet she has kept the categories in the context of data management and Red Data Books, both of which have much wider uses that go beyond the categories themselves. We desperately need an objective and yet useable means of assessing likelihood of extinction. It is incredible that IUCN and SSC have not been able to provide this up to now. It would be a tragedy if we now confuse this opportunity with another woolly system.[36]

Other critical comments were that it was not easily applicable to plants (it was written ‘from an animal point of view’)[37], and that the criteria had to be workable in relation to resources available[38], in particular that training and investment in field data collection, especially in the Third World, would be essential.[39] Brian Huntley, of the Foundation for Research Development, Pretoria, South Africa, wrote that the ‘desire to develop more objective categories and criteria’, requiring data that was expensive to produce, would lead to an ‘exercise in Eurocentric or “Northern” folly while tens of thousands of species go down the tube in the tropics’.[40] The draft may have been presented to the SSC at a Rome meeting.

 

 

By August 1989, a second draft, now with a new subtitle, was ready.[41] In September Simon Stuart sent it out for comments from SSC and SSC Specialist Group members. This network was wider and even more international than the one that had seen the first draft. The tone of the responses ranged from enthusiasm to trenchant criticism. They are worth going through in some detail, since they are a goldmine of documentary evidence on the aim to agree objective criteria. I will consider in turn positive comments, suggestions that the Red Data Book system should not be meddled with, criticisms of the quantitative approach in general and population viability analysis in particular, claims that an ‘objective’ system was practically unachievable or indeed misleading, claims for residual subjectivity, pleas for the contribution of field experts, professional and non-professional, to be recognised, worries about the distance of ‘scientific’ conservation from local engagement, criticisms of specific aspects of the proposed criteria (inclusion of subspecies, treatment of geographically separate populations, time-scale), and comments on the irreducible unpredictability of human-made change.

Michael Soulé, the leading advocate of conservation biology, called it ‘a very healthy departure from the past’.[42] J.H. Lawton, director of the Centre for Population Biology, called it, with a minor reservation about invertebrates, an ‘excellent document’.[43] For Rod East, of the IUCN’s antelope specialist group, ‘the greatest value of the new system … will be in assessing conservation priorities across taxa’.[44] Not only would this be ‘objective’ and ‘scientifically based’, it would also encourage the identification and conservation of rich areas since prioritised zones for individual species would overlap. The broad applicability of the criteria also appealed to Eric Hágsater, the IUCN’s lead orchid specialist: ‘the analysis of the categories looks very good because it discusses, on a demographic basis, criteria that had been “universalized”’.[45]

For others the value of the new system lay in contrast to what ruled elsewhere. R.B. Martin, assistant director of the Zimbabwean national parks, for example, reflected on having watched the ‘CITES convention become a farce because of a total lack of objectivity’ in applying its criteria.[46] I will return to CITES and the view from Zimbabwe below. The IUCN’s new approach, which might constrain users’ interpretations, promised to be much better. Bertrand des Clers, an IUCN ethnozoological expert writing from the Rue de Téhéran, Paris, also praised the aim of objectivity:

I would … strongly suggest that we concentrate IUCN’s judgements on sound, objective, scientifically-based knowledge, forgetting about political activism which, in the end, is counterproductive to conservation goals, in other words let the politicians play politics, let IUCN be credible.[47]

Many correspondents, perhaps a third, supported the approach while making few comments.[48] G.R. Hughes, of the Natal Parks Board, also liked it, urging ‘every strength to [Mace’s] arm’, adding that it might solve specific problems: from ‘a purely personal point of view and certainly as far as Sea Turtles are concerned, the blanket categorization of endangered species such as the Green Turtle has been in our opinion simply stupid, and a disgrace to scientific integrity’.[49]

Others, however, warned that the system in place, however imperfect, was embedded in political life and should not be radically changed without very good cause. While not defending the ‘deficient’ existing system entirely, David Given, of the botany division of DSIR, New Zealand, suggested: ‘not only is the present system “a robust and workable system” but … its use in one form or another in many tens of countries for over a decade on a variety of biota indicates that any alternatives must be submitted to severe scrutiny’.[50] He cited the fact that there was consensus on what plants entered the New Zealand red data book as one reason not to mess with a working system. Sharon Matola, director of the Belize zoo, also argued that any revision to criteria must keep the Red Data Book system simple, clear and workable:

It is vital to remember that many decisions about wildlife laws that are made in developing countries are made by government officials, many who have weak backgrounds in biology, ecology, zoology and wildlife management. It is important that resources such as Red Data Books be as concise and succinct as possible.[51]

C.A. Spinage, writing from the national parks of Botswana, went further: ‘No system is ideal, and the present system imperfect as it is, is now well known and should be left alone’.[52] It had been a fight to get protection built into national legislation, and any change would take perhaps twenty years. Spinage recognised other political realities:

You cannot tell a government that a species is endangered and must be protected, and then later turn round and say that it is no longer endangered because we have changed the system of classification. The result would be that conservation proposals would not be taken seriously.

On the other hand, the Red Data Books had serious flaws. As Jared Diamond had pointed out, many species were endangered, or even went extinct, without entering their pages.[53]

The aim for ‘objective’ criteria in general, and based on population biology in particular, drew praise but also critical comment. Brian Groombridge of the WCMC was ‘not convinced that a system which relies on PVA for decision making in the actual process of categorisation will answer the needs that are set out’, while suggesting that PVA be used effectively in a more targeted manner.[54] ‘My first reaction to this doubtless well-intentioned paper’, wrote Norman Moore, a senior conservation expert in Britain, and a sceptic of ‘crude model making based on inadequate data’, was ‘to condemn it as time wasting bogus science’ (although he did add: ‘I shall be delighted to be proved wrong!’).[55]

One specific complaint, already made against the first draft, was that PVA might be achievable for a select few high-profile species but not for others. For example, David Norton, a New Zealand forest ecology expert, wrote:

… I really wonder about the usefulness of PVA for many (most?) threatened organisms. Certainly for the better known mammals and birds it is possible to model probabilistic persistence times (eg in grizzley [sic] bears and spotted owls). But for many other groups of organisms, there is just not the data available for this approach. We should obviously be trying to get this data, but for many groups there are just not enough scientists or time.

Michael Samways, a Natal-based entomologist, noted that insects too were problematic: their ‘uni- and multivoltine life-styles, their often rapid population increase, their high fecundity and mortality, and even their small size and difficulty of identification’ all made ‘absolute estimates of population size quite difficult’.[56] M.G. Morris, head of the Furzebrook Research Station and a butterfly expert, thought that ‘spurious objectivity’ was the danger.[57] Mace had written the paper, Morris suggested, with vertebrates in mind. The criteria would not work well for insects and plants. T.R. New, Australian chair of the Lepidoptera specialist group, thought that for invertebrates ‘P.V.A. and realistic quantitative data will inevitably remain a pipedream’.[58] Holdgate, whose endorsement, as head of IUCN, was crucial, retained ‘worries about whether we really can apply the population parameters’, but conceded that ‘some objectivity is clearly highly desirable’.[59] Even Michael Soulé suggested that formal PVAs might only be required on a “need only basis”, that is ‘when circumstances demand it and when resources … are available’.[60]

Many others expressed doubts whether the quantitative population approach was practically achievable. Stefan Gorzula, from experience of coordinating environmental impact studies of a Venezuelan hydro-electric project, wrote from Caracas that ‘in the real world of conservation most information does (and probably always will) tend to be subjective and anecdotal’, adding ‘I am personally somewhat sceptical as to whether “scientifically based assessment” is ever carried out at all’.[61] Rod East, thinking of his antelopes, supported PVA as a long-term aim, but ‘since appropriate data are lacking’, agreed too that “Judgement will continue to be an essential element in the use of evolving techniques”.[62] Sidney Holt wrote that ‘there do not exist data of the kinds you want to use’.[63] Margaret Klinowska, like Holt a cetacean expert, also did not think that the ‘kind of information [needed] … would be available’.[64] It is interesting that even in the case of whales, perhaps one of the most systematically surveyed of creatures[65], this lack of data was seen as a severe problem:

There seems to me to be no practical prospect that this [adequate information] would be the case for the vast majority of cetacean species within the foreseeable future. Surveys using ships and aircraft are terribly expensive, and for a lot of whale species one would need to cover all the world oceans plus the Mediterranean Sea just to get a population estimate. Unless satellite technology can come up with resolution to a few cm plus some good way to detect the beasties through cloud cover … we are not going to be able to get the data.

Peter Pritchard, the IUCN’s tortoise and turtle expert, did not think ‘the categorizations will ever be completely objective, because accurate and quantitative evaluation of the species’ future response to existing threats will depend on factors in the environment that are themselves unpredictable’.[66] While appreciating the ‘attempts … at population modelling’, he remained ‘pessimistic that any of the models currently in circulation have real-world predictive capability’. Hans Frädrich, of Berlin zoo, speaking on behalf of ‘people working with live animals’, stated: ‘What we are in need of is much more a practical approach rather than a “scientific” one’.[67]

Others urged that, in particular, the system must be practicable for use in developing countries. Alan Kemp, head curator of birds at the Transvaal Museum in Pretoria, hoped for a

… relatively simple model that can be run on basic field data. Sitting at the interface of developed and undeveloped worlds, the need for a broad comparative measure that can be achieved for most species with reasonable practical field measures is more important than some detailed protocol that may be too much effort and therefore too late.[68]

Probabilities, which for scientists are a way of talking with precision, could, argued some, be counterproductive to conservation. John J. Fay, on the endangered species and habitat conservation desk at the US Fish and Wildlife Service, questioned ‘whether we will ever be able to assign categories with the precision that you envision (X % likelihood of extinction within Y years with a confidence level of Z)’, adding that the ‘predictive nature of such assessments necessarily renders them open to dispute’.[69] Don McAllister, a curator at the National Museum of Canada, amplified this point, nodding towards what we now call the manufacture of doubt: ‘developers will love indications of uncertainty’.[70]

The theoretical and technical sophistication of population biology could produce two interconnected social phenomena: a heightened, perhaps not entirely justified, image of objectivity, and a separation of abstracted knowledge from the field, where a more embedded expertise could be found. M.K. Ranjitsinh, an Indian government scientist, urged that the subjectivity of experts in the field be respected:

There are always under any methodology involved a certain amount of subjectivity which emanates from personal experience and application of the people who are dealing with the subject matter. But I think this is something we have to contend with in any format that you may work out …. You require certain moderation of the data received by referring it to a group of people who have personal knowledge of both area and the species in question, rather than have a method whereby data is simply fed into a computer. I for one, believe that while we should utilize the computer for analysis of data we should not prima facie and ipso facto take the computer verdict … And if you involve people for a group analysis of data and which in my opinion is necessary, certain subjectivity will prevail.[71]

In a similar vein, John Perry, who with Ranjitsinh had served as deputy chairs of SSC under Sir Peter Scott, recalled local people in the 1960s speaking ‘scornfully of IUCN “experts” who make brief visits to an area and reach conclusions without talking [in this case] to Indonesians who have known … areas for years’.[72] He sang the praises of ‘self-trained naturalists’:

I have met many such naturalists in Latin America, Africa, and Southeast Asia. One, for example, had filled a loose-leaf notebook with field notes and excellent illustrations of birds he had studied on several hundred Indonesian islands.

Perry cited the example of Russ Mittermeier, an American primatologist, who could ‘speak local languages and [could] adapt easily to native cultures [and] tap this body of knowledge’. Perry’s point was that the urge to be objective could encourage an unhelpful distance from the context of local knowledge: ‘in SSC we felt the desire to be “scientific” at times overlooked or excluded valid information’, such as that which came from embedded research.

Grahame Webb, an Australian crocodile expert writing from a ranch in the Northern Territory, spoke at length both of the need for science – ‘humanity’s best problem solving device’ – and of the need for it to be practical and, echoing Perry, locally engaged:

If the IUCN’s approach to “threats of extinction” becomes too theoretical and complex, at a ground roots level, then it will only involve scientists and may well end up with one group of scientists arguing with another group of scientists – at a level of resolution completely distanced from the humble Amazonian Indian chopping down a tree, or indeed, from the basic wildlife biologist who never mastered the “language” and “symbols” of population biology.[73]

Moving on to a different kind of criticism, but one which also reflected the need for conservation science to engage with the wider world, several respondents regretted Mace’s sharp separation of assessments of levels of extinction threat (to be addressed quantitatively and, it was hoped, objectively, through population biology) from assessments of kinds of threat (some natural, some human-made, and often a hybrid of the two). ‘Perhaps’, wrote Luigi Boitani, of La Sapienza, Rome, ‘it would be useful to have a system where the quality of threat (caused or not by human activity) might be included’.[74] For Holt, the problem of incomplete data, noted above, meant that the lack of comment on kinds of threat was doubly dangerous: the ‘consequence, following your scheme, and rejecting the idea of taking explicit account of “threats” as well as the general biology of the animal, will be that nothing can be said about such species, so they will be treated in practice as NOT vulnerable and NOT threatened’.[75]

Martyn Murray, a Cambridge mammalian ecologist, concluded that ‘given that it will be impossible to obtain up-to-date quantitative data on populations’ and therefore that ‘an assessment of the risk … will usually be no more than a poorly informed guess’, the IUCN would be better advised ‘to concentrate on the extinction threats posed by different types of human activity rather than the survival chances of individual species’.[76] This might be ‘very hard work’ but the alternative – an IUCN categorisation for the world’s species, guided by Mace’s criteria – was ‘frankly impossible’.

Alternatively, wondered Murray, ‘you could consider focussing on conservation of GENOMES rather than SPECIES’. This was a suggestion that came, partly, from Murray’s thinking about preserving the genetic variety of subspecies. Groombridge, who privately thought that ‘a significant number of named subspecies exist more on paper than in nature’, also objected to their inclusion on practical grounds:

If the system is explicitly meant to apply to named subspecies and to populations … the list of threatened “species” would potentially be endless. Virtually every species would have “Endangered” populations. This is both a procedural problem for organisations like WCMC, who could be asked to handle endless lists of trinomials, many of dubious significance, and a Public Relations problem (not seeing the wood for the trees).[77]

Robert Hoffmann of the Smithsonian Institution agreed: ‘If objectivity is desirable … then “subspecies” must be avoided’.[78]

The unit of a “geographically separate population” was also criticised. Michael Usher, a biologist at the University of York, admitted concern about this category: it allowed ‘any little island, anywhere’ to claim that its species deserved high status.[79] Similarly, John Oates of Hunter College, CUNY, and a member of the IUCN primate specialist group, also rejected geographical separation as a criterion, since ‘increasingly large numbers of bird and mammal species are going to have many geographically separated populations’.[80]

While the unit of attention attracted these quibbles, the most common criticisms of the criteria themselves concerned time-scale, and here the question of silence over kinds of threat was asked again. Some went high. ‘I personally would prefer a persistence time of 1,000 years’, said Hoffmann, ‘100 years is only a few generations for species of high-individual longevity’.[81] But most went low. “Threatened” defined over decades rather than centuries was meaningful over human life-spans and, furthermore, could be predicted more robustly. Some ‘would give even money man will be extinct in 100 years!’, wrote Lester Short, curator of birds at the American Museum of Natural History, adding ‘I would rather encourage all states – nations to save as much of their diversity of habitats and relatively unique habitats as possible, than to assign predictions that would, in decades, make our own sciences laughing-stocks’.[82] Joshua Ginsberg, writing from Hwange Wild Dog Park in Zimbabwe, but also an expert in equids (horses, zebras and kin), wondered:

So, where does 100 years fall into this? I think it is too long for legal definitions, and too long for the human mind which has not been trained to think in geological time scales. Frankly, most legislators can’t conceive of a time frame beyond the next election. If they try to, they are called “visionaries” or “failures”. To be clear to legislators, the time scale must be on the order of a human generation, say 25 years.[83]

Brian Bell, a New Zealand ornithologist, made the same point.[84]

The term “if current conditions prevail” also drew fire, especially over long periods, given the unpredictability of human change. ‘One problem with the concept of “if current conditions prevail”’, argued McNeely of IUCN, ‘is that we all know very well that current conditions will not prevail’; specifically, given that demographers agreed that the human population would reach at least 8 billion within one hundred years, he worried ‘about making projections based on conditions that we know are unlikely to be real’.[85]

James Estes, a wildlife biologist at the Institute of Marine Sciences, UC Santa Cruz, drawing on first-hand knowledge of the consequences of the Exxon Valdez oil spill in Prince William Sound of March 1989, stressed the unpredictability of human-made catastrophes and the uncertainties that resulted.[86] David Duffy, executive officer of Intecol and an expert on the enormously concentrated breeding sites of seabirds, also noted the vulnerability of such populations ‘to statistically rare but, in practice, more common major disasters, such as oil spills or major El Niños. We could call this the Valdez factor’.[87] Sixty years ago, wrote Norman Moore, ‘who could have predicted the destruction of Indo-Chinese forests by the wartime use of herbicides not invented at the time?’.[88] Likewise, Pritchard, the turtle expert, wrote plaintively:

I don’t see any meaningful potential to quantify the chances that a species will continue to exist 100 years from now. Can you imagine an accurate prediction having been made in 1889 about any aspect of our world in 1989? With the overall acceleration of change in all aspects of our world, human and environmental, I would hate to be asked even to predict what would be happening a decade from now, let alone a century.[89]

For one expert at the Large Animal Research Group at Cambridge, the focus on population dynamics, at the exclusion of assessing kinds of threat, was a fundamental mistake:

I don’t begin to understand why a population modeller was asked to make the assessment; it’s a geographer who should have been asked. If we are to judge whether a species is in danger of extinction or not, one of the last things we need to do is model its population dynamics …

The criteria used to judge probability of extinction have to be socio-economic ones. The survival of almost no species in the wild, now or ever, depends on its population dynamics… We need to know human birth rates, rates of human population increase, childhood mortality figures, GNP, direction of change of GNP, size, nature and number of “development” programs, size, nature and number of extraction concessions, and so on. These will tell you whether a species is endangered.[90]

Population biology, for this critic, was the wrong science to turn to for help in getting a grip on the extinction crisis.

Just as with the first draft there was criticism of the two levels approach. Usher wrote ‘I don’t like the concept of level II categories… My own feeling is that if you are talking about two levels of category, politicians who will use the data, or planners, or other unqualified people, will choose the category that best fits their needs’.[91] William Lidicker, professor of integrative biology at Berkeley, suggested deleting Level II.[92]

Finally, Robert Reece, zoological director at Kings Island, Ohio, spoke for many when he wrote:

I would propose that a “critical” … classification be added to better describe those species which have at least a 50% chance of extinction within the next 25 or 50 years. The numbers are probably left to better minds than mine, but you get my drift.[93]

We shall see that this call for a third category was indeed heard.

 

At some point in the summer of 1989, the paper gained a second author. Russell Lande, a member of the Department of Ecology and Evolution at the University of Chicago, ‘improved…the theoretical basis’.[94] Lande, in 1988, had published a ‘highly influential’ population viability analysis of the northern spotted owl, an application of quantification to perhaps the deepest US controversy of the moment.[95] The third draft, a joint paper with the same subtitle, was ready in November.[96] Lande argued that the language of ‘Level I’ and ‘Level II’ was ‘a bit confusing and could lead to serious legal problems in some cases’ in deciding to which level a given species belonged. He also, after consultation, including among those ‘without a vested interest’ in the technique, argued that population viability analysis (PVA) could not be done accurately ‘even for well-studied species, e.g. game birds’. Instead, it was better to ‘simply say that we are defining the three categories based on principles from population biology’.[97] The three categories were no longer ‘Extinct’, ‘Endangered’ and ‘Vulnerable’, but instead ‘CRITICAL’, ‘ENDANGERED’ and ‘VULNERABLE’. Nevertheless, Mace’s quantitative approach survived. ‘CRITICAL’, for example, was defined as ‘50% probability of extinction within 5 years or 2 generations, whichever is longer’, while ‘ENDANGERED’ meant ‘20% probability of extinction within 20 years or 10 generations’, and ‘VULNERABLE’ was ‘10% probability of extinction within 100 years’. Each of these definitions was supplemented with measurable population biology indicators, which collectively formed the ‘criteria’.[98]

Here, for example, are the criteria for ‘CRITICAL’ as they appeared in the November 1989 draft (and, with minor modifications, in the 1991 published paper); a sketch of how the conditions combine follows the list:

CRITICAL: 50% probability of extinction within 5 years or 2 generations, whichever is the longer, or

(1) Any two of the following criteria

(a) Total population Ne < 50, corresponding to actual N<100 to 250

(b) Population fragmented:

≤2 subpopulations with Ne > 25, with immigration rates <1 per generation

(c) Census data of >20% annual decline in numbers over past 2 years, or > 50% decline in the last generation, or equivalent projected declines based on demographic projections after allowing for cycles

(d) Population subject to catastrophic crashes (>50% reduction) per 5 to 10 years, or 2 to 4 generations, with subpopulations highly correlated in their fluctuations.

or (2) Observed, inferred or projected habitat alteration (i.e. degradation, loss or fragmentation) resulting in characteristics of (1).

or (3) Observed, inferred or projected commercial exploitation or ecological interactions with introduced species (predators, competitors, pathogens or parasites) resulting in characteristics of (1).
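To show how these conditions were meant to combine, here is a schematic sketch of the ‘CRITICAL’ test as listed above: a taxon qualifies on the headline extinction probability, on any two of criteria (a)–(d), or when habitat alteration (2) or exploitation and introduced-species interactions (3) are projected to produce those characteristics. The data structure and field names are my assumptions; the thresholds follow the draft text.

```python
# Sketch of the November 1989 "CRITICAL" logic; fields and names are illustrative.

from dataclasses import dataclass

@dataclass
class TaxonAssessment:
    p_extinct_5yr_or_2gen: float = 0.0            # headline probability of extinction
    effective_population: float = float("inf")    # Ne
    fragmented_small_subpops: bool = False        # (b) <=2 subpopulations with Ne > 25, immigration <1/generation
    severe_decline: bool = False                  # (c) >20% annual decline over 2 years, or >50% in last generation
    catastrophic_crashes: bool = False            # (d) >50% crashes per 5-10 years, correlated subpopulations
    habitat_alteration_projected: bool = False            # (2) projected to result in the characteristics of (1)
    exploitation_or_introductions_projected: bool = False # (3) projected to result in the characteristics of (1)

def is_critical(t: TaxonAssessment) -> bool:
    if t.p_extinct_5yr_or_2gen >= 0.5:            # 50% within 5 years or 2 generations
        return True
    subcriteria = [
        t.effective_population < 50,              # (a) Ne < 50 (actual N roughly 100-250)
        t.fragmented_small_subpops,               # (b)
        t.severe_decline,                         # (c)
        t.catastrophic_crashes,                   # (d)
    ]
    if sum(subcriteria) >= 2:                     # (1) any two of (a)-(d)
        return True
    return t.habitat_alteration_projected or t.exploitation_or_introductions_projected

print(is_critical(TaxonAssessment(effective_population=40, severe_decline=True)))  # True
```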

Mace and Lande’s paper was submitted to the journal Conservation Biology in February 1990, accepted for publication in October, and published in June 1991. It not only made specific technical proposals, but also stated in more detail the authors’ reasons for reformulating the criteria. The prime justification, in addition to clarity for planning purposes and the availability of new population biology, was that objective science could help avoid and resolve conflict by constraining interpretation:

…the existing system is somewhat circular in nature and excessively subjective. When practiced by a few people who are experienced with its use in a variety of contexts it can be a robust and workable system, but increasingly, different groups with particular regional and taxonomic interests are using the Red Data Book format to develop local or specific publications. Although this is generally of great benefit, the interpretation and use of the present threatened species categories are now diverging widely. This leads to disputes and uncertainties over particular species that are not easily resolved and that ultimately may negatively affect species conservation.[99]

 

Embedding the criteria in international conservation governance

Nevertheless, at this stage the criteria were merely lines in a peer-reviewed journal. There was considerable distance to travel before they would shape conservation practice world-wide. This journey followed two criss-crossing tracks. In the first, between 1991 and 1994 there was a process of discussion, including close examination in specialist workshops, testing across taxa, redrafting of criteria, and eventually international agreement and IUCN-endorsed publication. These rounds of negotiation enrolled conservation scientists, including critics, familiarising them with the criteria and reassuring them that the quantitative approach was practicable. The second track generated more controversy. In 1992, IUCN was invited to submit a report to the CITES Secretariat, the body responsible for governance of the treaty on trade in endangered species. The IUCN report, submitted in January 1993, drew on the criteria offered by Mace and Lande, as they were being revised in the first track. The CITES Standing Committee agreed that the report met the terms of reference it had issued, essentially endorsing the quantitative approach embodied in the criteria. Widely leaked, the document was immediately criticised by wildlife NGOs, who, noted Simon Stuart, ‘reacted very strongly and negatively’.[100] The NGOs quickly formed a new coalition, called the Species Survival Network, to oppose the approach. I will now trace these two tracks in more detail, discussing CITES first, then the IUCN Red List.

 

CITES criteria

CITES – the Convention on International Trade in Endangered Species of Wild Fauna and Flora – was agreed in 1973 and came into force in 1975. It was designed to protect species endangered by commercial exploitation. Appendices are the business end of the CITES treaty: Appendix I should list “all species threatened with extinction which are or may be affected by trade”; Appendix II those species which might become threatened with extinction if trade were not regulated; and Appendix III addressed national populations. CITES has proved problematic, being accused of failing to stop unsustainable trade in rare creatures. In particular, three specific problems had been clearly identified by the late 1980s. First, the original ‘Berne’ criteria (1976), for deciding which species should receive what protection, were attacked as vague. Second, the granting and monitoring of export permits were the responsibility of a state’s “Scientific Authority”, and many of these were weak, under-resourced or ineffective. Third, many species had been placed in the most at-risk category – listed under ‘Appendix I’ – without adequate evidence. Even though a species might receive more protection under the second-rank ‘Appendix II’ category, for both technical and political reasons it was almost impossible to move a species from I to II.

The campaign for reform crystallised around Southern African countries’ wish to move the African Elephant from Appendix I to Appendix II. South Africa, Zimbabwe, Botswana, Malawi, Namibia and Zambia claimed that their effective management of elephant populations meant that elephant products – possibly, and most controversially, including ivory – could be traded and the profits put into wildlife conservation.[101] It was the black African countries, led by Zimbabwe, that took the lead in calling for two specific CITES reforms: a recognition that carefully managed trade could benefit conservation (so long as agreed means of moving organisms between appendices were introduced), and replacement of the Berne criteria. In the Mace-Lande criteria there was a model to hand of what the new criteria might look like. Population biology, therefore, perhaps contingently, was a strategically useful tool and ally for the campaigners.

The CITES Conference of the Parties was due to be held in Kyoto in March 1992. In the IUCN’s view, the Kyoto meeting had to endorse a move to a more flexible system, allowing easier transfer of species, but done on the basis of ‘new, more objective and more soundly based’ ‘Kyoto Criteria’ that would replace the failed Berne Criteria.[102] The CITES Standing Committee, at its meeting in June 1992, requested IUCN’s assistance in developing new criteria for listing species, specifically ‘to provide simple, pragmatic, scientific and objective criteria to determine in which appendix, if any, it would be appropriate to list species’. In fact, Simon Stuart had already prepared the ground and had a plan in place. In 1991, Southern African countries drafted Kyoto criteria based on the Mace-Lande approach. However, caution was needed. The CITES criteria operated in a system fraught with political and economic tensions, inevitably since controversial trade was the object of regulation, and had to satisfy more parties and interests even than the IUCN’s Red Book criteria. In December, Stuart sounded out senior population biologists, such as Robert May and Robert Lacy, asking them three questions: first, were the proposed Kyoto criteria, based on Mace-Lande, ‘scientifically accurate and robust’ as they stood? Second, would the criteria need further development first? Third, was it ‘realistic to expect governments to be able to use the more technical criteria outlined in this draft resolution’? As John G. Robinson, director of the New York-based Wildlife Conservation International, noted:

Mace-Lande is increasingly being linked to a number of issues (Species endangerment, CITES, captive breeding, sustainable use, etc), and increasingly are serving as a lightning rod for the argument [as to] … whether objective criteria are desirable in making management/conservation decisions.[103]

In the summer of 1992, Stuart refined and redefined the questions and allocated them as tasks to consultants.[104] Workshops on ‘Categories of Threat’, to consider in tandem the IUCN criteria and the recommendations to CITES, were held in London in November 1992.[105] Stuart drafted the document, which he thought ‘represents as closely as possible a consensus between everyone who was present’ and circulated it for comment in December 1992.[106]

The core question was: what should “threatened with extinction”, the term in the CITES treaty, mean? IUCN answered that it was when a species satisfied one or more of the following criteria:

(1) The species occupies a narrow geographic range (typically <1,000 sq.km) and/or occupies restricted habitats within a broad geographic range

and

all subpopulations could be at risk of simultaneous extinction from catastrophes or human impacts.

(2) The species has an extremely small total population size (e.g. <250 mature individuals).

(3) The species has a very small total population size (e.g. <2,500 mature individuals)

and

either  a) is fragmented with few large subpopulations (e.g. fewer than five subpopulations numbering more than 500 individuals (of all age classes) each)

or         b) has a low recruitment rate (the average recruitment rate is equal to or less than the average normal mortality rate over the last five years).

(4) The species’ population is in persistent and continuing steep decline (>5% average decline in numbers over the past five years, or >10% decline in numbers over the past two generations) that transcends normal population fluctuations as a result of causes that are either not known or are not adequately controlled

or

a similar decline is inferred from exploitation, from habitat loss or degradation, or range reduction, or from the effects of predators, diseases, parasites and competition.

(5) A quantitative analysis indicates a greater than 20% probability of extinction over 20 years, or over 5 generations, whichever is the longer.[107]
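
(As an aside for readers who want to see the logic of these thresholds laid bare, here is a minimal, purely illustrative sketch in Python. It is my own paraphrase of the draft quoted above, not any text circulated by IUCN; the field names are invented for the example, the thresholds are simplified, and a taxon counts as ‘threatened with extinction’ if any single criterion is satisfied.)

```python
# Illustrative sketch only: a loose paraphrase of the December 1992 draft CITES
# criteria quoted above, not an official IUCN or CITES implementation.
from dataclasses import dataclass


@dataclass
class TaxonEstimate:
    range_km2: float                   # estimated geographic range
    all_subpops_at_joint_risk: bool    # could all subpopulations be lost at once?
    mature_individuals: int            # estimated number of mature individuals
    large_subpopulations: int          # subpopulations with more than 500 individuals
    recruitment_below_mortality: bool  # average recruitment at or below mortality
    decline_5yr_percent: float         # average % decline over the last five years
    decline_2gen_percent: float        # % decline over the last two generations
    extinction_prob_20yr: float        # e.g. from a population viability analysis


def threatened_with_extinction(t: TaxonEstimate) -> bool:
    """Return True if any one of the five draft criteria is satisfied."""
    crit1 = t.range_km2 < 1_000 and t.all_subpops_at_joint_risk
    crit2 = t.mature_individuals < 250
    crit3 = t.mature_individuals < 2_500 and (
        t.large_subpopulations < 5 or t.recruitment_below_mortality)
    crit4 = t.decline_5yr_percent > 5 or t.decline_2gen_percent > 10
    crit5 = t.extinction_prob_20yr > 0.20
    return any([crit1, crit2, crit3, crit4, crit5])
```

The point of the sketch is simply that, once such thresholds are fixed, listing becomes a mechanical test of field estimates; the controversy that follows was largely about whether estimates and thresholds of this kind could bear that weight.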

As expected, the IUCN document was leaked as soon as it was circulated. The response from NGOs and other organisations was immediate and often fierce. I have seen documents from seventy-nine organisations (or groups of organisations) sending comments to the CITES Management Authority. While these were nationally diverse in origin – from the Nepal Natural History Museum to the Namibian Cheetah Conservation Fund – nearly half were US-based. However, the most organised counter-strike came from the “Species Survival Network” (not to be confused with the IUCN’s Species Survival Commission[108]), a coalition of animal welfare, environmental and conservation NGOs that was formed ad hoc specifically to challenge the IUCN’s proposals.

Originally called the NGO Working Group on New Listing Criteria, the Species Survival Network was coordinated by the International Wildlife Coalition, based in Toronto, and its 35 members included Greenpeace, both the American and Royal societies for the Prevention of Cruelty to Animals, and specialist lobby groups for tigers, whales and elephants. SSN sent a punchy critique to the CITES Standing Committee in March 1993 calling for the IUCN proposals to be rejected. Many reasons were given, but three were central. First, the SSN interpreted the IUCN proposals as an abandonment of the precautionary principle, the rule-of-thumb that in conditions of uncertainty an action should not be taken if it might lead to more harm than good. The SSN argued that the principle was ‘central to the operation of CITES and should be retained’.[109] In support of this line, the SSN even quoted Mace and Lande, picking a line from the Conservation Biology paper that seemed to endorse the principle (it doesn’t – it is consistent with the principle but does not endorse its general application).[110] Second, the SSN urged caution about ‘removing protection from a species on the basis of a claim that it is being used sustainably’.

Third, some of the fiercest criticism was directed at the proposal to base CITES criteria on quantitative population biology, that is to say in a way analogous to the new IUCN model:

IUCN’s approach is based on the “Mace-Lande Criteria” developed by Georgina Mace and Russell Lande. However, this paper was never intended by its authors to be used in determining CITES protection requirements. Using the Mace-Lande criteria, or any other quantitative model, provides no guarantee of objectivity. It may be impossible to predict the probability of a species’ extinction, except in the most imminent of cases, when the factors affecting it are not biological but dependent on markets, prices, law enforcement and political stability.[111]

Furthermore, against IUCN’s description of its criteria as “simple, pragmatic, objective and non-discriminatory”, SSN said they were ‘unnecessarily complex, almost impossible to fulfil in practice, arbitrary, biased against the listing of species, and highly discriminatory against any Party – particularly one with limited resources – that seeks CITES protection for its wildlife’.[112] What is more, the criteria, in SSN’s view, were technically in violation of the CITES treaty, being more ‘stringent than the language of the treaty allows’, and, in the case of Appendix II, ‘requiring species … to meet rigid biological criteria when none are required’ under the treaty. Echoing a response found among the comments to the draft of the Mace-Lande paper, SSN also made a point that connected the issue of resources to the question of who should have an authoritative voice on extinction: by ‘setting quantitative requirements for the preparation of listing proposals that only the wealthiest Parties can meet, while eliminating from consideration the qualitative judgements of wildlife officers in the field’, the IUCN infringed the ‘sovereign rights of the Parties’.[113]

The SSN’s specific complaints about the criteria are interesting too.[114] A species that had just failed to satisfy the quantitative measures of population proposed by IUCN for CITES Appendix I, perhaps stabilised because it had been listed, would have to be ‘downlisted even if the result would subject it to further endangerment’.[115] (In such a case, the SSN noted, the precautionary principle, if retained, would prevent this move.) As an example of arbitrariness entering under the cover of objectivity, the SSN pointed to the population size: ’35,000 or 250,000 would be just as “objective” as a limit of 2,500. The only difference is that the smaller the number, the more restrictive the requirements’. The choice of the lower level was ‘arbitrary’ and dangerous. ‘We doubt’, wrote the SSN authors, ‘that, under these criteria, the black rhinoceros (for example) would have qualified … in the 1960s’. Finally, PVA, proposed for Appendix II, received another battering:

Restricting quantitative analysis to a single method, the Population Viability Assessment (PVA), is not acceptable. PVAs cost thousands of dollars to produce, placing them beyond the reach of all but the richest states. Requiring a PVA takes power away from less wealthy countries and, in effect, gives it to bodies like the IUCN with the ability to make such assessments. Countries should be able to provide proposals based on their reasonable abilities to collect data.[116]

I think it is fair to see here not only a concern that less wealthy countries might be disadvantaged, but also, more self-interestedly, a fear that less wealthy NGOs might be unable to speak with authority in an international conservation system founded on expensive science.

There is not space here to analyse in detail the comments of other organisations. Some echoed the SSN’s complaints (indeed member organisations often did so explicitly, while adding their own concerns), some opposed them (one, alerted to the SSN’s campaign, called it ‘hyperbole’[117]), some made new criticisms, while others offered qualified support for aspects of the IUCN’s approach, even, in the case of the Wilderness Society, for the usefulness of PVA.[118] There were some strange bedfellows. The National Rifle Association was particularly positive:

The NRA agrees with the concerns … about the use of the Berne Criteria for it allows the emotional, inconsistent listing and delisting of species. We believe it is important to begin the process of developing true criteria that will be based upon scientific data, rather than political correctness. The NRA supports the IUCN recommendations as a vital step in that process.[119]

Commercial interests also weighed in. Richard E. Gutting, Jr, the appropriately named vice president of the National Fisheries Institute, Inc. (‘which represents over 1,000 US companies, engaged in all aspects of the fish and seafood industry’), supported ‘efforts to develop more scientifically-based and objective criteria’ but not ones that might lead to more ocean fish being listed.[120]

While the IUCN backed population-based criteria it is clear that there were alternatives, which also had claims to being objective. At some time in 1993 or 1994 a workshop, funded by IUCN and the Japanese government through the CITES secretariat, was held to assess the choices.[121] These were: population-based (ie Mace-Lande), distribution-based (one was offered by David Given, another called ‘MASS’, and another with a ‘similar underlying rationale’ was proposed by the Nature Conservancy), or management-based (Justin Cooke). The questions to which robust, quantifiable answers might be expected were, respectively, how many, where, or how managed? I have not found a record of discussion of which of these, perhaps in combination, might be the best approach.[122] However, the IUCN position on CITES, as it moved forward, was firmly population- and population-biology-based.

The IUCN’s proposed criteria moved through the CITES Standing Committee in March 1994, with minor modifications.[123] The crucial event was the 9th meeting of the CITES Conference of the Parties, to be held at Fort Lauderdale, United States, in November 1994. The IUCN prepared a strong set of position papers, outlining and defending its case. Overall, the aim was to reform CITES’ approach so that any trade in listed species was done in a manner that was demonstrably sustainable.[124] The new system, said IUCN, must discourage ‘violent swings in policy from over-exploitation to trade bans’ since they were ‘generally counter-productive to the development of longer-term, more rational conservation programmes’. Furthermore, developing countries, by benefiting from sustainable trade, would have an ‘incentive’ to secure the ‘continued existence’ of ‘economically valuable species’. In summary:

CITES should be used as a positive conservation tool to promote responsible forms of management for the species, population or country in question. Trade bans should not be seen as the desired objective. The recovery of populations of economically and socially valuable endangered species should be promoted to the point where they can be harvested sustainably for the benefit of conservation.

But essential to a move towards this more ‘rational’ system, in the IUCN’s eyes, was the acceptance of the ‘extremely important draft resolution which would radically revise the criteria for listing species on the Appendices, … [this was] perhaps the most fundamental issue that will be discussed by the Parties at this meeting’. IUCN, WWF and TRAFFIC (the joint monitoring network overseen by IUCN and WWF) combined to ‘emphasise the importance of … quantitative guidelines in the criteria, so that they may be used objectively’.[125] In other words, population biology was key.

Interestingly, the IUCN and WWF noted two of the ‘difficult balances’ that had needed to be struck: ‘objectivity’ vs. ‘flexibility’ and ‘practicality’ vs ‘scientific rigour’. On the first of these:

… if objectivity is taken too far, the process could become purely mechanistic, and leave no room for human interpretation of some very complex issues. Species have such a wide range of life-forms that some degree of flexibility is required. The criteria achieve flexibility by providing a choice of options, some of which are more appropriate for some species than others. Flexibility is important to ensure that the species that really need to be listed are listed. However, if taken too far, flexibility is dangerous, since it can be used by vested interest groups to prevent the listing of species on political and economic, rather than conservation, grounds. We believe that the balance between flexibility and objectivity is about right in the new criteria.

Objectivity, itself, it seems, could be flexible! On the second:

The scientific basis of the criteria is sound. They have been developed by a number of top-level scientists from around the world. However, the criteria have also been designed to be practically usable by developing and developed countries, as well as being scientifically valid … Adding more criteria for the sake of scientific purity will make the system more difficult to use, and is unlikely to improve the overall results in terms of which species get listed. We believe that the difficult balance between scientific rigour and practicality has been successfully struck.

This language is interesting because it is shared with an intervention by the United States in the run-up to Fort Lauderdale, presumably made before the IUCN and WWF submitted their final position statements. Rather than appeal to numbers, the United States wanted to appeal to informed expert judgement:

Objectivity does not imply that threshold values of abundance, areas inhabited, and other relevant factors must be quantified in all cases. Objectivity in listing means identifying which factors are important in conserving threatened and endangered species and using the best available scientific and trade information on identified criteria developed for each taxonomic group. Using this approach, expert scientific opinion can be sought to provide recommendations about species proposed for listing.[126]

Each listing might be best judged by taxon specialists. (The danger, it seems to me, is that specialists can be captured by patronage regimes, as in the case of whaling.) The United States even quoted Soulé, doyen of conservation biology, to back this argument: “Communities and ecosystems are like individuals – no two are alike”.[127]

Likewise ‘flexibility’, said the United States, was necessary for several reasons: because of the diversity of life, because the ‘world is changing in large-scale ways (for example, habitat destruction and global change) that we cannot fully predict’, and to allow ‘a listing before definitive studies are done’ when declines were suspected. The precautionary principle was even invoked. Therefore, returning to the place of quantification:

Although on the surface it seems enticing to accept numerical thresholds as part of the listing criteria, it is impossible to find threshold values for abundance, probability of extinction, and trends which are equivalent for all cases …

Objectivity and flexibility in listing criteria are not mutually exclusive. In fact, using a single numerical value for each listing criterion to be applied across all species is neither objective nor flexible.

Alternative biological criteria were offered. The United States’ proposal spoke of ‘measurement’ but included no threshold numbers. The significance of the measurement would then be in the eye of the beholder. Here was a fundamental challenge to the IUCN’s approach. The careful language of the IUCN and WWF position statement, identifying ‘objectivity’ and ‘flexibility’ in the tabled criteria, seems to be not a compromise but a rebuttal of this dangerous line.

The IUCN had commissioned tests of the draft CITES criteria against a range of species. The results allowed the IUCN and WWF to offer reassurance that ‘”Flagship” species of particular interest to many conservationists such as the chimpanzee, cheetah, tiger, the Asian and African elephants, all the rhinos, the blue, humpback and all the other threatened great whales, the marine turtles, and the Asian slipper orchids satisfy the criteria for Appendix I’.[128] (Some, nevertheless, would be removed, such as the peregrine and the leopard.)

At Fort Lauderdale, much of the first week was spent discussing species-specific proposals made by different countries. On the 15th November, at the end of the tenth session, the ‘new criteria for amendment of Appendices I and II’ were introduced. The eleventh session proceeded speedily: the delegation from Switzerland (home of IUCN’s headquarters) ‘agreed with the suggested changes and, recognizing that the criteria would never be perfect but that they should be tested, called for a vote to adopt the draft resolution’.[129] Zimbabwe (as we have seen, an originator of the process) seconded. The vote was 81 in favour, and none against (119 parties were present, so some abstained). The approved document, Com 9.17, was a compromise: the main text of the criteria had no numbers but had to be read in the context of an annex which did. However, the numbers were each described as a ‘guideline (not a threshold)’.[130] This was a partial success for the population biologists on the bruising battlefield of international trade and conservation.

 

IUCN criteria

There were fewer interests at play in negotiating the IUCN’s own criteria. The Categories of Threat workshops of November 1992 led to the eventual publication of successive versions of criteria for the listing of species in the IUCN Red List.[131] At the workshops, 33 participants agreed upon using the categories ‘Critical’, ‘Endangered’, ‘Vulnerable’ and ‘Susceptible’, and divided into working groups exploring the consequences for specific criteria for the major taxonomic groupings: plants, invertebrates, lower and higher vertebrates. In discussion, the lower vertebrate and invertebrate criteria converged to such an extent that the workshop was able to propose aiming for three sets of criteria, one for higher vertebrates, one for plants, and one for lower vertebrates and invertebrates combined.[132] There followed a phase of drafting, complete by January 1993. In the process, the three sets of criteria were further consolidated into a single set: even though there were ‘inconsistencies in the criteria applied across the major taxonomic groups’, these were ‘hard to minimise’, and it was ‘felt that the system would be simpler, with fewer potential contradictions, if the criteria could be consolidated into a single list, even if this did make the list longer and more complex’.[133] Attention was also paid to making the criteria workable in conditions of poor data. Quantitative definitions were given to ‘Critical’, ‘Endangered’ and ‘Vulnerable’, while non-quantitative descriptions were given of ‘Extinct’, ‘Extinct in the Wild’, ‘Susceptible’, ‘Safe/Low Risk’, ‘Insufficiently known’, and ‘Not evaluated’. What is essential to note is that the system was to be universal: every taxon had a place and, for those allocated to the key categories, a quantified one. The draft criteria (later dubbed ‘Version 2.0’) were published in the IUCN’s Species Survival Commission’s own journal Species in May 1993[134], with comments invited by 30th June 1993.

The comments received fill a thick, ring-bound file, perhaps 200 pages in total.[135] They are another extraordinary international sampling of informed and interested views about the pros and cons of placing quantitative science at the heart of global wildlife conservation. The largely contingent hitching of IUCN and CITES categories had ‘stirred up a hornet’s nest’.[136] In the United States, for example, the Secretary of the Interior had asked the head of the largest environmental organisation in the country (Jay Hair of the National Wildlife Federation) ’why IUCN was pushing a system that would end up with many fewer species protected by CITES or considered threatened by the IUCN’. Spurred by controversy, many conservationists felt compelled to state explicitly their views about the aim for objectivity.

The call for comments took place alongside a concerted effort to test the criteria. Fifty-nine SSC members, specialists in mammals, birds, reptiles, plants, fish and invertebrates, compared the results of applying the new IUCN criteria against old Red Data Book (1990) classifications.[137] Over 500 species were reviewed. Several local experts were disturbed when creatures they regarded as in need of conservation, from raft spiders to numbats, emerged with downgraded categories. To Andrew Burbidge of the Western Australia Threatened Species and Communities Unit, for example, it was ‘abundantly clear that a strict application of [the criteria] will result in several Australian marsupials that are listed in the Action Plan as Endangered or Vulnerable, being listed as Safe/Low Risk (when any conservation biologist can tell you that they are threatened)’.[138]

Specialists struggled to interpret the general criteria in relation to their specific knowledge. What, for example, counted as a range, what counted as a river, what counted as an individual? This issue of rules-and-cases is a typical problem of standards that is shown in sharp relief when the criteria are new and unfamiliar. Ana Isabel Queiroz, an expert on the Portuguese sub-species of the Pyrenean desman, an odd creature perhaps best described as a rare kind of large river mole, eventually decided that her taxon was ‘Endangered’.[139] But that depended on how she interpreted ‘500 km2’ of riparian habitat: ‘If we considered a band with 20 meters width [perhaps the scale of a typical Portuguese hillside brook], it will be 25.000 Km of rivers!!!’. A bryophyte expert wrote:

For example, the liverwort Herbertus borealis is not considered endangered, because the population is apparently fairly stable and it occurs within a National Nature Reserve. However, it is only known from a single site in Britain, it is not known fertile and a single accidental fire could wipe it out. Therefore it is vulnerable. Using the new proposed criteria, H. borealis would only be Susceptible, assuming that there are more than 1,000 ‘mature individuals’.[140]

Even then this may not be the case: ‘there may or may not be, depending on the concept of the individual’. After all: ‘What is an individual? This is usually clear for animals but it is a problem for many plants, particularly lower plants’.

Another concern was that the criteria would crumble in the adversarial conditions of the legal court. For Elaine Hoagland, executive director of the Association of Systematics Collections, based in Washington DC, the conflict between the scientist and the lawyer would be brutal and brief:

… an even greater difficulty in the IUCN draft is that the burden of proof is borne entirely by the scientist. In the US if not elsewhere, there would be a field day for lawyers wanting to attack the listing of a species, because, using traditional Popperian hypothesis testing, scientists can only prove what ISN’T TRUE… We can NEVER say that there are fewer than 5 populations [say], because we haven’t looked everywhere. There can always be another population lurking around the next corner. … “How hard did you look? Couldn’t there have been one you missed?” the lawyer would say. “Well, if you missed one, how many more could you have missed?”[141]

‘We did not prepare this document for advocacy or litigation’, replied Mace, ‘clearly it would have been very different if we had done so, or more likely it would have been drafted by an entirely different set of people!’.[142] A fair point, but authors cannot control the uses texts are put to.

The comments and tests of the criteria were reviewed at meetings in October and November 1993.[143] Many problem areas were identified and discussed. Confusion about the application of categories, it was suggested, might be resolved by better documentation and the provision of worked examples. The role, or not, of the precautionary principle again provoked debate:

The group was divided on the extent to which the criteria did incorporate the precautionary principle, and the extent to which consequences of listing a taxon as threatened should or should not play a part in the listing process. For example, if a species would move from VU [Vulnerable] to EN [Endangered] if it were not listed as EN, should it then be listed as EN?[144]

This self-referential circularity – the way that placing a species in a category might demand a change in category – was a thorny issue. ‘On balance, it was agreed that the criteria should aim to reflect current risk levels directly’, noted the minutes, ‘This has always been accepted as the fundamental role of the categories’. Further discussion of the precautionary principle was punted upstairs to the IUCN. A similar issue was what to do with protected species, such as a rhino in a game reserve: were they Safe/Low Risk (assuming continuation of protection), perhaps Endangered (as if the protection was removed), Susceptible (if the criteria were tweaked), or a new category? Ingemar Ahlén, a Swedish rare plant specialist, had suggested ‘Care-demanding (CD)’, an idea that was taken up in discussion.[145]

One of the trickiest details of the criteria was deciding the quantitative levels of transition between categories. For all the talk of objectivity, there was, as critics maintained, an irreducible arbitrariness in, for example, whether ‘Endangered’ referenced likely extinction over 50 (or 25, or 75, or 100, say) years, or populations of 1,000 (or 500, or 10,000, say). The group was well aware of a widespread ‘wish’ among concerned specialists ‘to intervene earlier’.[146] But longer periods also made projection based on quantitative population modelling harder, so there was a trade-off between the desired aims of objective quantification and effective conservation.

Long-lived creatures, especially sea turtles, were problematic because empirical PVA studies would take decades to complete.[147] The continued inclusion of Population Viability Analysis as part of the criteria was still a ‘controversial suggestion’. PVA had been originally included, the group now recalled, for two reasons. First, it provided ‘a quantitative target for the other criteria in terms of extinction risk, the present definitions [being] … qualitative only’, and, second, it provided for ‘species whose particular circumstances are not met by the current criteria’. So, while ‘many respondents were concerned that PVA analyses are often poorly implemented and documented, and felt that this would be a weak link in the criteria’, the review group ‘agreed to keep this criterion’.

On the question of ‘what is an individual’, some extra detail was offered. A ‘mature individual’, for example, would exclude those that were ‘non-reproductive’, ‘post-reproductive’ or those ‘behaviourally, environmentally and physiologically suppressed from reproducing’. But on the question of ‘What is an individual in asexual and clonal forms?’, the panel could ‘not think of anything to improve the current statement’.

Finally, tasks of ‘validation’ were still required. While the SSC review had ‘provided sufficient information’ as tests of practicality, applicability across taxa, and appropriate levels[148], two questions remained. First, there was a task described as ‘testing for internal consistency of criteria in each category’. This seems to be recognition of the difficulty of justifying specific numbers for specific levels. The group

Agreed that the best thing to do here would be to draft a general introduction to the criteria that simply stated that each of the criteria had been independently set at what seemed to be an appropriate level, but formal justification could not be presented for the quantitative values.

The second final task was described as ‘testing for objectivity’. What would this be? ‘Ideally we need at least 30 taxa classified by at least three people using the same basic data for at least three major taxa’. Mike Maunder at Kew (for plants), Nigel Collar (for birds) and Josh Ginsberg (for mammals) were assigned the work for this test.

In December 1993 a new draft (‘version 2.1’) was written based on the review group’s work. This was presented at the IUCN General Assembly in Buenos Aires in January 1994. After a further round of comments, ‘version 2.2’ was published in Species in August 1994. Again the categories were put to the test. Finally, on 30th November 1994 the 40th meeting of the IUCN Council approved ‘version 2.3’, the document having passed the IUCN’s Policy Committee the day before.[149] The final criteria have eight categories, three of which (‘Critically Endangered’, ‘Endangered’ and ‘Vulnerable’) are given detailed, quantifiable definitions.[150] Significantly, the final criteria allowed room both for ‘hard’ PVA-style evidence and for expert judgement. For example, a taxon was to be categorised ‘Critically Endangered’ when it was ‘facing an extremely high risk of extinction in the wild’ as defined by sub-criteria that included either ‘population reduction’ that could be an ‘observed, estimated, inferred or suspected reduction of at least 80% over the last [or next] ten years or three generations’, or ‘Quantitative analysis showing the probability of extinction in the wild is at least 50% within 10 years or three generations, whichever is the longer’, among others.[151] The point is that population biology was written into the heart of the criteria, but wriggle room (for example, ‘suspected’) existed too.
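
(Again purely as an illustration, and emphatically not IUCN’s own text, the logic of the two quoted ‘Critically Endangered’ sub-criteria might be sketched as follows; the function name and parameters are my own invention, and the accepted kinds of evidence are where the ‘wriggle room’ enters.)

```python
# Illustrative sketch of the two quoted 'Critically Endangered' sub-criteria
# from the 1994 version 2.3 system; not the full IUCN rule set.
from typing import Optional

EVIDENCE_KINDS = {"observed", "estimated", "inferred", "suspected"}


def critically_endangered(reduction_fraction: float,
                          evidence: str,
                          extinction_prob: Optional[float] = None) -> bool:
    """Return True if either quoted sub-criterion is met.

    reduction_fraction: population reduction over the last (or next) ten years
        or three generations, e.g. 0.8 for an 80% reduction.
    evidence: how the reduction is known; the four accepted kinds are the
        'wriggle room' written into the criteria.
    extinction_prob: probability of extinction in the wild within ten years or
        three generations (whichever is longer), if a quantitative analysis
        such as a PVA exists.
    """
    reduction_met = evidence in EVIDENCE_KINDS and reduction_fraction >= 0.80
    analysis_met = extinction_prob is not None and extinction_prob >= 0.50
    return reduction_met or analysis_met
```

On this reading, critically_endangered(0.85, 'suspected') returns True on a suspected decline alone, with no quantitative analysis at all, which is exactly the latitude the final criteria preserved.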

 

Conclusion

Conservation work is, to a degree that is surprising to an outsider who might be more inspired by what E.O. Wilson labelled ‘biophilia’, paperwork. Bureaucracies require categories. A conservationist writing an action plan for a species, whether it be on a global, national or local government scale, has had to appeal to a category of threat to justify measures to be taken. In this paper I have examined how categories of threat were rewritten in the late twentieth century, so that they incorporated measures drawn from quantitative population biology, at a time of crisis: Robert May and colleagues, in 1995, estimated that extinction rates were between 1,000 and 10,000 times higher than background levels.[152] This has been the story of how a specific science created tools for getting to grips with the sixth mass extinction.

The IUCN’s new categories were indeed used. The 1996 Red List of Threatened Animals has been described as a ‘major turning point… For the first time in a single global list, the conservation status of all species of birds and mammals (rather than just the better-known or more charismatic species) was evaluated, and all the assessments were based on the … new quantitative criteria’.[153] The list has been further extended (the 2000 Red List included plants and required a CD to hold the data), embedded in all the hierarchies of scale of conservationist practice, and even informs the public through Wikipedia.[154] The CITES criteria likewise, through the listings of the appendices, constrain global sustainable trade in threatened creatures, and their categories in turn shape conservation biology.[155]

The turn from appeal to trust to appeal to numbers at times of crisis is precisely the pattern we might expect from reading Theodore Porter’s account of the historical causes of quantification.[156] However, the process was not smooth. I’ve shown that quantitative biology was written into conservation criteria through negotiation (redrafting), drawing in some critics while perhaps side-lining others. The final criteria, in both the Red List and CITES cases, were political compromises: they certainly embedded quantitative biology, but did so as a result of negotiation within the range of choices available.

Objectivity was a key word, whose application and meaning were debated. Objectivity was claimed but also attacked, as impractical, undesirable and, for most species, impossible. Objectivity has been the focus of ground-breaking scholarship in the history of science.[157] Daston and Galison have shown that there are types of objectivity, and many features are displayed in my case studies. Subjectivity remained, partly as a result of concessions in negotiation but also, for example, in the irreducible arbitrariness in choices of levels. The choice of time scales and threshold levels involved a trade-off that depended on many factors, including evidence of population viability, meaningfulness in conservation campaigns, practical resources for fieldwork, and political purchase.

While an explicit appeal of the reformed criteria was that they might separate the assessment of threat from the kinds of threat (a hoped-for separation of science from politics), in fact this proved profoundly difficult, although it was realised to some degree. Nevertheless, the historically contingent connection between the causes of criteria reform and African-led sustainable trade reform provided an additional reason why the science was shaped in context. Interest accounts give a crude sense of what was at play. The aim of the African countries was to restart legal, international trade in products of threatened animals. To do this required a change in how species were listed in CITES appendices. And a change would be more acceptable if it could be scientifically justified. Therefore population biology was an ally. An aim of population biologists was to describe populations scientifically. To do this required resources. Resources could be obtained if population biology analyses were mandated in international treaties. Therefore the CITES reform campaign was an ally. The aim of conservationists was the preservation of the world’s wildlife, balancing political and economic demands while securing a working system of conservation. Therefore conservation bodies, in this case the SSC of the IUCN, brought together the CITES reformers and the population biologists. Such a summary is crude, but it captures some of the dynamic at play.

==

[1] Richard Leakey and Roger Lewin, The Sixth Extinction, London: Weidenfeld & Nicolson, 1996, popularised this term. The idea of a general extinction crisis gathered force as broader data gathering in the mid-twentieth century made plain that what was known to have happened to individually infamous cases – the dodo, passenger pigeon, moa – were instances of a general pattern of human impact. In 1979 Norman Myers predicted that an extinction rate of one per day would soon become one per hour. Norman Myers, The Sinking Ark: a New Look at the Problem of Vanishing Species, Oxford: Pergamon, 1979.

[2] Mark V. Barrow, Jr., Nature’s Ghosts: Confronting Extinction from the Age of Jefferson to the Age of Ecology, Chicago: University of Chicago Press, 2009.

[3] Stephen Bocking, Ecologists and Environmental Politics: a History of Contemporary Ecology, New Haven: Yale University Press, 1997.

[4] Sharon Kingsland, Modeling Nature: Episodes in the History of Population Ecology, Chicago: University of Chicago Press, 1988, second edition 1995. Paolo Palladino, ‘Defining ecology: ecological theories, mathematical models, and applied biology in the 1960s and 1970s’, Journal of the History of Biology (1991) 24, pp. 223-243.

[5] An insiders’ account of recent history is: Curt Meine, Michael Soulé and Reed F. Noss, ‘“A mission-driven discipline”: the growth of conservation biology’, Conservation Biology (2006) 20, pp. 631-651.

[6] James Maclaurin and Kim Sterelny, What is Biodiversity? Chicago: University of Chicago Press, 2008.

[7] Kevin J. Gaston, Rarity, London: Chapman & Hall, 1994.

[8] Lawrence Busch, Standards: Recipes for Reality, Cambridge, MA: MIT Press, 2011. Geoffrey C. Bowker and Susan Leigh Star, Sorting Things Out: Classification and its Consequences, Cambridge, MA: MIT Press, 1999.

[9] Barrow, op. cit., p. 140.

[10] For the twentieth-century development of international environmental governance, see: Lynton Keith Caldwell, International Environmental Policy: Emergence and Dimensions, Durham, NC: Duke University Press, 1984.

[11] Peter Scott, John A. Burton and Richard Fitter, ‘Red Data Books: the historical background’, in Richard and Maisie Fitter (eds.), The Road to Extinction: Problems of Categorizing the Status of Taxa Threatened with Extinction, Gland: IUCN, 1987, pp. 1-5. N.J. Collar, ‘The reasons for Red Data Books’, Oryx (1996) 30, pp. 121-130, gives different publication dates (1964 for mammals, 1968 for birds and reptiles and amphibians).

[12] A good case could be made for the first Red Data Books (as unmodified first editions) being the rarest important twentieth-century books. Even the British Library’s “first editions” are updated. The cause is the instruction the book gives to readers to destroy its parts: ‘To avoid confusion it will generally be found advisable to destroy original sheets removed from the volume when replacements are received’. IUCN, Red Data Book. Volume 1: Mammalia, compiled by Noel Simon, 1966.

[13] 1969 sheet added to IUCN, Red Data Book. Volume 1: Mammalia, compiled by Noel Simon, 1966. The British Library copy of the birds volume has a 1966 classification definition sheet, and it is different from the mammals 1969 one. It is not clear whether the mammals 1966 sheet was similar, probably not. IUCN, Red Data Book. Volume 2: Aves, compiled by Jack Vincent, 1966.

[14] (a) = Full species, (b) = Subspecies, E = Exotic, introduced or captive populations believed more numerous than indigenous stock, M = Under active management in a national park or other reserve, P = Legally protected, at least in some part of its range, R = Included because of restricted range, S = Secrecy still desirable, T = Subject to substantial export trade.

[15] For example here is primatologist Russell Mittermeier: ‘I still have fond memories of receiving in the mail my copy of the first Red Data Book…. I was about 20 when I first received this publication, and it had a profound impact on me. I pored over every page, reading each one dozens of times, feeling awful about those species that were severely endangered, and resolving to dedicate my career to doing something on their behalf’. IUCN, 2000 IUCN Red List of Threatened Species, compiled by Craig Hilton-Taylor, 2000, p. xi.

[16] Paul Munton, ‘Concepts of threat to the survival of species used in Red Data Books and similar compilations’, in Fitter and Fitter, op. cit., pp. 71-88, p. 77.

[17] Munton, op. cit., p. 88. To take an example from Gaston, op. cit., p. 6, W. Beebe’s definition of a rare species, while studying a quarter of a square mile of jungle in British Guiana in the 1920s was one ‘observed, but seldom’.

[18] Munton, op. cit., p. 86.

[19] For Holt in whale politics, see: D. Graham Burnett, The Sounding of the Whale: Science and Cetaceans in the Twentieth Century, Chicago: University of Chicago Press, 2012.

[20] Sidney J. Holt, ‘Categorization of threats to and status of wild populations’, in Fitter and Fitter, op. cit. pp. 19-30. Another user group can be added. M.G. Morris, director of Furzebrook Research Station, wrote: ‘ I do not entirely agree with Holt,… because of my background in insect conservation. There is a fourth group – the informal specialist (amateur as well as professional) – which needs RDBs’. GMM Box 7. Morris to Mace, 23 October 1989. Legal aspects were further addressed in Madrid, see: Michael J. Bean, ‘Legal experience and implications’, in Fitter and Fitter, op. cit., pp. 39-43.

[21] W.A. Fuller, ‘Synthesis and recommendations’, in Fitter and Fitter, op. cit., pp. 47-55, p. 51.

[22] Martin Holdgate, The Green Web: a Union for World Conservation, London: Earthscan, 1999, an insider institutional history of the IUCN, makes the World Conservation Strategy the main focus of its work.

[23] GMM Box 7. Seal to Mace, 13 April 1989.

[24] GMM Box 7. Holdgate to Lucas, copied to Rabb, Seal, Edwards, Stuart, 5 May 1989.

[25] Virginia Gewin, ‘Movers: Georgina Mace, director, Centre for Population Biology, Imperial College London, UK’, Nature 444, 240 (8 November 2006) http://www.nature.com/naturejobs/science/articles/10.1038/nj7116-240a

[26] Georgina Mace, ‘Genetics and demography in biological conservation’, Science (1988) 241, pp. 1455-1460.

[27] GMM Box 7. Fax, Stephen R. Edwards and Simon Stuart to Mace, copies to Lucas and Rabb, 9 May 1989. Three documents were suggested as starting points: the Species Action Plans as prepared by SSC specialist groups (especially the Antelope Plans, which were seen as being particularly sophisticated), Fitter and Fitter’s Road to Extinction, and Amie Brautigam’s ‘CITES: a conservation tool’, which contained criteria of note.

[28] GMM Box 7. Mace, ‘Assessing extinction threats: towards a re-evaluation of Red Data Book codes’, June 1989.

[29] The five factors were: a ‘total effective population size’ Ne of less than 500, declining population size (1% or greater decline over 5-10 years on the basis of three independent counts), fragmentation (eg more than half of the sub-units have Ne <= 50), where the entire species is in a single population of size <= 10,000, and where data suggest that the population is not self-sustaining.

[30] The twenty were: John Beddington, Graeme Caughley, Bill Conway, Nate Flesness, Tom Foose, Brian Huntley, John Lawton, Gren Lucas, Alec MacCall, John MacKinnon, Rowan Martin, Ian Newton, Bill Perrin, Hugh Quinn, George Rabb, Ulie Seal, Dan Simberloff, Mark Stanley Price, R. Sukumar and Jonah Western. Bob May was also in contact.

[31] Jane Thornback’s main concern was that the IUCN category system was used so extensively in the national Red Data Books and lists that the new proposals should be an ‘additional or alternative system to be applied rather than a replacement one’. GMM Box 7. Thornback to Mace, 3 July 1989. Thornback to Mace, 30 July 1989. The WCMC maintained a species database.

[32] Caughley thought existing models, ‘although being of heuristic interest, bear little relationship to reality in general and even less to specific cases. Each of these I would rate as a bad joke’. GMM Box 7. Caughley to Mace, 18 July 1989. Nevertheless, Caughley was mollified by Mace’s reply, and, while agreeing to disagree, suggested Mace publish a paper in Nature specifying what was needed from field studies to allow PVA to make accurate predictions. GMM Box 7. Caughley to Mace, 7 August 1989. GMM Box 7, Flesness to Mace, 17 July 1989.

[33] GMM Box 7. Quinn to Mace, 3 August 1989.

[34] GMM Box 7. John I. Christian (Fish and Wildlife Service) to Stuart, 23 August 1989.

[35] GMM Box 7. McNeely to Stuart, 6 July 1989

[36] GMM Box 7. Stuart to McNeely, 14 July 1989.

[37] GMM Box 7. Vernon Heywood to Stuart, 14 August 1989.

[38] GMM Box 7. Jeff Sayer to Stuart, forwarded by Stuart, 3 July 1989.

[39] McNeely again: ‘I fear that much of the work being carried out by the Specialist Groups is more often documenting the decline and eventual disappearance of species than actually generating the action necessary to conserve them … This preoccupation with extinction extends throughout the Mace paper, reflecting a more general paradigm (though I hate the use of that word) of SSC in general. Instead, what is required is to enable the relevant authorities in the respective countries to collect and manage the data they require to design and implement conservation action. I see woefully insufficient effort being put into developing the institutional and data management capacity required by Third World institutions … I therefore believe it is time to come up with a more comprehensive package of data management and field action aimed at saving the species [about] which we are all so concerned’.

[40] GMM Box 7. Huntley to Stuart, 25 July 1989.

[41] Georgina Mace, ‘Assessing extinction threats: towards a re-evaluation of IUCN threatened species categories’, August 1989.

[42] GMM Box 7. Soulé to Mace, 26 November 1989.

[43] GMM Box 7. Lawton to Mace, 8 January 1990.

[44] GMM Box 7. East to Mace, 14 November 1989.

[45] GMM Box 7. Hágsater to Mace, 23 April 1990.

[46] GMM Box 7. Martin to Mace, 15 March 1990.

[47] GMM Box 7. Des Clers to Mace, 29 January 1990.

[48] GMM Box 7. Prescott-Allen (Managing for the Future, 2nd World Conservation Strategy Project) to Mace, 1 December 1989. Bing Lucas to Stuart, 29 September 1989. Magnusson to Mace, 22 October 1989. Chivers to Mace, 29 October 1989. Pemberton to Mace, 15 October 1989. Quinn to Mace, 1 November 1989. Raven to Mace, 31 October 1989. Munton to Mace, 4 February 1990. Madulid (Manila) to Mace, 7 December 1989.

[49] GMM Box 7. Hughes to Mace, 29 November 1989. Du Toit (Harare) to Mace, 5 January 1990. Tyagi (Jodhpur) to Mace, 16 May 1990. Frazier (Heredia, Costa Rica) to Mace, 29 April 1990. For turtles, see: Alison Rieser, The Case of the Green Turtle: an Uncensored History of a Conservation Icon, Baltimore: Johns Hopkins University Press, 2012.

[50] GMM Box 7. Given, ‘Comments on paper by Georgina Mace’, undated (1989).

[51] GMM Box 7. Matola to Mace, 13 November 1989.

[52] GMM Box 7. Spinage to Mace, 28 October 1989.

[53] Jared Diamond, ‘Red books or green lists?’, Nature (24 March 1988) 332, pp. 304-305. GMM Box 7, McNeely to Mace, 16 October 1989.

[54] GMM Box 7. Groombridge to Stuart, 1 November 1989.

[55] GMM Box 7. Moore to Mace, 19 October 1989.

[56] GMM Box 7. Samways to Mace, 27 October 1989.

[57] GMM Box 7. Morris to Mace, 23 October 1989

[58] GMM Box 7. New to Mace, 24 October 1989.

[59] GMM Box 7. Holdgate to Stuart, 26 November 1989.

[60] GMM Box 7. Soulé to Mace, 26 November 1989.

[61] GMM Box 7. Gorzula to Mace, 23 November 1989.

[62] GMM Box 7. East to Mace, 14 November 1989.

[63] GMM Box 7. Holt to Mace, 28 October 1989.

[64] GMM Box 7. Klinowska to Mace, 2 November 1989.

[65] See: Burnett, op. cit.

[66] GMM Box 7. Pritchard to Mace, 1 November 1989.

[67] GMM Box 7. Frädrich to Stuart, 31 October 1989.

[68] GMM Box 7. Kemp to Mace, 19 October 1989.

[69] GMM Box 7. Fay to Mace, 7 December 1989.

[70] GMM Box 7. McAllister to Mace, 15 January 1990.

[71] GMM Box 7. Ranjitsinh to Mace, 24 November 1989. For Ranjitsinh, see: Michael Lewis, ‘Indian science for Indian tigers? Conservation biology and the question of cultural values’, Journal of the History of Biology (2005) 38, pp. 185-207.

[72] GMM Box 7. Perry to Mace, 1 November 1989.

[73] GMM Box 7. Webb to Mace, 25 October 1989.

[74] GMM Box 7. Boitani to Mace, 8 November 1989.

[75] GMM Box 7. Holt to Mace, 28 October 1989.

[76] GMM Box 7. Murray to Mace, 23 November 1989.

[77] GMM Box 7. Groombridge to Stuart, 1 November 1989.

[78] GMM Box 7. Hoffmann to Mace, 6 December 1989.

[79] GMM Box 7. Usher to Mace, 16 November 1989.

[80] GMM Box 7. Oates to Mace, 17 November 1989.

[81] GMM Box 7. Hoffmann to Mace, 6 December 1989.

[82] GMM Box 7. Short to Mace, 23 October 1989.

[83] GMM Box 7. Ginsberg to Mace, 12 December 1989.

[84] GMM Box 7. Bell to Mace, 11 January 1990.

[85] GMM Box 7. McNeely to Mace, 16 October 1989.

[86] GMM Box 7. Estes to Mace, 30 November 1989.

[87] GMM Box 7. Duffy to Mace, 31 October 1989.

[88] GMM Box 7. Moore to Mace, 19 October 1989.

[89] GMM Box 7. Pritchard to Mace, 1 November 1989.

[90] GMM Box 7. “Sandy” to Mace, 21 October 1989.

[91] GMM Box 7. Usher to Mace, 16 November 1989.

[92] GMM Box 7. Lidicker to Mace, 27 October 1989.

[93] GMM Box 7. Reece to Seal, 27 September 1989.

[94] GMM Box 7. Mace to Stuart, 20 December 1989.

[95] William F. Morris and Daniel F. Doak, Quantitative Conservation Biology: Theory and Practice of Population Viability Analysis, Sunderland, MA: Sinauer Associates, 2002, p. 3.

[96] Georgina M. Mace and Russell Lande, ‘Assessing extinction threats: towards a re-evaluation of IUCN threatened species categories’, November 1989.

[97] GMM Box 7, Fax, Lande to Mace. There is an issue with dating this document. The automatic fax dating reads ‘11/06/89’ (which is probably a setting error), the faxed letter is typed ‘October 31, 1989’ but is over-written in hand ‘Nov 6’.

[98] The addition of a measure by generations was a response to the criticism of the first draft made by R. Sukumar of the Indian Institute of Science. Sukumar noted that Asian elephant populations of even very small size could survive for over 100 years. GMM Box 7, Sukumar to Mace, 1 August 1989.

[99] Georgina Mace and Russell Lande, ‘Assessing extinction threats: toward a reevaluation of IUCN threatened species categories’, Conservation Biology (1991) 5, pp. 148-157, p. 149.

[100] GMM Box 5. Stuart, ‘CITES Appendix listing criteria’, 20 April 1993.

[101] Peter Aldhous, ‘Critics urge reform of CITES endangered list’, Nature (1992) 355, pp. 758-759.

[102] GMM Box 5. ‘The CITES Conference of the Parties. Kyoto, Japan, 2-13 March 1992. A statement by IUCN – The World Conservation Union’, 1992.

[103] GMM Box 8. Robinson to Mace, 30 July 1992.

[104] GMM Box 8. Stuart to Lawton (Centre for Population Biology, Imperial College), 24 August 1992. Stuart to Leader-Williams (Department of Wildlife, Tanzania), 24 August 1992. Stuart to Given (New Zealand), 24 August 1992. Stuart to Cooke (Centre for Ecosystem Management Studies, Germany), 24 August 1992. Stuart to MacKinnon (Hong Kong), 24 August 1992. Stuart to Luxmore (World Conservation Monitoring Centre, Cambridge), 24 August 1992. Stuart to Mace, 24 August 1992.

[105] GMM Box 5. ‘IUCN – The World Conservation Union. New criteria for listing species in the CITES appendices. A project proposal’ sets out the details of consultations, workshops and the plan for writing the report, as well as funding and who was involved. GMM Box 8 contains the papers for organising the ‘Categories of Threat’ workshop. GMM Box 2 contains papers of the ‘technical workshop’ (9-11 November 1992) and the ‘applications workshop’ (12-13 November 1992).

[106] GMM Box 5. Stuart to Mace, 21 December 1992.

[107] GMM Box 5. ‘Draft. New criteria for listing species in the CITES appendices. A report and recommendations from IUCN – The World Conservation Union to the CITES Standing Committee’, December 1992.

[108] It is a curious feature of late twentieth-century international politics that “networks” opposed centres (in this case a commission).

[109] GMM Box 5. SSN, ‘CITES and the revision of the Berne criteria. A report to the Standing Committee of CITES by the NGO Working Group on New Listing Criteria’, 1 March 1993.

[110] “The status of a population or species should be up-listed … as soon as current information suggests that the criteria are met. The status of a population or species with respect to extinction should be down-listed … only when the criteria of the lower risk category have been satisfied for a time period equal to that spent in the original category, or if it is shown that the past data were inaccurate”.

[111] SSN, op. cit., p. 11.

[112] SSN, op. cit., p. 10.

[113] SSN, op. cit. p. 3.

[114] Some were very specific, but telling, such as the complaint about the requirement under the Appendix II criteria that ranges were measured as convex polygons. A crescent-shaped (ie concave) distribution, with small area, would have to be counted as a much larger area, requiring an endangered species to be de-listed.

[115] SSN, op. cit., p. 11.

[116] SSN, op. cit., p. 15.

[117] The Wildlife Legislative Fund of America, a pro-hunting lobby group. GMM Box 5. Dexter to Dane, 23 June 1993.

[118] GMM Box 5. Shaffer to Dane, 11 July 1993.

[119] GMM Box 5. Lamson to Dane, 15 July 1993.

[120] GMM Box 5. Gutting to Dane, 15 June 1993.

[121] GMM Box 6. Stuart, ‘Criteria for listing species on the CITES appendices and the IUCN Red List’, undated.

[122] At stake was the answer to the question ‘should there be a universal set of definitions, or are different definitions needed for different groups of organisms?’ The stumbling block was ‘whether or not population-based criteria can ever be applied in a practical fashion to “difficult” groups such as invertebrates or plants’.

[123] An excerpt of the proposed changes can be found in GMM Box 6. ‘Biological criteria for Appendix I’, probably authored by CITES secretariat, faxed 18 April 1994.

[124] GMM Box 6. ‘A statement by IUCN – the World Conservation Union to the Ninth Meeting of the CITES Conference of the Parties. Fort Lauderdale, United States, 7-18 November 1994’, undated (1994).

[125] GMM Box 6. IUCN, WWF and TRAFFIC, ‘The CITES listing criteria. Position statement for the Ninth Meeting of the Conference of the Parties’. IUCN and WWF had settled some important differences earlier in 1994. The position statement requested a ‘number of important amendments’, presumably reflecting changes made to the IUCN proposals as they had passed through CITES standing committee. The changes – strikethroughs and additions – are proposed in an annex.

[126] GMM Box 6. United States, ‘Criteria for amendments of Appendix I and II’, undated (1994).

[127] Quoting Soulé, Conservation Biology, Sinauer Associates, MA, 1986.

[128] GMM Box 6. IUCN, WWF and TRAFFIC, ‘The CITES listing criteria. Position statement for the Ninth Meeting of the Conference of the Parties’.

[129] GMM Box 6. Helen Corrigan, email summary of CITES Ninth Meeting of the Conference of the Parties, Fort Lauderdale, 15 November 1994.

[130] GMM Box 6. ‘CITES. Ninth Meeting of the Conference of the Parties. Fort Lauderdale… Draft resolution of the Conference of the Parties. Criteria for Amendment of Appendices I and II’, Com. 9.17, 1994. See also Com 9.24.

[131] G. Mace, N. Collar, J. Cooke, K. Gaston, J. Ginsberg, N. Leader-Williams, M. Maunder and E.J. Milner-Gulland, ‘The development of new criteria for listing species on the IUCN Red List’, Species (1993) 19, pp. 16-22.

[132] Mace et al, op. cit., p. 16.

[133] Mace et al, op. cit., p. 17.

[134] Confusingly, this paper is often dated (including in IUCN publications) as 1992. The date of publication, says Mace, was 1 May 1993.

[135] GMM Box 2. ‘New criteria for listing species in the IUCN Red List. Comments received. 16 September 1993’.

[136] GMM Box 2. Sullivan to Mace, 29 June 1993.

[137] GMM Box 2. ‘Summary of species classifications made by SSC members using the new draft IUCN criteria compared to Red Data Book (1990) classifications’, 12 July 1993.

[138] GMM Box 2. Burbidge to Stuart, 9 July 1993.

[139] GMM Box 2. Queiroz to Mace, 24 June 1993.

[140] GMM Box 2. Hodgetts, ‘New IUCN threat criteria as applied to lower plants’.

[141] GMM Box 2. Hoagland to Mace, 31 March 1993.

[142] GMM Box 2. Mace to Hoagland, 28 July 1993.

[143] GMM Box 2. ‘IUCN threatened species categories. Review meeting. October 7th 1993. Draft minutes’.

[144] GMM Box 2. ‘IUCN threatened species categories. Review meeting. October 7th 1993. Draft minutes’.

[145] GMM Box 2. Ahlén to Mace, 23 June 1993

[146] GMM Box 2. ‘IUCN threatened species categories. Review meeting. October 7th 1993. Draft minutes’.

[147] This point was forcefully made by Deborah Crouse of the Center for Marine Conservation. GMM Box 2. Crouse to Mace, 19 October 1993.

[148] Despite the minuted recognition that there was ‘a lot of qualitative evidence in the comments that the levels, especially for population criteria, were perhaps too low’.

[149] GMM Box 2. ‘IUCN Red List categories. Prepared by the IUCN Species Survival Commission. As approved by the 40th meeting of the IUCN Council’, 30 November 1994. See also file marked ‘Original’ in GMM Box 10. See also: http://www.iucnredlist.org/technical-documents/categories-and-criteria/1994-categories-criteria

[150] The other categories were Extinct, Extinct In The Wild, Lower Risk, Data Deficient and Not Evaluated. Lower Risk had three subcategories: ‘conservation dependent’, ‘near threatened’ and ‘least concern’.

[151] The full criteria can be found here: http://www.iucnredlist.org/technical-documents/categories-and-criteria/1994-categories-criteria

[152] R.M. May, J.H. Lawton and N.E. Stork, ‘Assessing extinction threats’, in Lawton and May (eds.), Extinction Rates, Oxford: Oxford University Press, 1995, pp. 1-24.

[153] IUCN, 1996 Red List of Threatened Animals, compiled by J. Baillie and B. Groombridge, Gland: IUCN, 1996. IUCN, 2000 IUCN Red List of Threatened Species, compiled by Craig Hilton-Taylor, Gland: IUCN, 2000, p. 1.

[154] If you look up a rare creature on Wikipedia you will see the ‘conservation status’ under the picture on the right.

[155] Charis Thompson, ‘Co-producing CITES and the African elephant’, in Sheila Jasanoff (ed.), States of Knowledge: the Co-Production of Science and Social Order, London: Routledge, 2004, pp. 67-86.

[156] Theodore Porter, Trust in Numbers: the Pursuit of Objectivity in Science and Public Life, Princeton: Princeton University Press, 1995.

[157] Lorraine Daston and Peter Galison, Objectivity, New York: Zone Books, 2007.

1/2 idea No. 13: Citizen science history

By Jon Agar, on 28 July 2021

(I am sharing my possible research ideas, see my tweet here. Most of them remain only 1/2 or 1/4 ideas, so if any of them seem particularly promising or interesting let me know @jon_agar or jonathan.agar@ucl.ac.uk!)

A simple one this: I’d like to read a history of citizen science, the involvement of members of the public in science.

Even just looking at Britain, a history might include topics such as the activities of the British Astronomical Association (active since 1890) or the Botanical Society of the British Isles (origins in the 1830s), civilian satellite tracking (similar to the ‘Moonwatch’ programme in Cold War America that Doug Millard discusses in his Satellite book), the activities of the British Trust for Conservation Volunteers (started in 1959), or the land-use surveys conducted by organised teams of children in the 1930s (and repeated in the 1970s).

That barely scratches the surface.

Nearly every branch of professional science has some parallel activity among the public. The list of interesting organisations alone would run to many pages. It would be a rich story of skills, socialising, knowledge, nature, invention, travel, class, and home workshops, laboratories and observatories.

Does such a historical study exist?

Sally Shuttleworth led a major AHRC-funded project, Constructing Scientific Communities: Citizen Science in the 19th and 21st Centuries, from 2014 to 2019, which produced good work, and showed that historical studies could complement, inform and support contemporary citizen science, such as Zooniverse. But, by design, there was a gap where the twentieth-century history should be.

For natural history there is David Elliston Allen’s The Naturalist in Britain (1976). The bigger amateur bodies, such as the BAA, have their histories. But there is, I think, nothing that covers the full range or depth of the history of citizen science, either in terms of branches of science or time.

 

1/2 idea No. 12: Landscape of jamming

By Jon Agar, on 28 July 2021

(I am sharing my possible research ideas, see my tweet here. Most of them remain only 1/2 or 1/4 ideas, so if any of them seem particularly promising or interesting let me know @jon_agar or jonathan.agar@ucl.ac.uk!)

It’s going to disappoint my STS colleagues, but this is neither a proposal to study al fresco jazz nor the material culture of raspberry conserve.

Nevertheless it would be a project about jamming. I hope you like jamming too.

The idea has roots in my PhD research on the history of radio astronomy, specifically Jodrell Bank, subsequently published in Science and Spectacle (1994). Understanding ‘interference’ was the key to that study, and there was a chapter on how attempts by astronomers to combat radio interference by seeking to control certain activities led to a distinct geography, zones of difference centred on the telescope. The pictures show the zones caused by denying radio frequency use to other parties, and zones caused by planning regulations.


 

These restrictions led to small, but discoverable, changes to land use. There was therefore a landscape of interference.

‘Landscape of jamming’ would develop this geographical study. Jamming is the deliberate causing of radio interference, usually, but not always, as a military countermeasure.

A very incomplete list of possible case studies for a study of the landscapes of jamming:

  • Orfordness, where the American surveillance radar Cobra Mist was closed in the 1970s due to interference. The site for Cobra Mist was then used for BBC World Service radio, so there are civil and military aspects here
  • Cyprus. An important Cold War base for UK electronic surveillance and countermeasures, with its own severe military geography
  • The ‘carcinotron’, the cancer of radars, a device for powerful broad-band radio transmission, that in the 1950s and early 1960s was seen as a severe threat to air defences.

Where else?

What analytical concepts might help?

 

 

 

1/2 idea No. 11: Scale

By Jon Agar, on 28 July 2021

(I am sharing my possible research ideas, see my tweet here. Most of them remain only 1/2 or 1/4 ideas, so if any of them seem particularly promising or interesting let me know @jon_agar or jonathan.agar@ucl.ac.uk!)

 

Thinking about scale offers a way of reinvigorating our subject.

(My colleagues and I, at STS, UCL, have been reflecting on the topic for a while, starting with a workshop back in 2015.)

Science and Technology Studies – our disciplinary junction where history and philosophy of science, studies of science policy and science communication all cross – has already paid a lot of attention to things that scale, but without ever placing scale at the centre of analysis. History of technology has gravitated to descriptions of scale – think for example of the Tensions of Europe project – without wondering why.

Think for example of:

– How laboratories act as levers between controlled microcosm and problematic macrocosm
– How microscopes and telescopes are used in the making of human-scale representations of the very small and very large
– How all instruments record, intervene and manipulate at different scales
– How units are made, and made to travel, in order to extend knowledge
– Of photographs, maps, and games, each containing the large in the small, or vice versa
– Of experiments and models
– How big science differs from table-top experiment
– Of historiographical framing at levels of nation, globe, locality, city, region or person, and, increasingly, of trans-movements across them, such as transnational studies of science or scaling from region to globe
– Of microphysics and macromolecules
– And so on

Put together, that’s much of the content of the journals of STS.

But we don’t reflect on what they might have in common. We don’t, for instance, even have a term of art, an analytical label, for all these things that scale.

We also might note the extraordinary reach that modern science and technology has achieved.

A scientist at a desk in Pasadena can make a few changes to lines of code as represented on the screen before her – a human-scale technology. Running the code makes electrons move through logic gates in the semiconductor substrate, activating signals to pass via wires and then, oscillating through the transmission aerial of a Deep Space Network station, producing electromagnetic waves that move outwards until, 18 hours later, way beyond the orbit of Neptune, electrons are nudged within the Voyager 1 spacecraft. A returning signal produces new data on another screen in Pasadena. This is the reach of modern science and the scales of intervention of modern technology: intervening and representing at human scales but also at what the philosopher Alfred Nordmann calls the “uncanny” scales of the very very small and the very very large.

So, to summarise the argument so far: we already talk about scaling things without placing scale at the centre; we don’t have a term of art; and moving between scales of distance has been a distinctive achievement of modern science and technology. Two more quick observations before I move to the key proposition. First, scales are not givens; they are co-constructed with technologies. A nanometre doesn’t exist until a nanoscale means of measurement is articulated. Second, distance is not the only scale. Temperature is a scale, speed of rotation has a scale, credit scores have a scale, luminosity has a scale, force has a scale. Again, each is co-constructed with means of measurement, intervention, standards and units (as our field knows).

This intervening power is the nature, the essence if you will, of technology. It’s present in the sublime of planetary science but also in the mundane of changing gear while driving a car, or even brushing your teeth.

Let’s look at those two cases. Automobiles are means of going further than one can walk. But they operate by taking small interventions at human scales, such as turning a steering wheel or pushing a gear stick, and, as a nested set of technologies, translating these interventions, through mechanical and electrical scaling devices, into movement across road-scale spaces. Or think about what a toothbrush really is. The bits stuck in your teeth are too small, the gaps between teeth too narrow, for the human scale of fingernails, say, to dislodge them. A toothbrush translates human-scale motion, via a levering scale effect, into microscale movements of brush bristles, which clean the teeth.

Technologies simultaneously imagine and substantiate scales. There is the opening here for a ‘new thinking about technology’ (a complement or challenge to interpretative sociology of technology, and a restart for philosophy of technology).

Technologies, under the instrumentalist definition (and I explain why I prefer the instrumentalist definition over the alternative cultural definitions here, in my review of Eric Schatzberg’s fine Technology: Critical History of a Concept), can be said to be a designed, material means to an end. But, and here is the proposition which I think is the crucial step, the means is always an intervention in a scale.

If you disagree, tell me a technology that doesn’t. I’ll extend the challenge: tell me a scale that doesn’t involve a technique or technology. Phrased in terms of logic, there is an ‘if and only if’ connection between the two. Usually, when we have two ways of defining a subject that on the surface look different, it is a clue that there is something deeper, more interesting, to be found out.

There’s a programme of work here:

  1. Closely examine case studies of technologies, simple and complex, paying attention to all the scaling interventions at play. Simple cases (such as the toothbrush) help make the argument, but the distinctive achievements of modern science and technology lie in the massively multiple scaling by design we find in complex technologies. Think, for example, of the multiple ways an iPhone acts on scale, whether it is the microscale of an electronic logic circuit, the human scale of representation of data on screen, or the macroscale of calling a friend in another country.
  2. Seriously search for counterexamples. Any responses to the two challenges above would help.
  3. Figure out how important intention and knowledge are in technologies. Ancient metallurgy is an interesting case, since no-one would doubt it was technological, yet the interventions (effecting a change in scale of flexibility, hardness or brittleness, say in the making of a bronze artefact) do not involve knowledge – or do they? – at the level of the intervention. I’m thinking Ian Hacking – with his famous observation of the difference implied when we say we can ‘spray electrons’ – might help here.
  4. Figure out what is strange about the human scale. The idea that technologies are extensions of human organs is an old one. Jacques Ellul cited Henri Bergson’s Two Sources of Morality and Religion: humans have a “disproportionately magnified body, the soul remains as it was, ie too small to fill it and too feeble to direct it… this enlarged body awaits a supplement of soul, the mechanical demands the mystical”. Marshall McLuhan likewise emphasised seeing technologies as extensions of human senses.
  5. Be frank about how this approach engages with our established, second generation sociology of technology (ie the social shaping of technology, which implies the co-shaping of society and technology). The new approach has an implied (perhaps embraced) essentialism which is jarringly at odds with constructivist, and other, sociologies of technology.

 

 

1/2 idea No. 10: History of the Land Registry as part of a history of things we don’t know

By Jon Agar, on 27 July 2021

(I am sharing my possible research ideas, see my tweet here. Most of them remain only 1/2 or 1/4 ideas, so if any of them seem particularly promising or interesting let me know @jon_agar or jonathan.agar@ucl.ac.uk!)

Some things are hard to find out.

With Google Scholar I can find academic papers on almost any subject I choose. With Google Earth I can zoom in on any part of the planet’s surface. Bibliometrics and physical geography are two fields where it’s easy. But one type of knowledge that is hard, or at least very expensive (which amounts to the same thing), to discover is who owns what, especially who owns what land. There’s no such thing as Google Property.

The question of why we don’t know something often has historical answers: there is a past of decisions taken and projects built in certain ways and not others that results in worlds in which some information is available to some people and not to others.

The history of the land, in a country such as Britain, is a case in point. I am deeply impressed by the dedicated historical research that Kevin Cahill began, and that Guy Shrubsole and Anna Powell-Smith have continued, building, in the latter’s words, ‘the most comprehensive public map of land ownership in England: a modern Domesday, if you like’. Who Owns England? is an activist historical project with substance.

I would approach the topic in a different, but complementary, way.

I have long been interested in the history of information, information technologies, and how they have been built and used. My book, The Government Machine (MIT Press, 2003), traced the history of data collection and processing by the British state, whether the data was paperwork, punched card or computerised. The argument was that government has always been a processor of data, and the capacities to govern shape, and are shaped by, changing technologies of information. It was, I joked, seriously, an attempt to put the bureau back into the history of bureaucracy.

In The Government Machine I showed how the state processed information, and generated knowledge, on topics as diverse as the general population, criminals, vehicles, the location of enemy aircraft, accounts, and the pay of soldiers, sailors and civil servants. But I did not study how information on land was held, processed and shared.

The route in would be to study the available records of the history of the Land Registry. HM Land Registry has been in existence since the 1860s, and holds a record of freehold (and any leasehold of over seven years’ duration) land. The records underpin the operation of the market in property, guaranteeing claims to title. It is therefore the most significant database of who owns what land in England and Wales (there are equivalent bodies for Scotland and Northern Ireland). A search for title (including information on ownership) costs money.

Since the 1860s the Land Registry has passed through different technological forms. At each stage there were different opportunities, and therefore decisions taken, that would affect what information was held and how easily it could be shared. These decisions would have political consequences, as decisions shaping the availability of information about who owns what surely must.

It would be also interesting to compare the history of the Land Registry with other cartographic and information-holding institutions, such as the Ordnance Survey and the Hydrographic Office.

I have got as far as browsing some of the records of the Land Registry held at the National Archives (in the TNA/LAR series), but there’s a lot of material and I’m weighing up whether the project justifies the effort spent.

1/2 idea No. 9: What’s your CHOICE?

By Jon Agar, on 27 July 2021

(I am sharing my possible research ideas, see my tweet here. Most of them remain only 1/2 or 1/4 ideas, so if any of them seem particularly promising or interesting let me know @jon_agar or jonathan.agar@ucl.ac.uk!)

 

This one I followed up.

I wanted to find examples of objects that have the highest significance for historical argument.

CHOICE stands for Crucial Historiographical Object in Collections or Exhibitions. I proposed that a CHOICE has two ideal features:

1) a CHOICE object reveals significant, otherwise inaccessible, knowledge about a significant historical narrative.

2) materially, either in total or in part, a CHOICE represents a ‘fork in the road’, a moment of significant historical contingency, revealing how history could have been different.

I described the concept in 2013 and invited suggestions of cases in an earlier blog post here.

It was meant to be provocative, in a productive way, not least to friends and colleagues in the museum world. I wanted examples that could unambiguously justify object-based history, especially in the study of modern periods and subjects for which there are immense documentary archival resources. But it’s fair to say the response was quite chilly. Perhaps the bar was too high. Perhaps CHOICEs don’t exist.

1/2 idea No. 8: Critique of ‘what-if?’ histories/Markov Chains

By Jon Agar, on 27 July 2021

(I am sharing my possible research ideas, see my tweet here. Most of them remain only 1/2 or 1/4 ideas, so if any of them seem particularly promising or interesting let me know @jon_agar or jonathan.agar@ucl.ac.uk!)

This idea came from a state of grumpiness. In particular, it was a response to a growing willingness among historians of science to entertain counterfactual – ‘what if?’ – histories. Examples include Peter Bowler’s Darwin Deleted: Imagining a World without Darwin (2013) and Gregory Radick’s BSHS Presidential Address ‘Experimenting with the scientific past’. Both are actually thoughtful and rather good. Hence I think my reaction was due to unfair grumpiness.

Nevertheless, before I was a historian I was trained in mathematics, and there are both simple and complicated ways of thinking critically about counterfactual histories.

The simple point is that any counterfactual sequence of historical reasoning has to be a sequence of probabilities. One way of picturing this is as a decision tree. At each node there’s a probability of taking a new path. Now take even a small sequence, well within the kinds of sequential narratives we find in history of science, say sixteen nodes. Even if the chance of taking the counterfactual path at each node was 4 times out of 5 – pretty good individual odds, I think – then the chance of the whole sequence occurring would be (4/5) multiplied by itself sixteen times (squaring four times over), which comes to less than 3%. Any realistically long counterfactual sequence results in an outcome that is deeply unlikely.
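To make the arithmetic concrete, here is a minimal sketch in Python (the sixteen nodes and the 4/5 odds are the figures used above; the function name and the longer-chain example are my own illustrations):

    # Probability that an entire counterfactual chain of decisions occurs,
    # assuming each node is an independent branch point with probability p.
    def chain_probability(p, nodes):
        return p ** nodes

    print(chain_probability(4 / 5, 16))  # ~0.028, i.e. under 3%
    print(chain_probability(4 / 5, 32))  # ~0.0008 for a chain twice as long

The compounding is the whole point: the per-node odds barely matter once the chain gets long.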

The complicated point is that there might be fields of mathematics (Markov Chains being one, although I now have reasons to doubt their applicability) which might help model historical processes, ones pictured as chains of probabilities, in perhaps interesting and useful ways. I am well aware that the very thought would make many historians run for the hills.
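Purely to illustrate what a toy model of that kind would involve (the three ‘states’ and their transition probabilities below are invented for the sketch, not drawn from any historical case, and the function is my own), a minimal Markov-chain sampler might look like this:

    import random

    # Toy Markov chain: invented 'historical states' with fixed probabilities
    # of moving between them at each step.
    TRANSITIONS = {
        "A": {"A": 0.6, "B": 0.3, "C": 0.1},
        "B": {"A": 0.2, "B": 0.5, "C": 0.3},
        "C": {"A": 0.1, "B": 0.2, "C": 0.7},
    }

    def sample_trajectory(start, steps, seed=0):
        """Sample one possible sequence of states from the chain."""
        rng = random.Random(seed)
        state, path = start, [start]
        for _ in range(steps):
            state = rng.choices(list(TRANSITIONS[state]),
                                weights=list(TRANSITIONS[state].values()))[0]
            path.append(state)
        return path

    print(sample_trajectory("A", 10))

Whether anything historically meaningful could ever be put into such a transition table is, of course, exactly the doubt expressed above.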