
STS Observatory



the Office of Naval Research and science studies in the 1970s

By Jon Agar, on 16 April 2013

I was in the National Archives at Kew today researching some history of microbiological research and I came across an oddity. It was a paper written by a Martin Blank, issued by the Office of Naval Research’s London branch office in July 1975. Its title is ‘Interdisciplinary approaches to science – bioelectrochemistry and biorheology as new developments in physiology’. There are several things that are odd about it – at least surprising to me.


The paper is ONR-funded science studies – sociology and philosophy of science. The author is a physiologist who wants to know why biology is becoming more ‘interdisciplinary’ and what are the causes of the rise of ‘hybrid disciplines’. He discusses briefly the well known examples of molecular biology and the application of x-ray crystallography to questions of biological structure, but moves on to consider his two more obscure case studies – bioelectrochemistry and biorheology (the science of flow in biological systems).


It starts with an outline of Popper and Kuhn – the usual suspects. But it moves on to cite some authors who have rather dropped out of STS view. For example, there’s a discussion of the geographer RJ Horvath’s notion of the expansion of “machine space” as a master narrative of human history. As machines have expanded they have encroached on “space that is normally occupied or utilised by people”. Horvath had a paper in Geographical Review in 1974 that I’m tempted to chase up.

There are other odd things, not least the fact that the ONR had a London branch office. Who knew? I hadn’t heard of it before, and I wonder what else it did. I’m guessing that its brief was simply to channel ONR-funded research findings to a UK audience. Perhaps it funded more sociology and philosophy of science?

Finally, it’s very curious that the paper crops up in the file it’s in (CAB 184 285). It’s being read in the middle of a big discussion about the military withdrawal (or at least relocation) from biological warfare research, and the consequent need to find a role for the Microbiological Research Establishment at Porton Down. In the mix too is the question about how to respond to genetic engineering. Brian Balmer and I are writing a paper on this topic, which is why I came across it. I’ve no idea, yet, if US Cold War-funded science studies is going to be part of the bigger story…

Here are the first few pages of Blank’s ONR paper:








Boxing Clever: Heinz Wolff and the Storage Theory of Civilisation

By Jon Agar, on 5 July 2012

I recently appeared on Resonance FM’s programme The Thread, talking about the history of electrical storage. It was a fun conversation with others including modern literature prof Steven Connor, historian of medicine (and gin) Richard Barnett and Imperial College energy policy expert Philipp Gruenewald. You can hear the programme here.

One story I told was of one of my favourite theories of society and civilisation: Heinz Wolff’s argument ‘Society, storage and stability’. It’s rather obscure. In fact, according to google scholar, Wolff’s paper, which appeared in a book called Science and Social Responsibility, edited by Maurice Goldsmith and published in 1975 by the Science Policy Foundation, has never been cited. It would fail the research councils’ “Impact” test spectacularly. But it’s well worth retelling.

Before we start, let us get our Heinz Wolffs straight. It’s confession time! In the programme I got them the wrong way around, and it has taken a little research to sort things out.

In the 1970s there were two of them.

First there is Heinz Wolff, a psychotherapist at the Maudsley, an enormous psychiatric hospital in Denmark Hill. He also taught at University College Hospital. His biography is interesting. (The sources are an obituary here and a pair of interviews with Sidney Bloch that appeared in The Psychiatrist here and here.)

Then, on the other side of London, there is the second Heinz Wolff, the bioengineer, now at Brunel, and later famous for his thick accent and avuncular appearances on TV, especially as the presenter and impresario on the Great Egg Race, in which enthusiastic engineers built Heath Robinson contraptions to transport eggs. That was what science programming used to be like.

The 1975 paper on society, storage and stability is by the egg man not the head doctor. Confusingly, Heinz Wolff the bioengineer was also working in the medical sector. He was funded by the Medical Research Council at the Clinical Research Centre in Harrow, designing monitoring equipment for hospital patients.

At first glance the storage paper looks disconnected from his bioengineering career. However, its ideas and origin make sense in terms of its context. I have argued elsewhere – including here and here – that the ‘long 1960s’ (from the late 1950s to the mid-1970s) were a period of transition for science in society. In particular, the period was marked by a distinct turning inwards: a new critical awareness of the place and roles of science, expertise and authority more generally. The sociology of scientific knowledge, the critical science of science, is one phenomenon of this turn. Another is the radical science movement marked by the activity of groups such as Science for the People.

But another was a more Establishment response that entailed worrying about some of the same issues that fired up the radicals but coming to different conclusions. Maurice Goldsmith’s Science Policy Foundation was one of these responses. It was based in Benjamin Franklin House, near the City. Honorary fellows included Julian Huxley and Lewis Mumford. It was advised by some of the great and the good, including Peter Medawar, Hermann Bondi, Asa Briggs, John Kendrew, Derek de Solla Price, Lord Snow and Alvin Weinberg. Hot postwar specialties (molecular biology, cosmology), meet critical Big Science (Price, Weinberg) and the Two Cultures (Snow).

1973 was a year of strikes, IRA bombs in London and the Cod War. In October 1973, Maurice Goldsmith gathered friends and sympathisers, along with an eclectic bunch of others, including the ex-minister of technology Tony Benn, now in opposition, to discuss science and ‘social responsibility’.

Heinz Wolff spoke, I think, towards the end of the conference. What worried him was the vulnerability of complex societies to the disruption caused by an ‘undesirable’ minority. Just as in his body ‘a very large proportion of my internal housekeeping is reduced to immunology … merely to cope with the invasion of what appears to be quite trivial numbers and masses of interfering organisms’, so the ‘more complex society becomes the more vulnerable it becomes to interference by a small number of its members’. Despite the immunological metaphor of invasion, it is clear that what Wolff is referring to here is strikers, the people who can withdraw their labour and derange the system. So, for example, he wonders out loud:

We could try to control the aberrant minority by having very draconian methods of law enforcement. We could, for instance, forbid people in certain sensitive positions to strike, and if they showed any signs of doing so we could threaten to shoot them, or just shoot them. But this, in the kind of society in which we are living, is not permissible. We must, therefore, look for different methods of ordering our society.

At this point, Wolff invites us to think outside the box, or, rather, to have more boxes:

The society I would like to see is one which involves the concept of storage, because storage and stability are almost the same thing. In the same way as a large capacitor is used in the power supply, as a storage device to smooth out variations and give stability to the circuit, so storage in the society sense is also a smoothing capacitor.

Implementing this idea would mean a wholesale change in how we think about and design our technological infrastructure. Technological systems needed to be redesigned so that storage was widely expanded and widely distributed. There was a distinct Small is Beautiful aspect to Wolff’s proposals here. The idea is that if all ‘small communities’ were able to store things better, whether those things were energy, water, food, and so on, then collectively society would be more robust. He gives one example (which has recent resonances):

Some years ago in London we had a strike of 700 drivers of the tankers which deliver petrol to local garages, and London more or less ground to a halt. We had a city of 12 million people apparently at the mercy of the activities of 700 people.

So far, so reactionary.

What’s interesting about the argument is that Wolff goes further, extending it into one encompassing all of human history. It becomes a Storage Theory of Civilisation. The important first steps in social and cultural development were not learning to hunt on the plains of Africa, but what happened next: what to do with all that rapidly rotting meat. In Wolff’s words:

When primitive man first roamed the earth he had no security, as such, because he had to find the food which he wanted to eat every day by hunting for it, and in consequence he developed no civilisation as such. He had no art and no culture. He then learned to store things. He was able, therefore, to decouple himself from the variability of nature, and he decoupled himself not only from the variability of food supply but also the variability of the weather. When he got cold, he stored heat in some way, either by lighting a fire or by wearing clothes or living in caves. So he increased the time constant over which he was able to operate independently from the inputs which nature provided for him.

Civilisation progressed as the means of storage became more sophisticated. The Golden Age was perhaps in the 16th century when a big household, purchasing its supplies from an annual fair, could store nearly all the resources necessary to live well and to live independently. But in recent times dependence on others for quick supply – a dependence that was chosen by foregoing the means of storage – had meant that society was increasingly unstable. The vulnerabilities revealed by the petrol strike ‘could not have happened in the 16th, 17th or 18th-centuries because people did not have this degree of continuous interdependence on each other’.

Wolff advocated a return to the ‘technological village, but a technological village with a very high storage capacity’. Build houses with big tanks for water and fuel. Pool the small town’s sewage and use the biofuel to run the ‘domestic bus services’. There was also room for ‘community greenhouses’, factories that hoarded spare parts, and even ‘sealed nuclear reactors’ just in case. Encourage ‘do-it-yourself’ and self-sufficiency. (An aside: the self-sufficiency sitcom The Good Life was being commissioned as Wolff spoke.) But, most importantly, increase and distribute ‘storage’.

Conserve and survive.



Turing, Working Worlds and the Universal (Government) Machine

By Jon Agar, on 15 June 2012

(Paper for ACE 2012 – Turing’s 100th Birthday Party at King’s College, 15-16 June 2012)

One of the tasks of historians of science and technology is to try and understand ideas, practices and devices in relation to the wider picture of historical change. So, to take one example, we are intensely interested not only in the development of Darwin’s theory of natural selection but we also want to know its relationship to, say, Malthus’s pessimistic portrait of people and resources and in turn to the social controversies of their time. Our field’s best work – such as, in this case, the biographies written by Adrian Desmond and Jim Moore, and by Janet Browne – substantiates these links, revealing a Darwin who travelled through nineteenth-century spaces, geographical and intellectual, pulling together insights to reconceptualise his world.

When talking about the big ideas and changes of the twentieth century, the English mathematician Alan Turing is an attractive figure, not least because he is one of the few names that sparks immediate popular recognition and reaction. He, too, conjured ideas – the universal machine, machine intelligence – of profound consequence. As historians we want to ask, too, how he moved through the landscape, pulling together insights and remaking his world.

I’ve recently completed a book called Science in the Twentieth Century and Beyond (Polity 2012). It’s a survey, and it offers a conceptual tool to help us talk about the relationship of science to its wider settings. I argue that sciences solve the problems of “working worlds”, and also draw inspiration from them in other ways. Working worlds are arenas of human projects that generate problems. Our lives, as were those of our ancestors, have been organised by our orientation towards working worlds. Working worlds can be distinguished and described, although they also overlap considerably. One set of working worlds are given structure and identity by the projects to build technological systems – there are working worlds of transport, electrical power and light, communication, agriculture, computer systems of various scales and types. The preparation, mobilisation and maintenance of fighting forces form another working world of sometimes overwhelming importance for twentieth-century science. The two other contenders to the status of most significant working world for science have been civil administration and the maintenance of the human body, in sickness and in health. It is my contention that we can make sense of modern science once we see the sciences as structured by working worlds. The historian’s task can be rephrased as the revelation and documentation of these ties, describing science’s relations to working worlds.

Here I’ll give a worked case study. I think we can understand important aspects of Alan Turing’s achievements in relation to at least two working worlds. The first concerns the sources of inspiration for Turing’s universal machine. I argue, and here I draw on my other book, The Government Machine (MIT Press, 2003), that the universal machine can be examined in the light of models of civil administration. I will deal with that case at length. But I will also note here my second working world. I can interpret the changing organisation of Bletchley Park as a response to problems generated by the working world of warfare. The problem thrown into relief by the working world of 1930s warfare was speed – how to recognise and respond to the incoming bomber, how to complete novel game-changing weapons in time, and how to decrypt coded messages fast enough to be of use. The problem of speed shaped the organisation of work on radar, the atomic bomb and codebreaking. I argue in The Government Machine that Bletchley Park had to be transformed into an industrialised, almost Ford-style enterprise, with a finely arranged division of labour, very high staff numbers, an emphasis on throughput, and innovative mechanisation at bottlenecks. Turing helped push this process. It was an industrialisation of symbol manipulation. A step further took place in radar where, again in response to the problem of speed, the organisation was reframed as an “information system”, the first modern use of this term, as far as I am aware.

However, let me now give a more detailed account of Turing in relation to the first working world.


The 1936 Paper and the Universal Machine

Both Babbage’s Analytical Engine and Turing’s description of a ‘universal computing machine’ have been claimed as computers before their time. Historians are uneasy about such claims, and have, rightly, warned against the sin of retrospective judgment. The Analytical Engine and Turing’s Universal Machine were devices of their own contexts, not forecasts of later developments. But it is not retrospective to assert that both have features – important, similar features – that stem from similarities of context. In The Government Machine (2003, pp. 39-44) I argue that we should take Babbage at his word when he described the power and mechanism of the Analytical Engine as a machine for gaining control over a ‘legislative’ and ‘executive’, that is to say he interpreted mechanical computation using the language of political philosophy. Turing’s theoretical Universal Machine should also be read as inscribed with political references. If the Analytical Engine, the Universal Machine and the computer are similar it is because they were imagined in a world in which a particular bureaucratic form – an arrangement of government – was profoundly embedded.


As a boy, Alan Mathison Turing was immersed intermittently in Civil Service culture. He was conceived in Chatrapur, near Madras, where his father, Julius Mathison Turing, was employed in the Indian Civil Service. He was born in London in 1912, after which his mother stayed in England while his father travelled back to the subcontinent. His mother rejoined Julius, leaving Alan in the hands of guardians. This pattern, in which the family was separated and reunited, marked Alan’s early life. By 1921, dedication to a civil service career had made Julius Secretary to the Government Development Department of Madras (Hodges, 1983, pp. 7-10). Even though Julius was a distant father, his employment provides one direct source for Alan’s knowledge of clerical work.


Our best account of Turing’s life and work is the biography by Andrew Hodges. He traces Turing’s long interest in machines and the mind to the traumatic experience of losing a very close friend, Christopher Morcom, to bovine tuberculosis in 1930, when both were attending Sherborne School and preparing for entry to Cambridge University. The hope, prompted by intense remorse, that Morcom’s mind might linger after death, expressed in a paper on the ‘Nature of Spirit’ written for Christopher’s mother, was an early exploration of the relationship of mind and thought in a material world that recurred in Turing’s classic works such as ‘Computing Machinery and Intelligence’ (1950). Hodges’ thesis is convincing. What I add here is an emphasis, hinted at but not developed in Hodges, on a key resource available to Turing.


While a fellow of King’s College, Cambridge, in the mid-1930s, Turing had begun to attack the decidability problem – the Entscheidungsproblem – set by David Hilbert. The celebrated Göttingen mathematician had pinpointed several outstanding questions, the solution of which would, he hoped, place mathematics on a sound foundation. In particular, Hilbert hoped that mathematics could be proved to be complete, consistent and decidable. It would be complete if every mathematical statement could be shown to be either true or false; consistent if no false statement could be reached by a valid proof starting from axioms; and decidable if there could be shown to be a definite method by which a decision could be reached, for each statement, whether it was true or false. But to Hilbert’s chagrin, the Austrian mathematician Kurt Gödel demonstrated in 1930 that arithmetic, and therefore mathematics, must be incomplete (or inconsistent). Gödel constructed examples of well-formulated mathematical statements that could not be shown to be either true or false. Starting with any set of axioms, there always existed more mathematics that could not be reached by deduction. This was an utterly shattering conclusion, an intellectual high-point of the twentieth century.


There remained the possibility that mathematics could still be kept respectable: perhaps, even if there existed statements that could not be proved true or false, there might still exist a method which would show (without proof) which were true and which were false – the issue of decidability. If mathematics was decidable but incomplete, then the troublesome parts could still be cut out or contained. This was the problem that fired Turing’s imagination in the summer of 1935. What is remarkable about his solution, written during a sojourn at Princeton, and published in the Proceedings of the London Mathematical Society in 1937, is that Turing not only answered the decidability question (with a ‘no’), but in doing so presented the theoretical Universal Computing Machine. (The paper was received on 28th May 1936 and read on 12th November 1936, and so it will be referred to as the ‘1936 paper’.)


The inspiration seems to have come from Turing’s mentor at Cambridge, Max Newman, who wondered aloud whether the Hilbert problems could be attacked by a ‘mechanical’ process (Hodges 1983, pp. 93-96). By ‘mechanical’ Newman meant ‘routine’, a process that could be followed without imagination or thought. The start of Turing’s insight was to wilfully allow a slippage of meaning, and treat ‘mechanical’ to mean done by machine. Turing defined a ‘computing machine’: ‘supplied with a “tape” (the analogue of paper) running through it, and divided into sections (called “squares”) each capable of bearing a “symbol”’ (Turing 1937, p. 231). The computing machine could scan a symbol and move up and down the tape, one square at a time, replacing or erasing symbols. The possible behaviour of a computing machine was determined by the state the machine was in and the symbol being read. He argued that such machines, differing only by their initial m-configuration, could start with blank tape and generate numbers of a class he called ‘computable’. Although the route to answering Hilbert’s question from there is interesting, it is rather involved and beside the point for this paper. What matters is Turing’s description of a ‘universal computing machine’, which could imitate the action of any single computing machine.
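The machine Turing described is simple enough to sketch in a few lines of modern code. The following is my own illustrative rendering, not anything Turing wrote: the function name, the dictionary representation of the transition table, and the state labels are all my choices. The example table is modelled on the first machine in the 1936 paper, which prints 0 and 1 on alternating squares of an initially blank tape:

```python
# A minimal sketch of a Turing "computing machine" (illustrative only;
# Turing's 1936 notation uses m-configurations and behaviour tables).
from collections import defaultdict

def run(rules, state, steps):
    """Run a machine whose rules map (state, scanned symbol) to
    (symbol to write, move 'L' or 'R', next state)."""
    tape = defaultdict(lambda: ' ')   # unbounded blank tape
    pos = 0
    for _ in range(steps):
        write, move, state = rules[(state, tape[pos])]
        tape[pos] = write             # replace (or erase) the symbol
        pos += 1 if move == 'R' else -1
    return ''.join(tape[i] for i in sorted(tape)).rstrip()

# Modelled on Turing's first example: print 0 and 1 on alternate squares.
rules = {
    ('b', ' '): ('0', 'R', 'c'),   # print 0, move right
    ('c', ' '): (' ', 'R', 'e'),   # skip a square
    ('e', ' '): ('1', 'R', 'k'),   # print 1, move right
    ('k', ' '): (' ', 'R', 'b'),   # skip a square, start again
}
print(run(rules, 'b', 8))  # prints '0 1 0 1'
```

The point to notice is how little the machine itself contains: all the "intelligence" lives in the table of routine instructions, followed one square at a time without thought.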


To justify his definition of “computable” numbers, Turing had to show that they encompassed ‘all numbers which would naturally be regarded as computable’, that is to say all numbers expressible by a human computer – ‘computer’ most commonly referred to a human, not a machine, before the Second World War (Campbell-Kelly and Aspray, 1996, p.9; Light, 1999; Grier, 2005). Crucially, to make his case, Turing conjures up two types of human computer. They appear in an important bridging section in the logical structure of ‘On computable numbers with an application to the Entscheidungsproblem’, between the demonstration of the existence and restrictions of a universal computing machine, and its application to Hilbert’s problem. In the first type, much of the information of how to proceed was contained in many ‘states of mind’, equivalent to many m-configurations of a machine. This was a model of a generalist: work proceeds by the manipulation of symbols on paper, but with the emphasis on the managerial flexibility contained in the large number of states of mind. This interpretation is justified when we examine Turing’s second type:


We suppose, as in [the first type], that the computation is carried out on a tape; but we avoid introducing the “state of mind” by considering a more physical and definite counterpart of it. It is always possible for the computer to break off from his work, to go away and forget all about it, and later to come back and go on with it. If he does this he must leave a note of instructions (written in some standard form) explaining how the work is to be continued (Turing 1937, p. 253).

It is at this point that we might wonder what might have prompted Turing to make the step of creatively reinterpreting Newman’s suggestion of looking for a “mechanical” method. What sort of machine is mechanical in this way, and would be known to Turing? Historically-minded commentators have scratched around this question, but their suggestions – that Turing machines are perhaps akin to ticker-tape or teletype machines – are unconvincing. Hodges (1983, p. 109) has written: ‘His “machine” had no obvious model in anything that existed in 1936, except in general terms of the new electrical industries, with their teleprinters, television “scanning”, and automatic telephone exchange connections. It was his own invention’. I think, however, there is one clear and compelling candidate for Turing’s mechanical inspiration: the civil service.


The Civil Service as a Machine

Machines are means for building order in the world (Winner, 1977, 1980). Governments, too, claim this role. This overlap helps explain why components of government have been likened to machines. Nevertheless, the choice of metaphor, and its application, target and reception, has varied according to time and place. Otto Mayr, in a book completed when he was director general of the Deutsches Museum, but in fact a culmination of a life’s work exploring the topic, set out the early modern history of the interplay between real and metaphorical machines as models for governance. In Authority, Liberty and Automatic Machinery in Early Modern Europe (1986) he argues that discussion of the workings of authoritarian modes of governance correlated with the use of clockwork metaphor, while liberal concepts of order were exemplified by appeal to self-correcting mechanisms such as the balance. Crucially, there was interplay between metaphorical and real machines: the presence of real exemplars provided a rhetorical resource, while rhetorical popularity of certain types of mechanism might have encouraged their use. Mayr (1986, but see also 1981) argued that British engineers’ precocious development of self-governing mechanisms, such as steam governors, went hand in hand with the exploration of self-regulation in liberal economic and political thought.


In the mid-19th century, the steam engine provided just such a rhetorical resource to commentators such as the editor of the Economist, Walter Bagehot. In The English Constitution, a much-read 1867 analysis of the British political system, Bagehot divided political institutions into two camps, the ‘dignified’ parts, which ‘excite and preserve the reverence of the population’, and the ‘efficient’ ones by which the state ‘in fact, works and rules’. The pomp and ceremony of the monarch opening Parliament was an example of the ‘dignified’ portion; the discreet bureaucracy exemplified the ‘efficient’. The whole was a ‘machine’. And indeed it was an example of the most celebrated machine in Victorian culture: the steam engine. It had a ‘regulator’, a ‘safety valve’, and coal in the form of the ‘potential energy’ stored in the monarch herself. By drawing on the metaphor of the state-as-steam engine, Bagehot was able to argue for the essential role of monarchy: indeed it was the source of power for the whole apparatus of government.


The appeal to specific machines in descriptions of government is in fact of less significance than the more general phenomenon of casting government, and in particular the Civil Service, as machine-like. In The Government Machine I argued that this construction was intimately tied to the aims and achievements of 19th century reform movements. One effect of administrative reform was to resolve issues of trust and reliability: from administrative action that was underwritten by trust in the gentlemanly status of the individual civil servant to trust that resided in the operation of a system. Reform was piecemeal; nevertheless the submission of the Northcote-Trevelyan report to both Houses of Parliament in 1854 marks a turning point. While its direct impact on administrative reform was patchy, it is of great interest here for its proposed formalisation of the division of labour, a discursive distinction separating generalist “intellectual” work from that of “mechanicals”. The language would prove to be very influential.


Charles Edward Trevelyan brought to the task direct experience of the stresses induced in large-scale 19th century administration from his 1830s project, planned with his brother-in-law Lord Macaulay, of Indian Civil Service reform, as well as Irish famine relief in the mid-1840s. The “Irish business” had stretched the civil servants of the Treasury to breaking point. The problem, Trevelyan diagnosed, was that civil servants were not interchangeable: since actions were underwritten by an individual’s word, there was a “degree of precariousness in the transaction of public business which ought not to exist” (Minutes of Evidence of the Select Committee on Miscellaneous Expenditure, Parliamentary Papers, 1847-1848, 18, p.151). Three of Trevelyan’s predecessors had broken under the strain (Cohen 1941, p. 88). Trevelyan’s solution, the core of the Northcote-Trevelyan report (1854), was for entry to the upper ranks of the Civil Service to be judged systematically by examination, while the burgeoning ungentlemanly work of “copying, registering, posting accounts, keeping diaries, and so forth” would be done by interchangeable ‘supplementary clerks’, the “mechanicals”. A ‘proper distinction between intellectual and mechanical labour’ would be the founding principle of the new system. If in the natural world Darwin would replace the patronage of God with the competitive examination between species, so in the administrative world the generalists would be picked not by corruptible patronage but by selection of the fittest. The rest, the “mechanical” clerks, would follow instructions, routinely and without thought.


Crucially, and oddly given the inclusion of gentlemen generalists at the top, the whole could – and increasingly was – cast as a machine. Indeed, the Civil Service was, by the end of the 19th century, a general-purpose machine. Three sources of pressure reinforced this mechanical discourse (Agar 2003, p. 65). First, a distinction could be firmly drawn between politicians (as operators of the machine) and a supposedly interest-free, neutral Civil Service that could operate identically under both Liberal and Conservative governments. The Civil Service, discursively a machine, once set in motion would follow a predictable, reliable, and discreet path. Second, the Civil Service was labelled as a machine because, as the state grew, people were employed whom the gentlemanly elite could not automatically trust: lower-class clerks and women. Trust in the upper echelons was secured by the appeal to honourable secrecy and gentlemanly discretion; casting the “mechanical” groups as components of the machine helped resolve issues of trust by extending to the lower echelons a metaphorical reliability. Finally, labelling the Civil Service a machine appealed to a growing technocratic element in British government, not expected or even foreseen by the proponents of the Northcote-Trevelyan settlement. In particular, the metaphorical language of the government machine was wilfully and creatively reinterpreted by an expert movement of mechanisers, which gained influence in the First World War and grew to a peak of influence after the Second. The resulting social history of the mechanisation (and later computerisation) of administrative work in the 20th-century British civil service is told in detail in The Government Machine.



Back to Turing

At the heart of Turing’s 1936 paper we find this same social relationship: the generalist-mechanical split, with the generalist leaving the office and ensuring that the mechanical clerk will be trusted to follow the routine instructions. The ‘state of progress of the computation at any stage is completely determined by the note of instructions and the symbols on the tape’, he wrote (1937, pp. 253-254). Turing’s point is that such work is equivalent to the actions of a computing machine (in which case both generalist and mechanical would be part of the machine), and, in particular, that any such work would be replicable by a universal computing machine. Hodges (1983, p. 109) notes that ‘Alan had proved that there was no “miraculous machine” that could solve all mathematical problems, but in the process he had discovered something almost equally miraculous, the idea of a machine that could take over the work of any machine. And he had argued that anything performed by a human computer could be done by a machine’. It helps us understand the seemingly miraculous, if we remember that government – especially the civil service – had previously been constructed as a machine capable of general-purpose action. My claim is that the civil service model of generalists and mechanicals, and therefore the working world of civil administration, framed Turing’s imagination of the Universal Computing Machine.


I do not think we should be surprised that Turing’s figure of a human computer is positively bureaucratic, not only in its attention to instruction-following and the manipulation of symbols on paper, but also in its mobilisation of the generalist-mechanical split. If he knew anything about what his father did at work, then the pattern would have been a resource at hand to think by. In fact there is no need to speculate about the exact train of influence, since Turing’s post-war project to literally build a stored-program computer within the British civil service provides further evidence to support my claim.


In early 1946, Turing’s proposal for a “Proposed electronic calculator”, written between October and December 1945 (Copeland 2000, 2005), was circulating in Whitehall. This project, soon called the Automatic Computing Engine (ACE, note the nod to Babbage in “Engine”), was approved and work began, with Turing on the staff, at the National Physical Laboratory (NPL). The NPL was part of the civil service, best known for its metrological work. While the project suffered from obstacles, delays and interruptions, prompting Turing to return to academia, a simplified version, the Pilot ACE, ran its first programs in May 1950 (Copeland, 2005). Turing’s proposal opens with a discussion of speed. In particular, he notes that historically calculation was only partly mechanised: ‘Calculating machinery in the past has been designed to carry out accurately and moderately quickly small parts of calculations which frequently recur’ (Turing 1946, p. 2). Now, ‘instead of repeatedly using human labour for taking material out of the machine and putting it back at the appropriate moment’, the materialisation of the Universal Computing Machine, ‘all this will be looked after by the machine itself’. (It is worth recalling here the words of Colonel Partridge, an early mechaniser of the Civil Service: the ‘aim of every alert organisation’ should be the replacement of human by mechanical labour.) A detailed description of components of the machine follows. But what is the ‘this’ that will be ‘looked after by the machine itself’? What, precisely, will this new machine do? In Turing’s own words, the ‘Scope of the Machine’ could be stated:


The class of problems capable of solution by the machine can be defined fairly specifically. They are those problems which can be solved by human clerical labour, working to fixed rules, and without understanding (Turing 1946, p. 14).


If, as I suggest, Turing’s theoretical outline of the Universal Machine in his 1936 paper was framed by his understanding of generalist-mechanical relations, then here, in Turing’s proposal for a real machine, we can see that its capacities, too, were coterminous with clerical labour.



Government departments were closely involved in the first experimental stored-program computers of the late 1940s, as patrons (the Ministry of Supply provided funds for computers necessary for atomic weapons research), sites (the Department of Scientific and Industrial Research’s National Physical Laboratory housed Turing’s ACE project), and as forums for discussion. As the name of one of these – the Brunt Committee on High-speed Calculating Machines – suggests, early computers were mostly considered narrowly as mathematical aids. Turing gave a list of problems capable of solution by the ACE. Most are numerical, but at least one is administrative (‘To count the number of butchers due to be demobilised in June 1946 from cards prepared from the army records’), although Turing wrote that this work would be more efficiently done by Hollerith punched-card techniques. This reminds us that the computer qua universal machine was not a historical inevitability but instead a category that had to be recognised, articulated and accepted. That work was a historical process, not a single event, and drew on, as one of its sources, Turing’s arguments and language.


Therefore, while there was a sense of looking into a mirror when civil servants confronted stored-program computers in the 1950s, there was no immediate recognition of the reflection, nor should there have been. Indeed the mechanisers, an expert movement based in the Treasury, were keen promoters of all kinds of mechanical methods, including punched-card machines, as their interest in electronic computing as aids to administration began to grow. The punched-card work involved the production of explicit series of instructions, telling the machine operator at each stage what to punch and what to check. It was not a big step from such “programmes” for humans and machines of the earlier type to “programs” for electronic computers. The Treasury mechanisers learned much from the experimental tribulations at the NPL as well as from the more straightforward success at Lyons & Co, the tea shop business where electronic computers called LEO (Lyons Electronic Office) were built in the early 1950s for business data processing and calculation. ‘Programs can be prepared’, concluded one LEO manager in a report to the Brunt committee, ‘for many clerical jobs to be carried out by automatic calculators’ (Agar 2003, p. 304). Edward Newman of the ACE team at NPL went further:


it seems possible that in due course computers will do the country’s routine clerical work, most of the work in fact of a deductive character…When used for suitable purposes, and in particular for processes which are essentially serial, some automatic computers are very fast, up to 100,000 times as fast as man. Their potential power is thus very great (Newman 1953).

Indeed, he went on:

it is unlikely that there will ever be any great reduction in the time needed for programming machines, since the organisation of a complex job whether it is done by human clerks, by punched cards, or by high-speed computers is bound to be a long business, and a programme is only a coded form of this organisation.


The grasping of Newman’s point – a programme is only a coded form of organisation – was a pivotal moment of self-awareness: if bureaucracy was the original rule-based, general-purpose machine, then this moment, in the British context, was when the civil servants saw past the unfamiliar technical guise and recognised their own mirror image. An ambitious programme of computerisation followed in the late 1950s and 1960s which continued until technical expertise was outsourced in the 1970s and 1980s.

And then we have the wider world. It used to be said that you were never more than 15 feet away from a rat. It is true to say that we are now never more than a short distance from a universal machine, which is now miniaturised and embedded. There are computers on our desks, in our phones and in our cars. If you follow my argument you can see this as a dispersal and materialisation of a specific social relationship, that between the generalist and the mechanical civil servant. It is an extraordinary decentralisation and multiplication of what was once state power.



Jon Agar, The Government Machine: a Revolutionary History of the Computer, Cambridge, MA: MIT Press, 2003

Jon Agar, ‘Mechanical metaphor, mechanization, and the modern British civil service’, Jahrbuch für europäische Verwaltungsgeschichte 20 (2008), pp. 119-138.

Jon Agar, Science in the Twentieth Century and Beyond, Cambridge: Polity Press, 2012.

Walter Bagehot, The English Constitution, London, 1867

Martin Campbell-Kelly and William Aspray, Computer: a History of the Information Machine, New York, 1996.

Emmeline W. Cohen, The Growth of the British Civil Service, 1780-1939, London, 1941

B. Jack Copeland, ‘The Turing Test’, Minds and Machines 10 (2000).

B. Jack Copeland, ‘Origins and development of the ACE project’, in B. Jack Copeland, Alan Turing’s Automatic Computing Engine: the Master Codebreaker’s Struggle to Build the Modern Computer, Oxford, 2005, pp.37-91.

David Alan Grier, When Computers Were Human, Princeton, 2005

Andrew Hodges, Alan Turing: the Enigma of Intelligence, London: Unwin, 1985 (1983).

Jennifer Light, ‘When computers were women’, Technology and Culture 40 (1999), pp.455-483

Otto Mayr, Authority, Liberty and Automatic Machinery in Early Modern Europe, Baltimore, 1986.

Otto Mayr, ‘Adam Smith and the concept of the feedback system: economic thought and technology in 18th century Britain’, Technology and Culture 12 (1971), pp.1-22.

Minutes of Evidence of the Select Committee on Miscellaneous Expenditure, Parliamentary Papers, 1847-1848, 18, p.151.

Edward Newman, ‘The use and future of automatically controlled computers’, O&M Bulletin (April 1953). TNA T 222/1303.

Stafford H. Northcote and Charles E. Trevelyan, The Northcote-Trevelyan Report, reprinted in Public Administration 32 (1954), pp.1-16, originally signed 23 November 1853 and published as House of Commons Parliamentary Paper 1713 in February 1854. Northcote and Trevelyan also reported to the Treasury in 1853 in a paper ‘The Reorganisation of the Permanent Civil Service’, that was published, along with a letter from Benjamin Jowett and criticisms of the paper, in 1855.

Alan M. Turing, ‘On computable numbers, with an application to the Entscheidungsproblem’, Proceedings of the London Mathematical Society, Series 2 42 (1937), pp.230-265.[1]

Alan M. Turing, ‘Proposed electronic calculator, 1946’, TNA DSIR 10/385. Reprinted in a slightly different form in B. Jack Copeland, Alan Turing’s Automatic Computing Engine, Oxford, 2005, pp.370-454.

Langdon Winner, ‘Do artifacts have politics’, Daedalus 109 (1980), pp.121-136.

Langdon Winner, Autonomous Technology: Technics-out-of-control as a theme in political thought, Cambridge, MA, 1977.

(This blog piece is an adaptation of Agar 2008, which in turn drew on Agar 2003 as its prime source.)

Agendas in the Historiography of Science

By Jon Agar, on 8 June 2012

I’ve been reviewing a very good edited collection of the historian of computing Michael Mahoney’s papers. Mahoney died in 2008, but he left behind a series of papers filled with good advice to historians of science. One of his best tips is that we should pay attention to what he calls the “agenda” of a discipline or specialty if we want to understand it.

Here’s Mahoney’s definition and development of the idea:

“In tracing the emergence of a discipline, it is useful to think in terms of its agenda, that is, what practitioners of the discipline agree ought to be done, a consensus concerning the problems of the field, their order of importance or priority, the means of solving them, and perhaps most importantly, what constitute solutions. Becoming a recognized practitioner of a discipline means learning the agenda and then helping to carry it out. Knowing what questions to ask is the mark of a full-fledged practitioner, as is the capacity to distinguish between trivial and profound problems. Whatever specific meaning may attach to ‘profound’, generally it means moving the agenda forward. One acquires standing in the field by solving the problems with high priority, and especially by doing so in a way that extends or reshapes the agenda, or by posing profound problems. The standing of the field may be measured by its capacity to set its own agenda. New disciplines emerge by acquiring that autonomy. Conflicts within a discipline often come down to disagreements over the agenda: what are the really important problems? Irresolvable conflict may lead to new disciplines in the form of separate agendas.

As the Latin root indicates, agendas are about action: what is to be done?”

(Michael Mahoney, ‘Computer science: the search for a mathematical theory’, originally published in Krige and Pestre (eds), Science in the 20th Century, reprinted in Histories of Computing, Harvard University Press, 2011, p. 130)

I thought it worth abstracting this long quotation for two reasons.

First, this seems to be a really good, concrete proposal for what makes a discipline, and who is considered to be a valid practitioner. That’s a useful historiographical tool for historians of science.

Second, we might like to ask, as historians of science, what is our agenda? What do we agree ought to be done? What are the problems of the field? How should we rank them?


The Geekocratic Tendency

By Jon Agar, on 25 May 2012

A new social movement in science is gathering, and it is time to give it a name. It’s a mutation of an older tradition of scientific lobbying, but it has new features and deserves some analysis.

What is it?

Let’s describe its components and features. We would include organisations like Science is Vital, which formed to campaign against cuts in science during the present austerity. There are campaigns, such as Simon Singh’s anti-libel wars against the chiropractors.

There is a cultural wing – we are thinking of the spectrum of mutual regard that spans Ben Goldacre, Brian Cox, Wired magazine in the UK, and the comedians Robin Ince and Tim Minchin. These are the geek-tastic “skeptics”, all with an immense following via social and other media which extends now into real world grassroots events such as Skeptics in the Pub.

But the geekocratic tendency is not just about love of the values of science, or protecting the resources and funds for science, or even securing greater respect for science or worrying about public understanding. None of these features is especially new; indeed we can identify many as having long historical roots tied up with the professionalisation, and popularisation, of science. The novelty is partly a stronger political focus (and especially a fetishisation of evidence-based policy-making), but presented as an ‘outsider’ view while being articulated in fact by well-connected ‘insiders’.

The key text is Mark Henderson’s Geek Manifesto, published this month. It not only serves as a rallying cry for all these groups but also as an attempt to reappropriate the term ‘geek’. Yet, Mark is no DIY science activist. He is Head of Communications at the Wellcome Trust, one of the leviathans of UK science funding. ‘Geek’, as Steve Cross has pointed out, has changed its meaning quite radically.

This social movement has other features. It has its own heroes (teenagers bravely standing up against anti-vivisectionists) and villains (homoeopathists, creationists, politicians who don’t ‘get’ science). It is self-policing – the criticism of the recent ‘death of British science’ campaign is an interesting example. The embarrassment and derision stemmed from the fact that this is a social movement that is much more politically savvy than some of its grassroots.



A social movement needs a name so that it can be tracked, discussed and perhaps supported or criticised. Hauke Riesch has described “science activists”. Proposals kicked around included “grabby geeks”, “science botherers”, the “SciNet” (a la SkyNet of Terminator fame), and the “Geek Establishment”. We like the “Geekocracy”.

So how do we account for this social movement? Is it merely, for example, a manifestation of a network? Certainly social media have provided older science lobby networks with a visibility and an immediacy of communication which is new. Perhaps without social networks the constituency of this social movement would remain local or individual and largely invisible. We in STS@UCL will be watching with interest.


(This post combines the collective thoughts of STS’s SASsy group.)

Probability of nuclear accidents in a country with 19 nuclear reactors

By Thomas Rose, on 28 March 2011

There are a lot of studies on the probability of accidents in a nuclear power plant. As far as I understand they use methods of risk analysis to calculate the failure probability of the nuclear reactor.
Here I tried a very simple empirical approach: we know the number of nuclear power reactors in the world, and we know (probably) the number of severe accidents up to now, so we can calculate the empirical failure probability of a single reactor per year. Thus we are able to calculate the probability that no reactor in the world, in the UK or in another country, will have an accident within the next 5, 10 or 20 years – or that at least one reactor will fail severely. This can be done by using the Poisson distribution.
Up to now there have been at least 4 reactor accidents at INES scale 5 or more. Chernobyl (1986) is the only one at level 7; Three Mile Island (1979) and Windscale (1957) are at level 5. The present Fukushima accident (or accidents?) is also level 5, at least at the moment (27.03.2011). There are some further accidents at level 5 and one at level 6, but these occurred in other nuclear facilities, not in power reactors. One could argue that Windscale was not a civil but a military reactor, but then at Fukushima there is probably more than one reactor involved. So the number of 4 severe accidents seems quite reasonable.
The number of nuclear reactors worldwide increased drastically from 1955 until 1988, from which date the number is nearly constant. Up to the Fukushima accident there were 443 reactors operating worldwide.
By a simple graphical piecewise interpolation of the number of reactors per year, a total of about 15,000 reactor-years can be estimated. This crude number should be sufficient for the present purpose.
So the probability of one severe accident per reactor-year (ry) is

p = 4 accidents / 15,000 ry ≈ 2.7 × 10^-4 per reactor-year.

If there are N reactors in operation, the Poisson distribution gives the probability of x severe accidents within the next y years. In order to apply the Poisson distribution, the expected mean number of accidents m within this time has to be estimated:

m = p · N · y.

Then the probability of having x accidents when we expect a mean value of m accidents is given by

P(x) = (m^x / x!) · e^(-m).

Thus the probability of no accident (x = 0) is

P(0) = e^(-m),

and the probability of at least one accident is

P(≥1) = 1 − e^(-m).

Regarding the worldwide situation for the next 20 years, with 443 reactors we expect an average number of severe accidents of

m = p · 443 · 20 ≈ 2.34,

so about 2.34 accidents within any period of 20 years somewhere in the world. The probability of one or more severe accidents worldwide is then

P(≥1) = 1 − e^(-2.34) ≈ 90%.
How is the situation for a single country? We simply have to count the number of reactors within this country and calculate the respective reactor-years.
                    World      UK      US   Germany
reactors              443      19     104        17
reactor-years        8860     380    2080       340
mean # accidents     2.34   0.100   0.549     0.089
p(≥1)              90.36%   9.55%  42.26%     8.59%
On average, more than 2 accidents are expected worldwide; the probability of at least one accident is 90% worldwide, more than 9% for the UK and more than 40% for the US.
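This estimate is easy to reproduce. The short sketch below applies the same empirical approach – the accident rate per reactor-year from 4 accidents in roughly 15,000 reactor-years, then the Poisson probability of at least one accident in 20 years. The function name is my own; the figures differ very slightly from the table above because the post rounds intermediate values.

```python
import math

def accident_probability(n_reactors, years=20, accidents=4, reactor_years=15_000):
    """Empirical Poisson estimate: expected accidents m and P(at least one)."""
    rate = accidents / reactor_years      # severe accidents per reactor-year
    m = rate * n_reactors * years         # expected accidents m = p * N * y
    return m, 1 - math.exp(-m)            # P(>=1) = 1 - e^(-m)

for name, n in [("World", 443), ("UK", 19), ("US", 104), ("Germany", 17)]:
    m, p = accident_probability(n)
    print(f"{name:8s} m = {m:.3f}  P(>=1) = {p:.1%}")
```

Running this gives roughly 90% for the world, 10% for the UK, 43% for the US and 9% for Germany over 20 years, matching the table to within rounding.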
Do you think these estimations are reasonable? Do you think a 9% probability for a Chernobyl or Fukushima accident in the UK within the next 20 years is acceptable?

I am looking forward to your comments.

Ten Problems in History and Philosophy of Science

By ucrhmpa, on 3 December 2010

The second meeting of the History of Science reading group discussed Peter Galison’s paper “ten problems in history and philosophy of science.” It was a larger group, of philosophers, historians and sociologists: whatever the merits of Galison’s specific proposals, his name and his example certainly bring different disciplinary approaches together.

The ten problems described in the paper are as follows:

Problem 1 – what is context – how does a contextual explanation work?

Problem 2 – purity and fundamentality – what counts, at different times, as a ‘pure’ science?

Problem 3 – historical argumentation – when the focus is on scientific practices, what are the concepts, tools and procedures needed at a given time to construct an acceptable scientific argument?

Problems 4 & 5 Fabricated Fundamentals –

  1. Making things: it is increasingly hard to separate the made from the found

  2. What should we make?

In Galison’s view this seems to be a new change brought about by nanotechnology and genetic modification. We wondered about what the role of historical analysis would be for understanding fabricated fundamentals: would we want to relate genetic modification to earlier examples of animal and plant breeding, or nanotechnology to synthetic chemistry or other techniques of making new natural things? What is the role of the ‘break’, the terrible beauty which is born with new techniques of modification? And what does this have to do with the ethical turn of problem five? Do history and philosophy of science have meaningful things to say about the ethics of artificiality in this sense? Some of us felt it would be hard to base ethical arguments about fabricated fundamentals on an historical basis.

Problem 6 Political Technologies – privacy/ surveillance // what is politics of these new technologies

Problem 7 Locality: what do microhistories tend towards, or add up to?

We recognised this as ‘bar-talk’ for historians and philosophers of science – that very local studies have proliferated and it is sometimes hard to see how to fit them into broader arguments and patterns. Some of us felt this was less of a problem for historians, who are perhaps more willing to contextualise, than for micro-philosophers.

Problem 8 – globality – what aspects of scientific practice simply do not reduce to the local

We wondered what aspects were not captured by local studies: perhaps legal and regulatory aspects, as a broader field of action? And the way in which the claims made about different historical periods fit together, or fail to.

Problem 9 Relentless Historicism – is it possible to write a history and philosophy of science in which the story told is truly historical?

Problem 10: Scientific Doubt.


From Imitation to Invention

By ucrhmpa, on 3 December 2010

The STS Department’s new History of Science reading group has had two meetings so far, and very lively discussions. In the first, we discussed Maxine Berg’s paper “From Imitation to Invention”. Berg’s paper takes the question of ‘imitation’ as part of a consumer-focused and global history of processes of industrialisation. It builds on De Vries’ notion of the ‘industrious revolution’, a reorientation of households towards newly intensified forms of production, and the connection of this to the consumption of new commodities. It is an account of the interrelations between consumerism and production. Berg rescues the idea of imitation as a creative process – not mere copying, but a translation and transformation of materials and processes which she traces across the world. Inventive imitation is an ancient and vital part of the trade of processes and ideas: from bronze age skeuomorphs – objects or features which copy the designs of similar artefacts in different materials – described by archaeologists, to the attempt to develop a distinctively English style of luxury in imitation of Chinese and Japanese imports.

With no economists in the group we weren’t completely clear on the use of hedonic, another technical term which Berg employs. The way Berg seems to intend it is to describe the different pleasure-giving properties of commodities, and to analyze commodities as bundles of these properties. We were taken with this emphasis on the experience of physical properties, and the central place this account gives to aesthetics.

Some of us felt parts of Berg’s argument were over-extended and too Eurocentric. For example, Berg argues that the changes in materials in Etruscan burial goods led to the spread of new luxury techniques. This is perhaps a partial and luxuriant way to analyze the burial goods themselves, which in their own cultural setting are ‘essential luxuries’, indispensable to the rites of mourning and the dead. The meaning and attraction of materials – and their transmission from one place to another – may have cultural meanings which are poorly captured by economic histories and a very sunnily aesthetic approach. We speculated on other ways in which you might capture other properties and relations between human bodies, cultures, and materials – this suggested some future reading, but we came to no conclusions.

STS doomed?

By Jon Agar, on 29 November 2010

The pessimistic view of anti-cuts protesters…

Science on barricades I

By ucahksm, on 19 October 2010

Photos say more than a thousand words, though their message may depend on what we want to see (see Susan Sontag’s ‘Regarding the pain of others’). Here are more than three thousand words from the rally for Science at Whitehall on the 9th. The blog does not allow me to publish more than one picture per go, for some reason.

Deep concentration