Mammoths and Magdalenians: A Summer in the Ice Age

By Josephine Mills, on 30 August 2017

This August I swapped the busy streets of London for the beaches and archaeology of the island of Jersey, which is located close to the Normandy coastline in the English Channel. Jersey is an incredibly archaeologically rich location with many prehistoric and historic sites of interest. The project that I work with is called Ice Age Island (twitter hashtag: #iceageisland) and we are particularly focused on archaeological deposits from the Middle Palaeolithic, Upper Palaeolithic and Mesolithic sites on the island.

Jersey is currently an island but during cooler periods of the Pleistocene (Ice Age), when sea level fell, it regularly became part of the larger Channel landmass of the Continental Shelf. The Ice Age Island team are a group of researchers and students focused on unravelling this part of the island’s past by exploring terrestrial sites on the island but also investigating how humans used the wider, now submerged, landscape that surrounded it. Currently work is focused on reconstructing geological and topographical mapping of the Continental Shelf and also working out how artefacts from the sites we are excavating and studying can contribute to our understanding of behaviour in the offshore area.


Fig 1. A map showing the location of Jersey and the other Channel Islands. Numbered locations refer to Middle Palaeolithic sites in the area: 1= Le Rozel, 2= La Cotte a la Chèvre, 3 = La Cotte de St. Brelade, 4 = Mont Dol

My PhD is focused on the Middle Palaeolithic of Jersey, when Neanderthals visited the island, particularly the site of La Cotte de St. Brelade on the south-west coast. La Cotte is a sheltered cave-like area formed within a granite T-shaped ravine system and records Neanderthal activity intermittently from around 240,000 to 40,000 years ago. The work that I do helps to contribute to our understanding of how Neanderthals interacted with the offshore landscape, particularly where they obtained the raw material needed to make stone artefacts. The use of stone, particularly flint, to make tools was key to survival for Neanderthals. Tools made from stone were used for activities like hunting, food processing, and preparing hides—giving access to food, nutrition, defence, and warmth. Flint presence and absence fluctuates in the deposits at La Cotte, and by understanding differences in the types of raw materials in the different archaeological layers we hope to reconstruct a model of the availability of different sources; this will in turn shed light on Neanderthal behaviour in the wider landscape. So this summer I’ve swapped three museums for one museum archive in Jersey, where I have been analysing flint tools from La Cotte!


Fig 2: View into the T-shaped ravine system at La Cotte de St. Brelade (own image)

However, Ice Age Island is made up of multiple strands of investigation, and the most active is the archaeological training dig and field school, which UCL has been involved in for seven excavation seasons. The site is run by Dr Ed Blinkhorn (Senior Archaeologist, UCL and Archaeology South East), and supported by supervisors and students from different universities. Excavation is focused on the Magdalenian (a time period spanning 17,000–12,000 years ago) site of Les Varines, where the archaeological deposits are dated to approximately 14,000 years ago.


Fig 3. Students and staff working at the excavation during the site open day (own image)

The current interpretation is that the site represents a hunter-gatherer camp where Magdalenian people lived for some parts of the year. Located at the sheltered apex of a wide valley system, the site would have been in a prime position for tracking migrating animals, like deer and horse, in the landscape below. However, the story of Les Varines has changed through the seven years the site has been excavated. Initially the artefacts that were found appeared disturbed by post-depositional processes and were no longer in the original positions in which the Magdalenian people had left them. This limited the behavioural inference that could be drawn from the artefacts. However, as new trenches and test pits were put in, guided by geophysical survey, areas of intact Upper Palaeolithic archaeology were discovered!

This year has been a really exciting excavation season, as the types of finds being made have diversified from flint tools to include a mammoth tooth and multiple constructed hearth areas. This allows a much more varied picture of life at the site, demonstrating the different activities the Magdalenian people were carrying out!


Ed Blinkhorn and Letty Ingrey (UCL/ Archaeology South East) talking about the potential mammoth tooth find


UCL MSc Palaeolithic Archaeology and Palaeoanthropology students Leah and Fiona discussing uncovering the Magdalenian Hearth Features

So if you haven’t seen me in a museum for a while – that’s what I’ve been up to!

Normativity November: Defining the Archaeological Normal

By Stacy Hackner, on 23 November 2016

This post is part of QMUL’s Normativity November, a month exploring the concept of the normal in preparation for the exciting Being Human events ‘Emotions and Cancer’ on 22 November and ‘The Museum of the Normal’ on 24 November, and originally appeared on the QMUL History of Emotions Blog.

by Stacy Hackner


The history of archaeology in the late 19th and early 20th centuries can be read as the history of European men attempting to prove their perceived place in the world. At the time, western Europe had colonized much of the world, dividing up Africa, South America, and Oceania from which they could extract resources to further fund empires. Alongside this global spread was a sincere belief in the superiority of the rule of white men, which had grown from the Darwinian theory of evolution and the subsequent ideas of eugenics advanced by Darwin’s cousin Francis Galton: not only were white men the height of evolutionary and cultural progress, they were the epitome of thousands of years of cultural development which was superior to any other world culture. According to their belief, it was inevitable that Europeans should colonize the rest of the world. This was not only the normal way of life, but the only one that made sense.

In modern archaeology, we let the data speak for itself, trying not to impose our own ideas of normality and society onto ancient cultures. One hundred years ago, however, archaeology was used as a tool to prove European superiority and cultural manifest destiny, and without the benefit of radiocarbon dating (invented in the 1940s) to identify which culture developed at what time, Victorian and Edwardian archaeologists were free to stratify ancient cultures in a way that supported their framework that most European = most advanced. “European-ness” was defined through craniometry, or the measurement and appearance of skulls, and similar measurements of the limbs. Normality was defined as the average British measurement, and any deviation from this normal immediately identified that individual as part of a lesser race (a term which modern anthropologists find highly problematic, as so much of what was previously called “race” is culture).

In my research into sites in Egypt and Sudan, I’ve encountered two sites that typify this shoehorning of archaeology to fit a Victorian ideal of European superiority. The first is an ancient Egyptian site called Naqada, excavated by Sir William Matthew Flinders Petrie in the 1890s. Petrie is considered the founder of modern, methodological archaeology because he invented typology – categorizing objects based on their similarity to each other. As an associate and friend of Galton and others in the eugenics circle, he applied the same principle to categorizing people (it’s likely that his excavations of human remains were requested by Galton to diversify his anthropometric collection). Naqada featured two main types of burials: one where the deceased were laid on their backs (supine) and one where the deceased were curled up on their side (flexed). Petrie called these “Egyptian” and “foreign” types, respectively. The grave goods (hand-made pottery, hairpins, fish-shaped slate palettes) found in the foreign tombs did not resemble any from his previous Egyptian excavations. The skeletons were so markedly different from the Egyptians – round, high skulls of the “Algerian” type, and tall and rugged – that he called them the “New Race”. Similarities, such as the burnt animal offerings found in the New Race tombs (present in Egyptian tombs as symbolic wall paintings), were to Petrie obviously naïve imitations made by the immigrants. However, the progression of New Race pottery styles pointed to a lengthy stay in Egypt, which confused Petrie. Any protracted stay among the Egyptians must surely have led to trade: why then was there an absence of Egyptian trade goods? His conclusion was that the New Race were invading cannibals from a hot climate who had completely obliterated the local, peaceful Egyptian community between the Old and Middle Kingdoms.

Of course, with the advent of radiocarbon dating and a more discerning approach to cultural change, we now know that Petrie had it backwards. The New Race are actually a pre-Dynastic Egyptian culture (4800–3100 BC), who created permanent urban agricultural settlements after presumably thousands of years of being semi-nomadic alongside smaller agricultural centres. Petrie’s accusation of cannibalism is derived from remarks by Juvenal, a Roman poet writing centuries later. It also shows Petrie’s racism – of course these people from a “hot climate” erased the peaceful Egyptians, whose skulls bear more resemblance to Europeans. In actuality, Egyptian culture as we know it, with pyramids and chariots and mummification, developed from pre-Dynastic culture through very uninteresting centuries-long cultural change. Petrie’s own beliefs about the superiority of Europeans, typified by the Egyptians, allowed him to create a scientific-sounding argument that associated Africans with warlike invasion that halted cultural progression.

The second site in my research is Jebel Moya, located 250 km south of the Sudanese capital of Khartoum, and excavated by Sir Henry Wellcome from 1911 to 1914. The site is a cemetery that appears to be of a nomadic group, dating to the Meroitic period (3rd century BC–4th century AD). The site lacks the pottery indicative of the predominant Meroitic culture, so the skulls were used to determine racial affiliation. Meroe was seen as part of the lineage of ancient Egypt – despite being Sudanese, the Meroitic people adopted pyramid-building and other cultural markers inspired by the now-defunct Egyptian civilization. Because many more female skeletons were discovered at this site than male, one early hypothesis was that Jebel Moya was a pagan and “predatory” group that absorbed women from southern Sudanese tribes either by marriage or slavery and that, as Petrie put it, it was “not a source from which anything sprang, whether culture or tribes or customs”. Yet the skulls don’t show evidence of interbreeding, implying that they weren’t importing women, and later studies showed that many of the supposed female skeletons were actually those of young males. This is another instance of British anthropologists drawing conclusions about the ancient world using their framework of the British normal. If the Jebel Moyans weren’t associating themselves with the majority Egyptianized culture, they must be pagan (never mind that the Egyptians were pagan too!), polygamous, and lacking in any kind of transferable culture; in addition, they must have come from the south – that is, Africa.

Sir Henry Wellcome at the Jebel Moya excavations
Credit: Wellcome Library, London.

These sites were prominent excavations at the time, and the skeletons went on to be used in a number of arguments about race and relatedness. We now know – as the Victorian researchers reluctantly admitted – that ruggedness of the limbs is due to activity, and that a better way to examine relatedness is by examining teeth rather than skulls. However, the idea of Europeans as superior, following millennia of culture that sprang from the Egyptians and was continued by the Greeks and Romans, was read into every archaeological discovery, bolstering the argument that European superiority was normal. Despite our focus on the scientific method and attempting to keep our beliefs out of our research, I wonder what future archaeologists will find problematic about current archaeology.


Addison, F. 1949. Jebel Moya, Vol I: Text. London: Oxford University Press.

Baumgartel, E.J. 1970. Petrie’s Naqada Excavation: A Supplement. London: Bernard Quaritch.

Petrie, W.M.F. 1896. Naqada and Ballas. Warminster: Aris & Phillips.

What is bread?

By Stacy Hackner, on 16 March 2015

by Lara Gonzalez






When I started my research on the 9000-year-old bread from Çatalhöyük (Turkey) I began to wonder about bread-related facts that we hear on a daily basis. Many questions came to my mind: what do we understand by bread? How many types of bread are there? Is bread the base of every diet around the world? Is bread good for us? I realised most of these questions were mainly related to things that people believe to be true rather than to real scientific evidence.

For instance, when I asked my friends or family what they think bread is, they all gave the same type of answer: bread is something made of wheat that we eat every day; bread is the base of our diet. However, that is not completely true. Bread is not understood in the same way by everybody or every society in the world. While for the majority of European society bread normally refers to a leavened and baked food made of wheat flour, water and salt, for a person belonging to a South American or Indian community that definition might seem rather limited and incomplete. Depending on the area of the world we are in, bread can mean very different things to the people living there. There are two factors to consider when looking into this: the local plant resources available in the different areas of the world, and the cultural implications of bread such as cooking traditions, identity or cosmology.

As part of my experience as a Research Engager at UCL, explaining what I understand by bread is not an easy task. When visitors ask me what my doctoral research is about and I answer that I study archaeological bread from Turkey, I can see the look on their faces. However, that look is mainly the result of the preconceived notions I mentioned earlier. As a consequence, my immediate response would be: “What I mean by bread is not what we buy in Tesco!” There is a good reason for this bias: the majority of visitors are likely to have been brought up in Europe or the so-called Western societies, where bread is considered to be a baked and leavened flour preparation. From the plant resources point of view, there are basic differences in the ingredients that people choose to make bread, and we actually find that they vary quite a lot among the different areas of the world. While in Europe our bread products are mainly made of wheat species, in places like Africa, Asia and Central and South America other plant resources are primarily consumed in bread form. Many diverse types of bready preparations are made of local plant species, the main ones being millets in Africa and West and South Asia, rice in South-East Asia, and corn (maize) in the New World.

At this point of the engagement, if the botanical explanation has not sustained my point yet, I turn to the cultural explanation proposed by Dorian Fuller, Professor in Archaeobotany at UCL, and Michael Rowlands (2011), who have defined the Bread Culture. According to these researchers, by looking at the archaeobotanical and archaeological record we can distinguish two marked areas in the world in relation to bread products. They propose a clear frontier which separates bread cultures from those which cannot be characterised as such. On one side we see a cultural area formed by the Mediterranean, North Africa and West Asia, where wheat and barley species started to be cultivated 11,000 years ago, and where grinding stones and milling tools have been recovered in high quantity, with evidence of milling traditions from the Epipalaeolithic. On the other hand, we see a completely different area of the world where bready products were not present until modern times. South-East Asia, with China as the centre, presents a distinctive pattern that varies from the Western world. The communities in these areas did not base their diet on wheat and barley but on rice and millets (7000–6000 BC). That is when the look on the faces of visitors at the UCL museums really starts to make sense to me: we live in the Bread Culture! We are part of it!

Now my task, as a defender of the deconstruction of the term bread, is to explain to visitors that many types of cereal foods should therefore fall into the category of bread. For example, if we were in Ethiopia, bread would mainly be made of teff and it would not contain yeast or any other raising agent. However, if we were in China, rice cakes and millet noodles would be considered the ‘bread’ of the society and the base of the diet. These types of breads would also be cooked differently. Here is where we get into the diversity of bread making. Returning to Fuller and Rowlands’ (2011) arguments, while in Western Europe we see a cooking tradition with a main focus on baking and grilling, related to a cosmology in which the smoke and fumes feed the gods, in South-East Asia we see a boiling and steaming tradition. This would be directly connected with a cosmology centred on the ancestors, in which the descendants’ aim is to keep them close, just as boiling and steaming are cooking traditions which keep ingredients together (Lévi-Strauss).

After this explanation, I see that some of the visitors start to look at bread with other eyes and ‘engage’ in a conversation about how food is understood differently in different parts of the world and in different periods of history. That is when I feel I have reached my target as a research engager at UCL: I have created an exchange of ideas and thoughts that benefits my research, and hopefully I will have made people wonder and think the next time they choose to buy a baguette or pita bread!



Fuller, D. Q. & Rowlands, M. 2011. Ingestion and Food Technologies: Maintaining Differences over the long-term in West, South and East Asia. In: Wilkinson, T. C., Sherratt, S. & Bennet, J. (eds.) Interweaving Worlds: systematic interactions in Eurasia, 7th to 1st millennia BC. Oxford: Oxbow Books.


Question of the Week:

How tall were ancient Egyptians?

By Misha Ewen, on 21 January 2015

Misha Ewen

This was the first question I was asked on my first day in my new role as a Student Engager in the Petrie Museum. The visitor came up with it while looking at some of the sandals – of different sizes – which have survived and are displayed in the museum’s collection. One sandal appeared to me to be around a modern-day size 9 or 10, so I guessed that those living in ancient Egypt were of similar stature to ourselves. I then directed the visitor towards some of the head rests in the collection, from which, in what might be deemed a very ‘unscientific’ way, we also made some guesses about the size of ancient Egyptians, although we wondered whether we were looking at objects made for adults or children.

© Petrie Museum, UCL.


It seems that our guesses were not too far from some archaeological findings. In doing some research I learned that in under 2000 years the Egyptian population changed from being ‘an egalitarian hunter-gatherer/pastoral population to a highly ranked agricultural hierarchy with the pharaoh as the divine ruler’. One study suggested that from the Predynastic period (5000 BCE) until the start of the Dynastic period (3100 BCE) the stature of Egyptians increased, which was followed later by a decline (up to 1800 BCE). They put this down to an intensification in agricultural production which meant that access to food was more reliable, but they also suggested that it reflected the beginnings of social ranking. The decline in stature in the Dynastic period was the result of even greater ‘social complexity’, when there was greater difference in access to food and healthcare: essentially, the gap between the rich and the poor had widened.

Head rest with hieroglyphics. © Petrie Museum.

Nevertheless, over this whole period they found that the mean height (of their sample of 150 skeletons) was 157.5cm (or 5ft 2in) for women and 167.9cm (or 5ft 6in) for men, quite like today. What is quite different is that compared with the average difference of 12-13cm between men and women found in modern populations, in ancient Egypt it was only 10.4cm. This came as a surprise to the researchers, as men in ancient Egypt were thought to have benefitted more (than would be so today) from preferential access to food and healthcare. But their findings probably reflect the fact that the status of women in ancient Egypt was relatively high compared to other ancient societies.

Like today, there are many variables which would have determined the height of an ancient Egyptian. First off, like modern-day England, Egypt was an ethnically diverse and cosmopolitan society where body shapes and sizes of all kinds would have been found: there was no single build, nor hair or skin colour. And also quite like today, the wealth and social status of an individual played a part in determining their physique (although in twenty-first century England being overweight is more often linked to deprivation rather than wealth). All through human history we can see multiple factors – from disease, social status, access to food and cultural aesthetics (to name a few) – determining our physique. As we continue to ponder the ideal, healthy body-type in our own society, I’m sure we’ll continue to look back and ask questions about our predecessors.

For the cited archaeological study, click here.

Movement Taster – Movement in Premodern Societies

By Stacy Hackner, on 14 May 2014


The following is a taster for the Student Engagers’ Movement event taking place at UCL on Friday 23 May. Stacy, a researcher in Archaeology, will be discussing movement through the lens of biomechanics.

by Stacy Hackner

Imagine you’re in the grocery store. You start in the produce section, taking small steps between items. You hover by the bananas, decide you won’t take them, and walk a few steps further for apples, carrots, and cabbage. You then take a longer walk, carefully avoiding the ice cream on your way to the dairy fridge for some milk. You hover, picking out the semi-skimmed and some yogurt, before taking another long walk to the bakery. This pattern repeats until you’re at the checkout.

What you may not realize is that this pattern of stops and starts with long strides in between may be intrinsic to human movement, if not common to many foraging animals. A recent study of the Hadza, a hunting and gathering group in Tanzania, shows that they practice this type of movement, known as the Lévy walk (or Lévy flight in birds and bumblebees). It makes sense on a gathering level: you’ve exhausted all your resources in one area, so you move to another locale further afield, then another, before returning to your base. When the Hadza have finished all the resources in an area, they’ll move camp, allowing the resources to regrow (for us, this is the shelves being restocked). This study links us with the Hadza, and the Hadza with what we can loosely term “ancient humans and their ancestors”.

Diagram of a Lévy walk. Credit: Leif Svalgaard.

It’s unsurprising that the Hadza were used to examine the Lévy walk and probabilistic foraging strategies. As they are one of the few remaining hunter-gatherer groups on the planet, they are often used in scientific studies aiming to find out how humans lived, ate, and moved thousands of years ago, before the invention of agriculture. The Hadza have been remarkably amenable to being studied by researchers investigating concepts including female waist-to-hip ratios, the gut microbiome, botanical surveys, and body fat percentage. Tracking their movement around the landscape using GPS units is one of the most ingenious!
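The Lévy pattern itself is simple enough to simulate: headings are drawn uniformly at random, while step lengths follow a heavy-tailed power-law distribution, producing dense clusters of short steps punctuated by occasional long relocations. The sketch below is purely illustrative (the exponent mu and minimum step x_min are arbitrary choices, not values from the Hadza study):

```python
import math
import random

def levy_walk(n_steps, mu=2.0, x_min=1.0, seed=0):
    """Simulate a 2-D Levy walk: uniform-random headings with
    power-law distributed step lengths, P(l) ~ l^(-mu) for l >= x_min."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    path = [(x, y)]
    for _ in range(n_steps):
        # Inverse-transform sampling from the power-law tail:
        # mostly short steps, with rare very long ones.
        u = rng.random()
        length = x_min * (1.0 - u) ** (-1.0 / (mu - 1.0))
        angle = rng.uniform(0.0, 2.0 * math.pi)
        x += length * math.cos(angle)
        y += length * math.sin(angle)
        path.append((x, y))
    return path

path = levy_walk(1000)
```

Plotting `path` reproduces the characteristic foraging shape: dense knots of local searching connected by long commutes between patches, much like the grocery-store trip described above.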

Much of the theoretical background to my work is based on human movement around the landscape. The more an individual moves, the more his or her leg bones will adapt to that type of movement. Thus it is important to examine how much movement cultures practicing different subsistence strategies perform. The oft-cited hypothesis is that hunter-gatherers perform the most walking or running activity, and that the transition to agriculture decreased movement. An implicit assumption in this is that males, no matter the society, always performed more work requiring mobility than females. This has been upheld in a number of archaeological studies: between the Italian Late Upper Paleolithic and the Italian Neolithic, individuals’ overall femoral strength decreased, but males’ decreased more; over the course of the Classic Maya period (350–900 AD), the difference in leg strength between males and females decreased, solely due to a reduction in strength of the males. The authors posit that this is due to an economic shift allowing the males to be free from hard physical labour.

However, I take issue with the hypothesis that females always performed less work. The prevailing idea of a hunting man settling down to farm work while the gathering woman retains her adherence to household chores and finding local vegetables is not borne out by the Hadza. First, both Hadza men and women gather. Their resources and methods differ – men gather alone and hunt small game while women and children gather in groups – but another GPS study found that Hadza women walk up to 15 km per day on a gathering excursion (men up to 18 km). 15 km is not exactly sitting around the camp peeling tubers. Another confounding factor in bone research is the effect of testosterone: given similar levels of activity, a man is likely to build more bone than a woman, leading archaeologists to believe he did more work. Finally, hunting for big game – at least for the Hadza – occurs rarely (about once every 30 hunter-days, according to one researcher) and may be of more social significance than biomechanical, and gathered berries account for as many calories as meat; perhaps we should rethink our nomenclature and call pre-agricultural groups gatherer-gatherers or just foragers.

For a video of Hadza foraging techniques, click here.

For a National Geographic photo article, click here.



Marchi, D. 2008. Relationships between lower limb cross-sectional geometry and mobility: the case of a Neolithic sample from Italy. AJPA 137, 188-200.

Marlowe, FW. 2010. The Hadza: Hunter-Gatherers of Tanzania. Berkeley: Univ. California Press.

O’Connell, J and Hawkes, K. 1998. Grandmothers, gathering, and the evolution of human diets. 14th International Congress of Anthropological and Ethnological Sciences.

Raichlen, DA, Gordon, AD, AZP Mabulla, FW Marlowe, and H Pontzer. 2014. Evidence of Lévy walk foraging patterns in human hunter–gatherers. PNAS 111:2, 728-733.

Wanner, IS, T Sierra Sosa, KW Alt, and VT Blos. 2007. Lifestyle, occupation, and whole bone morphology of the pre-Hispanic Maya coastal population from Xcambó, Yucatan, Mexico. IJO 17, 253-268.

Taxonomies of Bones and Pots – The Petrie Pops up at the Grant Museum

By Niall Sreenan, on 10 March 2014




On the 13th of February, objects and ideas from the Petrie Museum of Egyptian Archaeology “popped-up” in the neo-Victorian surrounds of the Grant Museum of Zoology in an event that sought to explore some of the ways in which archaeologists and biologists both engage in the act of classification and taxonomy. I attended this event in the guise of ‘Student Engager’, with the intention of sharing with visitors my own research on Darwinian evolution and literature. More on this later, but for now, it is perhaps a good idea to examine the procedure of taxonomy itself, as it relates specifically to biology and archaeology.

Taxonomy (from the Greek ‘taxis’ meaning ‘order’ and ‘nomos’ meaning ‘law’) refers broadly to the act of (unsurprisingly) ordering knowledge, and to the examination of the principles that underlie these logically ordered schemata. It is this process of ordering that practitioners of both ancient Egyptian archaeology and zoology engage in – albeit in subtly different ways.

Taxonomy in biology, as we understand it now, is widely considered to derive from the work of the Swedish 18th-century naturalist Carolus Linnaeus. His seminal work in taxonomy, most famously given expression in Systema Naturae (1735), has bequeathed to us a method of biological classification whose finer details are now scientifically inaccurate, but which to some extent lives on in popular thought (think of the game “Animal, Vegetable, or Mineral”) and whose basic outline persists in biology to this day. Linnaeus divided the natural world into three distinct types or ‘Kingdoms’ – animal, plant, and mineral – and divided each of these into classes, with those categories dividing in turn into orders, families, genera, and species.

Regnum Animale – Systema Naturae

Today, biological classification requires a more complex, nuanced system, in which there are six ‘kingdoms’ subsumed under three ‘domains’ of life, and which takes into account another category of life within this schema, the phylum. Moreover, the Linnaean classificatory system has given way to the Darwinian ‘Tree of Life’ as the dominant visual representation of the natural world, as evidenced by the current exhibition in the British Library that examines the nature of the visual representation of science: one installation in particular allows us to explore the natural world in great detail with touchscreen technology by navigating this ‘Tree of Life’, and it has a profoundly disorienting effect on our image of our human selves as the centre or pinnacle of the natural world. Homo sapiens in this model occupies an obscure, diminutive branch amongst the great, entangled, and monstrously abundant foliage of other species.

‘Tree of Life’ – Origin of Species, 1859

Yet despite the insistence of the dynamic, non-hierarchical schema of the Darwinian ‘Tree of Life’, the basic hierarchical ranking of Linnaean taxonomy persists (necessarily) in contemporary biology. Without this “ordering” of knowledge, with its embedded hierarchies and rankings, biological classification would be a disordered chaos. How then does this taxonomic procedure play out in other fields, distinct from biology?

While the significance of taxonomy is evident (and its history well known) in biology, the taxonomic aspects of archaeology are perhaps not as widely appreciated. The case of Flinders Petrie, the founder of the Petrie Museum of Egyptian Archaeology at UCL, provides us with a particularly apposite opportunity to excavate the function and significance of taxonomic classification in archaeology. Amongst Petrie’s most crucial contributions to archaeology are his schematic, chronological sequences of ancient Egyptian pottery. Petrie, faced with an abundance of predynastic pottery collected along the Nile at various locations, needed a method of placing these pots in chronological order. Unlike the distinctly unscientific methods of some of Petrie’s predecessors, for whom the act of archaeology was partially mythic in its reconstruction of the past, Petrie paid particularly close attention to the morphology of the objects with which he was faced and treated these morphologies as what we would today call data-sets. Based on the assumption that the morphologies of pottery changed over time (in an almost evolutionary fashion), Petrie was able, via a complex mathematical process, to systematize the pottery and sequence it chronologically, creating, in effect, a taxonomical method of dating pots. Both Petrie’s skill as a mathematician and his diligence as an archaeologist are underlined here, as today this statistical approach is undertaken only with computers, the complex arithmetic required simply taking too long for humans. The consequences of Petrie’s methodology – what is now called seriation – for the discipline of archaeology were and still are profound. The process is an important method in contemporary archaeology and, in particular, it revolutionized our understanding of the timeline of Egyptian history, all through his taxonomic analysis of pottery.
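The logic of seriation can be illustrated with a toy version of the “reciprocal averaging” technique used in modern computational seriation (a modern stand-in, not Petrie’s own sequence-dating arithmetic, and the incidence data below are hypothetical): graves are repeatedly re-scored by the average score of the pottery types they contain, and types by the average score of the graves containing them, until sorting the graves by score concentrates each type’s occurrences into a contiguous block, i.e. a relative chronological order.

```python
import random

def seriate(incidence, n_iter=100, seed=1):
    """Toy seriation by reciprocal averaging. incidence[g][t] is 1 if
    pottery type t occurs in grave g. Returns a relative ordering of
    the graves (direction of time is not determined by the data)."""
    n_graves, n_types = len(incidence), len(incidence[0])
    rng = random.Random(seed)
    score = [rng.random() for _ in range(n_graves)]
    for _ in range(n_iter):
        # Score each type by the mean score of the graves containing it.
        t_score = [
            sum(score[g] for g in range(n_graves) if incidence[g][t])
            / sum(1 for g in range(n_graves) if incidence[g][t])
            for t in range(n_types)
        ]
        # Re-score each grave by the mean score of the types it contains.
        score = [
            sum(t_score[t] for t in range(n_types) if incidence[g][t])
            / sum(1 for t in range(n_types) if incidence[g][t])
            for g in range(n_graves)
        ]
        # Centre and rescale so the scores don't collapse to a constant.
        mean = sum(score) / n_graves
        score = [s - mean for s in score]
        spread = max(abs(s) for s in score) or 1.0
        score = [s / spread for s in score]
    return sorted(range(n_graves), key=lambda g: score[g])

# Hypothetical incidence matrix: rows are graves in scrambled order,
# columns are three pottery types whose popularity overlaps in time.
graves = [
    [0, 1, 1],
    [1, 1, 0],
    [1, 0, 0],
    [0, 0, 1],
]
order = seriate(graves)
```

Here the overlapping type assemblages force the sequence 2, 1, 0, 3 (or its reverse, since the data alone cannot tell early from late). Real assemblages are far larger and noisier, and techniques like correspondence analysis play this role, but the principle of ordering by concentrating co-occurrence is the same.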


It was these very histories and methods of taxonomy in biology and archaeology that provided the crucial link between the Petrie and Grant Museums, and in turn provided the subject matter and theme of the event which I attended. Visitors were invited to engage in a number of taxonomic activities: reconstructing the shattered sequence of Flinders Petrie’s classification of pots, re-connecting and correctly identifying the scattered skeletal remains of a gorilla, and placing ancient Egyptian pots in their correct chronological order. These acts of reconstruction and identification, of the re-assembling of broken sequences and structures, stress the importance of taxonomy and classification in both biology and archaeology – disciplines whose methods, goals, and data-sets overlap in the fields of anthropology and osteo-archaeology. Moreover, they invite the participant to engage in the very ordering of knowledge out of disorder that underlies the procedure of taxonomy (albeit without the complex statistical mathematics). By the same token, taking part in a re-construction allows us to consider the implications of breaking up and disrupting these structures – of deconstructing systems of ordered classification.

My own research as a PhD student at UCL explores the way in which reading the work of Charles Darwin can provide us with new critical and theoretical insight into works of literature – and how reading works of literature has a reciprocal effect on our readings of Darwin. I referred to the Darwinian ‘Tree of Life’ earlier in this blog and it is to this I return now. Previously, I suggested that the tree model of life provided us with a more nuanced and dynamic method of ‘ordering’ our knowledge of the natural world than the Linnaean classificatory system. This view of the natural world, I stated, was profoundly decentring: we, Homo sapiens, are removed from our self-appointed place at the top of the hierarchy of species. And yet, does Linnaeus specifically place humans at the top of a hierarchy? Looking at the table of species published in Systema Naturae, we find a distinct echo of the deliberate subordination of all animals to the supremacy of man found in older visual and conceptual models of the natural world. Linnaeus’ table puts us (‘anthropomorpha’) at its head, bestowing on us the title of “Number 1”. This repeats a schema passed down through Western thought since Aristotle: the Scala Naturae, or “Chain of Being”, in which humans sat below only God in the grand hierarchy of all species. In this context, Darwin’s arboreal structuring of the natural world, in which the human race is afforded no greater a position than a mouse or a mollusc, is defiantly radical, shunning the accepted wisdom of naturalist and biological thought since Ancient Greece. Moreover, the categories in Darwin’s model of life, the taxonomic leaves that sit upon the branches of genetic connection, are themselves unstable and subject to constant change.

Chain of Being – Rhetorica Christiana 1579


Darwin himself wrote in On the Origin of Species (1859):


“Naturalists try to arrange the species, genera, and families in each class, on what is called the Natural System. But what is meant by this system? Some authors look at it merely as a scheme for arranging together those living objects which are most alike, and for separating those which are most unlike; or as an artificial means for enunciating, as briefly as possible general propositions …”


Darwin, it seems, wishes to question the very substance and authority of this natural system, inaugurated by Linnaeus. For him, it is a necessary evil – an unwanted and ‘artificial’ ossification of the dynamism and change inherent in biological life that is nevertheless required for order and brevity in biology.


Yet, he goes further in his critique of classificatory systems:


“…we shall have to treat species in the same manner as those naturalists treat genera, who admit that genera are merely artificial combinations made for convenience. This may not be a cheering prospect; but we shall at least be freed from the vain search for the undiscovered and undiscoverable essence of the term species…”


The term species is, for Darwin, an arbitrary linguistic imposition on an organic form that is by definition never stable and always in a state of flux. Those who criticize the worst vagaries of cultural, linguistic, and philosophical postmodernism might see in this biological relativism something to be maligned – a state of existential flux that results only in the melancholia of unstable and incomplete knowledge. Yet Darwin sees in it the prospect of an end to a “vain search”: the search for “essence”, linguistic, philosophical, and biological. We should instead, Darwin would assert, attend to the ‘entangled bank’ of differences to which he refers towards the end of the Origin of Species, rather than vainly trying to categorise and essentialise all of organic existence.

What, if anything, does this literary-critical digression of mine have to do with the taxonomical procedures of Flinders Petrie? Or, indeed, with his chronological series of pots? It might be worth asking, instead, what Petrie’s series of pots tell us about the humans that made them – or about the relationship these humans had to the form of the pots that they created. It is my job as a literary critic to focus on ‘difference’ in literature and art; to attend in detail to the specific and subjective detail of single works of culture and their relationships with history, with other works of art, with texts, and with the individuals that created them. Taxonomy, the ordering of knowledge, on the other hand, has a tendency to subordinate difference in the name of “order”. On an instrumental level, this ordering process is vital for biology to operate – we could not simply throw up our hands and declare that tigers are contingent and temporary balls of matter in a state of constant flux and, therefore, should not be named! Yet when it comes to the creative products of human hands and minds, there is an ethical dimension to be attended to: to subordinate difference in art and culture is to subordinate individual difference in human life.

Francis Galton, a cousin of Darwin and a colleague and acquaintance of Petrie, who worked at UCL in the early 20th century, saw in the science of taxonomy – underpinned by a misreading of Darwinian evolution and heredity – the potential to categorise and order human society on his own terms. He differentiated between the European races and the ‘lower races’ of man, creating, in effect, a taxonomy of human life. Not only is this scientifically incorrect, but the very act of naming, and of creating order in doing so, does violence to those whom it names – restricting their existence to a category in which variance and difference within that group cannot be registered, and asserting an unquestioned hierarchy of races similar to that of the Systema Naturae. A distinctive passage from Galton’s work Hereditary Genius elucidates his views on the hierarchies of life:

“The natural ability of which this book mainly treats, is such as a modern European possesses in a much greater average share than men of the lower races. There is nothing either in the history of domestic animals or in that of evolution to make us doubt that a race of sane men may be formed who shall be as much superior mentally and morally to the modern European, as the modern European is to the lowest of the Negro races”

Galton is considered to have inaugurated the pseudo-scientific practice of eugenics, a discipline which espoused the improvement of human ‘stock’ – the creation of a ‘race of sane men’ – through selective breeding and other methods. Today, owing to its racist foundations and its invidious implications in the 20th century, the very name can be used only in pejorative terms.

These are the dangers of taxonomy when applied, misguidedly and without reflection, to human culture. Certainly, it is not my argument that taxonomy or order is inherently wrong. It was, however, my intention at the event held in the Grant Museum on the 13th of February to try and disrupt and disorder the usual ways in which we think about taxonomy in all fields.

Interestingly, it is Darwin – a scientist, like Galton – who gives us an elegant means of resisting the worst vagaries of taxonomical essentialism. However, it is only through a detailed and sensitive reading of Darwin’s writing that this emerges from his texts. In other words, in order to see Darwin as holding ambivalent and philosophically interesting views on taxonomy and classification, it was necessary to ignore his taxonomical classification as “Scientist” and “Biologist” and instead attend to the specific literary detail of his work.

Works cited and further reading (in no particular order):

Charles Darwin, On the Origin of Species, ed. by Gillian Beer, New edition (OUP Oxford, 2008).

Francis Galton, Hereditary Genius, (London: Macmillan, 1892).

Debbie Challis, The Archaeology of Race: The Eugenic Ideas of Francis Galton and Flinders Petrie, (London: Bloomsbury, 2013)

Michel Foucault, The Order of Things, (London: Routledge, 1989)

How are Ancient Nubians Like Astronauts?

By Stacy Hackner, on 6 January 2014


Some respected individuals (supervisors, mentors, parents) have advised me to not get distracted by the primrose paths that crop up during a PhD. These primrose paths are always deliciously exciting, offering opportunities to study wonderful new topics that one can justify as marginally related to one’s thesis and therefore potentially of use. Of all the primrose paths I’ve followed, I never expected the most relevant one to be about astronauts.


Sudanese pyramids. (Wikimedia commons.)

My thesis explores ancient Nubia, the region that is now northern Sudan, from roughly 3000 years ago to medieval times. Unlike their contemporaries, the Egyptians, the Nubians didn’t have a system of writing until the Meroitic period (300 BCE-400 CE), a time of Egyptianizing influence. They built small pyramids and imported Egyptian goods, attesting to the influence of their famous northern neighbors. In the absence of writing (and even in the presence of texts, as humans tend to play with the truth), archaeologists try to build a picture of the ancient society using physical evidence, including human remains. Fortunately, the dry climate and sandy soil usually result in excellent bone preservation, allowing me to identify differences in bone shape. But wait – let me back up a little.

We aren’t entirely sure how bone works. There are two types of cell responsible for bone maintenance – osteoblasts and osteoclasts. Osteoblasts build bone, and osteoclasts take it away. The body is highly responsive to changes in activity, and bone is constantly updating itself accordingly. The general principle is that your body thinks what’s happening now will happen forever. Think about when you’re running a race: it’s hard to start because your body’s been used to standing still and needs some time to amp up your heart rate and muscle contractions. When you finish the race, your heart keeps pounding for a minute or two because it hasn’t quite got the signal to stop running yet. Bone works in a similar way. In response to physical stress, bone will accumulate more osteoblasts to strengthen itself. Each step makes tiny microfractures, which tell the bone “Come on, I’m breakin’ here! Give me more strength!” and the osteoblasts pile on. In the absence of activity – during periods of prolonged sitting or lying down – the osteoclasts come in to take away unnecessary bone. “You’re not using this one, right? Then we can send the calcium somewhere else.” The thing is, scientists don’t know all the signals involved in this process. We know what happens, but not the channels of communication. I like to imagine bone cells having little conversations with each other, but clearly it’s all on a neurochemical level we haven’t yet discovered.

The concept of bone building in necessary areas is clearly demonstrated in studies of elite athletes. In a study by Haapasalo et al (1998) of young female tennis players, the players gained significant bone mineral content in the bones of their dominant (forehand and serving) arm. When the authors looked at a control sample (girls who did not play tennis), there was minimal or no difference between their arms; there was also minimal difference between the nondominant arms of the tennis players and the controls.

Another study, by Shaw & Stock (2009), examined differences between university athletes who competed in either hockey or long-distance running. They found significant differences in the actual shape of the tibia (shin bone) due to the physical stress of these activities. The tibias of the long-distance runners were more elongated front to back, while the tibias of the hockey players were more even side to side, showing a distinct difference in the direction of activity in these sports. Clearly, osteoblasts were being sent to the locations where these athletes needed them most: for runners, the front, and for hockey players, the sides. It is important to point out that many of the studies investigating activity and bone growth look at adolescents, since their bones develop until the end of puberty. After that, it seems to take a lot more effort to alter bone shape and density.
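That directional difference can be quantified very simply, for illustration, as a ratio of the tibia’s antero-posterior (front-to-back) to medio-lateral (side-to-side) diameter at midshaft. The measurements below are invented for the example; the actual study used cross-sectional geometric properties rather than raw diameters.

```python
# A crude shape index for the tibia at midshaft: the ratio of the
# antero-posterior (AP) diameter to the medio-lateral (ML) diameter.
# Measurements below are invented for illustration only.

def shape_index(ap_mm, ml_mm):
    """AP/ML diameter ratio: values well above 1 indicate a bone
    elongated front-to-back; values near 1 indicate a rounder shaft."""
    return ap_mm / ml_mm

runner = shape_index(ap_mm=32.0, ml_mm=22.0)   # elongated front-to-back
hockey = shape_index(ap_mm=27.0, ml_mm=25.5)   # closer to round

print(f"runner: {runner:.2f}, hockey player: {hockey:.2f}")
```

On these made-up figures the runner’s tibia scores well above 1 while the hockey player’s sits close to 1, mirroring the front-back versus side-to-side contrast described above.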


Her bones are losing mineral content by the minute! (Wikimedia commons.)

And what about the other side of the cycle? The osteoclasts? That’s where the astronauts (and cosmonauts) come in. The constant pounding of our feet against the floor keeps our bones as strong and dense as they need to be for everyday use. In zero gravity, though, there’s no pounding, just the occasional soft push off the wall of the space station. The osteoblasts don’t have any stress to react to, and the osteoclasts assume the extra bone is useless, so it starts to be resorbed. It helps that astronauts are some of the most-studied individuals on our planet (and definitely the most studied off the planet!). During spaceflight, urinary calcium output increases, indicating that bone is being sapped of minerals, and post-spaceflight bone scans reveal a condition called “spaceflight osteoporosis”, similar to the osteoporosis experienced on earth – except that density is lost only from the legs, feet, and hips, all weight-bearing regions. Losses can reach 8% in four months, compared to a 1% loss per year for earth-bound sufferers of osteoporosis. The upper body and head generally remain unaffected (unless one of the astronauts was a tennis player, of course). One study found that after a “long-duration” spaceflight of 4-6 months, it took up to three years for astronauts to recover the bone density they’d lost in space (Sibonga et al, 2007). It makes one really appreciate gravity.
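A quick back-of-envelope comparison of the rates quoted above (treating the loss as linear, which the real biology certainly is not) shows just how dramatic the difference is:

```python
# Comparing the bone mineral loss rates quoted in the text,
# assuming simple linear loss for the sake of arithmetic.

spaceflight_loss = 0.08      # 8% of bone density lost...
spaceflight_months = 4       # ...over a four-month mission
earth_loss_per_year = 0.01   # ~1% per year for earth-bound osteoporosis

space_rate = spaceflight_loss / spaceflight_months   # fraction per month
earth_rate = earth_loss_per_year / 12                # fraction per month

print(f"spaceflight: {space_rate:.4f} per month")
print(f"earth:       {earth_rate:.4f} per month")
print(f"ratio:       {space_rate / earth_rate:.0f}x faster")
```

On these figures, the weight-bearing bones of an astronaut lose mineral roughly 24 times faster than those of an osteoporosis sufferer on earth.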

So how do I apply this to ancient populations? The data from astronauts indicates that most of the density lost is from trabecular bone, from the internal core, rather than from the outside. This means that even if ancient bones have lost density due to age, either before death or after burial, it’s likely to have happened from the inside out and thus the external shape should remain intact. This gives me more confidence in figuring out what kinds of activities they performed during adolescence, which in most cultures was when young people started to take up adult cultural roles. I hope to compare the shape of the bones of Nubians to those of athletes and to other populations whose activities are known in order to draw a better picture of their society.


For an amusing (but factual) look at the craziness that is astronaut and cosmonaut research, check out Mary Roach’s “Packing for Mars: The Curious Science of Life in the Void” (Norton & Co, 2010).

Haapasalo, H, P Kannus, H Sievänen, M Pasanen, K Uusi-Rasi, A Heinonen, P Oja, and I Vuori. 1998. Effect of long-term unilateral activity on bone mineral density of female junior tennis players. Journal of Bone and Mineral Research 13/2, 310-319.

Shaw, CN and JT Stock. 2009. Intensity, Repetitiveness, and Directionality of Habitual Adolescent Mobility Patterns Influence the Tibial Diaphysis Morphology of Athletes. AJPA 140, 149-159.

Sibonga, JD, HJ Evans, HG Sung, ER Spector, TF Lang, VS Oganov, AV Baulkin, LC Shackelford, and AD LeBlanc. 2007. Recovery of spaceflight-induced bone loss: bone mineral density after long-duration missions as fitted with an exponential function. Bone 41, 973-978.


The Staffordshire Hoard: Defining “Treasure”

By Gemma Angel, on 14 January 2013

  by Felicity Winkley






In July 2009 in a field in Hammerwich, Staffordshire, Terry Herbert stumbled upon the largest hoard of gold and silver Anglo Saxon metalwork ever found. Comprising over 3,500 items, the Staffordshire Hoard – as it is now known – totals some 5.0 kilos of gold, 1.4 kilos of silver and 3,500 cloisonné garnets.[1] It is almost exclusively war-gear, save for two or possibly three crosses, the largest of which has been folded. The sheer quantity of finds in the hoard and their exquisite workmanship have caused archaeologists to speculate that what we know of 7th century metalwork may have to be completely rethought – and all this from a find that was literally sitting on a field surface due to be ploughed into oblivion. Today the Staffordshire Hoard is back in the news: last November, again after the field had been recently ploughed, a team from Archaeology Warwickshire found a further 91 associated objects[2], and just 2 weeks ago, 81 of these were ruled to be treasure at a coroner’s inquest.[3]

A selection of objects from the Staffordshire Hoard, including the folded cross.
Photograph © Portable Antiquities Scheme.


I had already been thinking about treasure, in light of the Researchers in Museums project, as a potential point on which to engage people with my research at the Petrie Museum. ‘Treasure’, I thought, could be useful for capturing visitors’ imagination, evoking chests of gold doubloons on desert islands, or hoards of jewels guarded by slumbering dragons. Indeed, as a noun the Oxford English Dictionary defines treasure as: ‘a quantity of precious metals, gems, or other valuable objects’.[4] A quantity, gosh!

At first glance, however, the crowded cases at the Petrie Museum might not seem to boast many objects which we would hasten to describe as treasure – certainly few of the portable antiquities match the criteria laid out in the UK’s Treasure Trove legislation (but more on that later). And yet here the verb ‘to treasure’ becomes useful, for amongst those ‘Objects of Daily Use’ (as Petrie classified them) are numerous personal items which would certainly have been treasured – ‘kept carefully’ and ‘valued’ (see note 4) – by their owners: miniature vases for storing cosmetics, earrings, necklaces, shabtis, votive figurines… So how do we define treasure, and does it really matter?

A selection of objects from Petrie’s Objects of Daily Use:
beads and ear studs (top) and a shabti (bottom).
Image © University College London.

The answer to the latter, in terms of England’s archaeological record, is clearly yes, it does matter. Until it was addressed relatively recently, the legislation of “Treasure Trove” had existed as common law for centuries, originally as a means for the Crown to claim any riches found. Its three basic elements are as follows: the found object must be made of, or contain, a ‘substantial’ amount of gold or silver; have no known original owner (or heir); and have been buried with the original intention that it would later be recovered (known as animus revertendi).[5] As Cookson suggests, ‘Treasure Trove was conceived long before archaeology gave cultural value to old things, and considers valuables from an essentially financial perspective, not an artistic or historical one’.[6] Indeed, we can see how scant the protection was: anything not rendered in precious metal was disqualified, and even for an object of precious metal, it had to be proven that someone had buried it with an intent to return to it (quite a task!). In the light of increasing concerns from archaeologists and heritage professionals – fuelled in no small part by the extreme increase in the popularity of metal detecting during the 1970s and ‘80s – the legislation was overhauled in 1996. Enforced on the 24th September 1997, the new Treasure Act set out to ‘abolish treasure trove and to make fresh provision in relation to treasure.’[7]

The definition now covered the following:

  • Any object at least 300 years old, other than a coin, found to contain at least 10% precious metal;
  • All coins at least 300 years old from the same find which number, in the case of base metal coins, more than 10 or, in the case of gold and silver ones, more than 2;
  • Any object of whatever composition found in the same place as, or that had previously been together with, another treasure find;
  • Any object, not falling into the 3 categories above, that would previously have been treasure trove, namely modern coin hoards or similar displaying animus revertendi [8] (see also note 7).
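For illustration only, the four criteria above can be reduced to a rule-of-thumb checker. This is a sketch, not the statute: the real Act and its later amendments contain qualifications this simplification ignores, and every field name here is invented.

```python
# A rough sketch of the 1996 Treasure Act criteria listed above.
# Not legal advice: the Act contains qualifications this simplification
# ignores, and all field names are invented for the example.

def qualifies_as_treasure(obj):
    """obj: dict describing a find (all keys are hypothetical)."""
    old_enough = obj["age_years"] >= 300

    # 1. Non-coin object at least 300 years old with >=10% precious metal
    if old_enough and not obj["is_coin"] and obj["precious_metal_fraction"] >= 0.10:
        return True

    # 2. Coins at least 300 years old from the same find:
    #    more than 10 base-metal coins, or more than 2 gold/silver coins
    if old_enough and obj["is_coin"]:
        if obj["coin_metal"] in ("gold", "silver"):
            return obj["coins_in_find"] > 2
        return obj["coins_in_find"] > 10

    # 3. Any object found in the same place as another treasure find
    if obj["found_with_treasure"]:
        return True

    # 4. Objects that would previously have been treasure trove,
    #    e.g. modern coin hoards buried with intent to recover them
    return obj["animus_revertendi"]

# A 7th-century gold fitting easily qualifies under criterion 1;
# a lone base-metal object, however old, does not.
gold_fitting = {"age_years": 1300, "is_coin": False,
                "precious_metal_fraction": 0.9, "coins_in_find": 0,
                "coin_metal": None, "found_with_treasure": False,
                "animus_revertendi": False}
print(qualifies_as_treasure(gold_fitting))   # True
```

Note how a solitary base-metal find fails every one of these rules – precisely the gap in the legislation that its critics point to.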

The Crosby Garrett helmet, a Roman helmet
metal-detected in Cumbria and sold at auction
to an anonymous buyer, because it was not
saved for the nation as treasure (see note 11).

Since then, prehistoric base metal assemblages have also been added to the Treasure Act, and the UK’s portable antiquities are better protected than ever.[9] The Portable Antiquities Scheme, a voluntary recording scheme set up alongside the Treasure Act, has since 1997 recorded 831,595 objects (both treasure and otherwise) found outside of archaeological excavation – mostly by metal detectorists.[10] However, many would like to see the treasure legislation tightened further still, especially in light of recent landmark base metal finds, such as the astonishing Crosby Garrett Roman helmet, which was lost to a private buyer on account of its not qualifying as treasure. The fundamental question therefore still remains: what does treasure mean and, more importantly, what will it mean to future generations?

[3] http://www.bbc.co.uk/news/uk-england-stoke-staffordshire-20903152

[4] http://oxforddictionaries.com/definition/english/treasure?q=treasure

[5] N. E. Palmer: ‘Treasure Trove and Title to Discovered Antiquities’, in International Journal of Cultural Property 2, (1993), pp. 275-318.

[6] N. Cookson: ‘Treasure Trove: dumb enchantment or new law?’ in Antiquity 66  (1992), pp. 399-405, (p.401).

[7] Department for Culture Media and Sport: The Treasure Act 1996 Code of Practice (Revised) England and Wales, (2002) London: DCMS (pp.22).

[8] R. Bland: ‘The Treasure Act and the Proposals for the Voluntary Recording of All Archaeological Finds’, in The Museum Archaeologist 23 (Conference Proceedings) (1996), pp.3-18.


From Delphi to the Dodo: Finding Links Between Archaeology and Natural History

By Gemma Angel, on 1 October 2012

by Felicity Winkley






Initially, my response to the challenge of finding a link between my research and the zoological specimens in the Grant Museum was one of dread and panic. Such a thing could simply not be done – it would be impossible to engage a member of the public for long enough to travel the conversational distance from a dissected Thylacine to British archaeology. On closer inspection, however, I was to find that the museum which houses Grant’s collection of some 67,000 zoological specimens is not, in fact, dissimilar to those great anthropological collections that were also assembled during the 19th century. The shadowy corners and densely-packed glass cases are reminiscent, certainly, of those at the Pitt Rivers Museum in Oxford, where the shelves overflow with ethnological artefacts.

And yet the similarities go beyond the simply aesthetic. Both Robert Edmund Grant (1793-1874 – pictured left) and Lieutenant-General Augustus Pitt Rivers (1827-1900 – pictured below) were undoubtedly, if unconsciously, influenced by a long-established tradition of collecting in England, which since the 17th century had been a gentlemanly pursuit acceptable to the social elite [1]. Indeed, for ambitious scholars it was even a means of propelling oneself up the social ladder. Elias Ashmole (1617-1692) was the son of a saddler, but with a good eye and some wily investing he was able to accumulate a collection that, when bequeathed to Oxford University (along with its own custom-made premises), would provide a lasting legacy to maintain both the collection and his own prestige [1]. But Ashmole was only one of any number of ‘Antiquarians’, as these collectors were soon to become known; men who, for Sweet, “were important actors in that explosion of print and ideas, that thirst for knowledge and understanding which some have called the British Enlightenment” [2].

The rise of the antiquarian popularised the collection of all kinds of objects and artefacts, from coins and medals to maps and even fossils; the over-arching motivation was simply a thirst for information about the past, and particularly information that was not provided by the historical record. This lack of concern for the ‘what’ that was being studied often meant that focus was instead placed upon the ‘where’, so that authors would compile an in-depth study of the local parish or county – a regional framework which brought their work into obvious connection with natural historians compiling similar studies. The connection between antiquaries and natural historians was cemented further still by their agreement on epistemological models and a sympathetic “culture of inquiry”, according to Sweet [2].

In order to find a link between my own research and the Grant Museum collections, I determined to find out whether this undeniable spirit of discovery, which so connected antiquarians and natural historians during the 17th and 18th centuries, persisted into the 19th century too – and I was very happy to discover that it did. Whilst the methodology had been modernised into a recognisable early archaeology, and the investigative locations had moved from the local county to the more exotic, there was still an undeniable relationship between antiquarian and natural-historical research. Just as the history of the local parish had been a relative unknown several hundred years previously, by the 19th century researchers had begun travelling further afield to collect archaeological information alongside samples of foreign flora and fauna. And this is where Darwin comes in.

Charles Darwin (1809-1882) had studied under Robert Grant during the 1820s and was much influenced by his ideas; however, his focus was by no means limited to the comparative anatomical interest they both shared. Written records show that even later in his career, Darwin was contributing funding to voyages that would provide evidence for archaeological investigations as well as natural-historical studies. A trip to Borneo in 1878, funded in part by the Royal Society (then The Royal Society of London for Improving Natural Knowledge), had the archaeological aim of finding evidence for early human occupation, but plainly also had great implications for Darwin and his colleague Alfred Russel Wallace as a potential source for proving the evolution of anthropoid apes [3]. Wallace had already visited Borneo in 1855, where his observation of orangutans, native only to that island and neighbouring Sumatra, prompted the composition of the very paper that would inspire Darwin’s On the Origin of Species. Darwin pledged a sum of twenty pounds to the voyage [3]. Any discovery, whether made by an archaeologist, anatomist, collector or naturalist, was seen as a contribution to enlightenment. As testament to the limitless horizons of this quest for knowledge, signing off his letter, Darwin adds:

“I wish someone as energetic as yourself [John Evans] would organise an expedition to the triassic lacustrine beds in S. Africa, where the cliffs are said to be almost composed of bones.”

Evidently, he was already planning the next adventure! [3]


[1] Swann, M. (2001) Curiosities and Texts: The Culture of Collecting in Early Modern England Philadelphia: University of Pennsylvania Press

[2] Sweet, R. (2004) Antiquaries: The Discovery of the Past in Eighteenth-Century Britain London: Hambledon and London (pp. xiv)

[3] Sherratt, A. (2002) Darwin among the archaeologists: The John Evans nexus and the Borneo Caves Antiquity 76 pp.151-157

Dem Bones, Dem Bones, Dem Dry Bones … Excavating Memory, Digging up the Past

By Gemma Angel, on 16 July 2012

by Katie Donington





Above all, he must not be afraid to return again and again to the same matter; to scatter it as one scatters earth, to turn it over as one turns over soil. For the ‘matter itself’ is no more than the strata which yield their long-sought secrets only to the most meticulous investigation. That is to say, they yield those images that, severed from all earlier associations, reside as treasures in the sober rooms of our later insights – like torsos in a collector’s gallery.[1]

The Buried on Campus exhibition at the Grant Museum ran from April 23rd to July 13th 2012. Following the 2010 discovery of human remains beneath the Main Quad of UCL, research was undertaken to determine the reason for their presence. Forensic anatomist Wendy Birch and forensic anthropologist Christine King, members of the UCL Anatomy Lab, were able to date the bones, which proved to be over a hundred years old. The bones themselves also gave clues to the reason for their presence: several items had numbers written on them, and others displayed signs of medical incisions. This led the team to the conclusion that the bones represented a portion of the UCL Anatomy Collection which had been buried at some point after 1886.

The issue of displaying human remains in a museum of zoology was discussed by Jack Ashby, Grant Museum Manager in a recent blog post:

The whole topic of displaying human remains has to be considered carefully and handled sensitively… One of the questions we asked our visitors last term on a QRator iPad was “Should human and animal remains be treated any differently in museums like this?” and the majority of the responses were in favour of humans being displayed, with the sensible caveats of consent and sensitivity.[2]

The discovery and exhibition of human remains raises interesting questions about the relationship between archaeology, history, science, memory and identity. It also links into debates over the ethics of display in relation to human beings. Who were these people? Why did their bodies end up in an anatomy collection? Did they consent or were they compelled? Is it possible or desirable to attempt to retrieve or reconstruct the object as subject?

The case of the bones buried on campus reminds me of another example in which the physical act of excavation was transformed into an act of historical re-inscription. In 1991, workmen digging the foundations of a new federal building close to Wall Street uncovered the remains of 419 men, women and children. Archaeologists, historians and scientists were called in and they were able to identify the area as a 6.6 acre site used for the burial of free and enslaved Africans by examining maps from the seventeenth and eighteenth centuries.

The Maerschalck Map of 1754, showing the Negro Burial Grounds near the “Fresh Water” (the Collect Pond). Image © The African Burial Ground Project.






The bones offered specific information which helped to give a partial identity to the people interred. Using ‘skeletal biology’,[3] it was possible in some cases to pinpoint where in Africa individuals had come from – Congo, Ghana, Ashanti and Benin – as well as to reveal whether they had been transported via the Caribbean. Bone analysis spoke of the appalling conditions of slavery; fractured, broken, malformed and diseased bones articulated stories of unrelenting labour, nutritional deficiency and coercive violence.

Objects found inside some of the burials created a sense of the uniqueness of each person, as well as of the care taken by loved ones as they performed burial rituals. The scarcity of such items in most burials also spoke to the low social status of the majority of people interred at the site.

This pendant (image courtesy of the African Burial Ground Project) was recovered from burial 254, that of a child aged between 3½ and 5½ years old. It was found near the child’s jaw and may have been either an earring or part of a necklace. The objects and bones represented a visceral historic link to the African American community in New York. The sense of ownership the community felt towards this history, and towards the individuals who had emerged from the soil, led to active engagement in the project. In line with the wishes of the African American community, replicas were made of all original items before they were reinterred, along with all 419 ancestral remains, in a ceremony in 2003. A memorial and museum were also built on the site (see image below, courtesy of the African Burial Ground Project).

The emergence of the skeletons was interpreted by some as a literal rendering of the way in which America has been haunted by its relationship with slavery. As physical anthropologist Michael Blakey, who worked on the site, explained: ‘with the African Burial Ground we found ourselves standing with a community that wanted to know things that had been hidden from view, buried, about who we are and what this society has been.’[4]

The context of the two sites is of course very different. However, comparing them does raise questions about the uses of human remains and their relationship to history, memory and identity. The bones at UCL formed part of an anatomical teaching collection: a composite of individuals whose bodies somehow became the property of medical institutions. Such collections were often drawn from people on the margins of society – the poor, the criminal and the exoticised ‘others’ of empire.[5] Debates over the repatriation of human remains in museum collections highlight their importance to people’s sense of identity and history. Without family or community groups to claim the individuals discovered at UCL, it seems that they are destined to remain object rather than subject – ‘severed from all earlier associations… torsos in a collector’s gallery’.

Have your say – what do you think should happen to the bones at UCL?

[1] Walter Benjamin, ‘Excavation and Memory’, in Selected Writings, Vol. 2, Part 2 (1931–1934), ed. by Marcus Paul Bullock, Michael William Jennings, Howard Eiland, and Gary Smith (Cambridge, MA: Belknap Press of Harvard University Press, 2005), p. 576.

[2] http://blogs.ucl.ac.uk/museums/2012/04/24/buried-on-campus-has-opened/

[3] http://www.archaeology.org/online/interviews/blakey/

[4] http://www.archaeology.org/online/interviews/blakey/

[5] Sadiah Qureshi, ‘Displaying Sara Baartman, The Hottentot Venus’, History of Science, Volume 42 (2004), pp. 233–257.