Archive for the 'Stacy Hackner' Category

Five years of research: a summary

By Stacy Hackner, on 3 July 2017



A PhD often feels like an unrewarding process. There are setbacks, data failures, non-significant results, and a general lack of the small successes that (I hear) make ordinary work life pleasant: “I got that promotion!” “Everyone applauded my presentation!” “I moved to the desk near the window!” PhD life is one giant slog until the end: a nerve-wracking, hours-long session where you’re grilled by the only people who know more about your field than you.

I survived.

Hopefully some of you have been following my research here, starting from astronauts and moving on to runners and foraging patterns. It all ties together, I promise. I recently gave a talk at the Engagers’ event “Materials & Objects” summarizing my research, which I can now tell you about in its full glory! I’m pleased to announce: I had significant findings.

The lowdown is that (as expected) there are differences in the shape of the tibia (shin bone) between nomads and farmers in Sudan. Why would this be? Well, if you’ve been following along, bones change shape in response to activity, particularly activities performed during adolescence. The major categories of tibial shape were those that indicated long-distance walking, doing activity in one place, and doing very little activity. Looking at the distribution, the majority of the nomadic males had the leg shape indicating long-distance walking, and some of the agricultural males had the long-distance shape and others had the staying-in-place shape. This makes sense considering the varying types of activity performed in an agricultural society, particularly one that also had herds to take care of: some individuals would be taking the herds up and down along the Nile to find grazing land while others stayed local, tending farms. While it’s unclear how often a nomadic group needs to move camp to be considered truly nomadic, in this case it seems like they were walking a lot – enough to compare their tibial shape to that of modern long-distance runners. These differences in food acquisition are culturally-adapted responses to differing environments: the nomads live in semi-arid grassland and can travel slowly over a large area to graze sheep and cattle, while the farmers are constrained to a narrow strip of fertile land along the Nile banks, limiting how many people can move around, and how often.

Perhaps the most important finding is the difference between males and females. In addition to looking at shape, I also conducted tests to show how strong each bone is regardless of shape, a result called the polar second moment of inertia (shortened to, unexpectedly, J). The males at each site had higher values for J – thus, stronger bones – than the females. However, the nomadic females had higher J values than some of the males at the agricultural sites! This is in spite of most females from both sites having the tibial shape indicating “not very much activity”. This shape may be the juvenile shape of the tibia, which females have retained into adulthood despite performing enough activity to give them higher strength values than male farmers. Similar results have actually been noted in studies examining different time periods – for instance, the transition from the Paleolithic to the Neolithic – which found much more similarity between females than between males. Researchers often interpret this as evidence of changing male roles but unchanging female roles, which strikes me as unlikely considering the time spans covered. I instead conclude that females build bone differently in adolescence, and perhaps there are subtleties in bone development that don’t reveal themselves as differences in shape. As females have lower levels of testosterone, which builds bone as well as muscle, they may have to work harder or longer than males to attain the same bone shape and strength. I’m using this to argue that the roles of women in archaeological societies – particularly nomadic ones – have gone unexamined in light of biological evidence.
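For readers curious what J actually measures: it quantifies a cross-section’s resistance to twisting and bending, summing area weighted by squared distance from the centre, so a bone with more cortex further from the axis scores higher. Here is a toy sketch modelling the tibial midshaft as a hollow ellipse – real studies compute J from cross-sectional images rather than a closed-form shape, and every dimension below is hypothetical, not measured data:

```python
import math

def second_moments_ellipse(a: float, b: float) -> tuple:
    """Second moments of area of a solid ellipse with semi-axes a and b,
    about its centroidal axes: I_x = pi*a*b^3/4, I_y = pi*a^3*b/4."""
    i_x = math.pi * a * b**3 / 4
    i_y = math.pi * a**3 * b / 4
    return i_x, i_y

def polar_moment_hollow_ellipse(a_out, b_out, a_in, b_in):
    """J = I_x + I_y for a ring-like elliptical section:
    outer ellipse minus the medullary (marrow) cavity."""
    ix_o, iy_o = second_moments_ellipse(a_out, b_out)
    ix_i, iy_i = second_moments_ellipse(a_in, b_in)
    return (ix_o - ix_i) + (iy_o - iy_i)

# Hypothetical midshaft dimensions in mm: a thicker cortical ring
# (more bone, further from the centre) yields a larger J.
j_active = polar_moment_hollow_ellipse(14, 11, 8, 6)
j_sedentary = polar_moment_hollow_ellipse(12, 10, 9, 7)
print(f"J (active): {j_active:.0f} mm^4, J (sedentary): {j_sedentary:.0f} mm^4")
```

The point of the toy numbers is only the ordering: the thicker-walled section resists loading better, which is why J can separate individuals by activity even when the outline shape looks similar.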

Of course, the best conclusion for a PhD is a call for more research, and mine is that we need to examine male and female adolescent athletes together to see when exactly shape change occurs. If we can pin down the amount of activity necessary for women to have bones as strong as those of their male peers, we can more accurately interpret the types of activities ancient people were performing without devaluing the work of women.

My examiners found all this enthralling, and I’m pleased to say I passed! The work of this woman is valued in the eyes of academe.

A History of Legs in 5 Objects

By Stacy Hackner, on 11 April 2017


My research focuses on the tibia, the largest bone in the lower leg. You probably know it as the shin bone, or the one that makes frequent contact with your coffee table resulting in lots of yelling and hopping around; that’s why footballers often wear shinguards. The intense pain is because the front of the tibia is a sharp crest that sits directly beneath the skin. There are a lot of leg-related objects in UCL Museums, so here’s a whirlwind tour of a few of them!

One of the few places you can see a human tibia is the Petrie’s pot burial. This skeleton from the site of Badari in Egypt has rather long tibiae, indicating that the individual was quite tall. The last estimation of his height was made in 1985, probably using regression equations based on the lengths of the tibia and femur (thigh bone): these indicated that he was almost 2 meters tall. However, the equations used in the 80s were based on a paper from 1958, which used bone lengths from Americans who died in the Korean War. We now know of two problems with this calculation: height is related to genetics and diet, and different populations have differing limb length ratios.

Pot burial from Hemamieh, near the village of Badari UC14856-8


The Americans born in the 1930s-40s had a vastly different diet from predynastic Egyptians, and the formulae were developed for (and thus work best when testing) white Americans. This is where limb length ratios come into play. Some people have short torsos and long legs, while others have long torsos and short legs. East Africans tend to have long legs and short torsos, so an equation developed for the inverse proportions would produce a height estimate much taller than the individual actually was! Another thing to notice is the cartilage around the knee joint. At this point in time, the Egyptians didn’t practice artificial mummification – but the dry conditions of the desert preserved some soft tissue in a process called natural mummification. Thus you can see the ligaments and muscles connecting the tibia to the patella (knee cap) and femur.
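The regression logic above can be sketched in a few lines. The coefficients here are illustrative placeholders, not the published 1958 values; the point is only that applying an equation calibrated on one population to a body with different limb-to-torso proportions shifts the estimate by several centimetres:

```python
def stature_from_tibia(tibia_cm: float, slope: float, intercept: float) -> float:
    """Generic linear stature regression: stature = slope * tibia + intercept (cm).
    Slope and intercept are population-specific calibration constants."""
    return slope * tibia_cm + intercept

tibia = 42.0  # hypothetical tibia length in cm

# Hypothetical coefficient pairs for two populations with different
# limb proportions (these numbers are made up for illustration):
estimate_a = stature_from_tibia(tibia, 2.5, 78.0)  # calibrated on shorter-legged group
estimate_b = stature_from_tibia(tibia, 2.4, 72.0)  # calibrated on longer-legged group

print(f"Equation A: {estimate_a:.1f} cm, Equation B: {estimate_b:.1f} cm")
```

With the same 42 cm tibia, the two hypothetical equations disagree by roughly 10 cm – exactly the kind of systematic error that made the Badari individual look nearly 2 meters tall when measured against the wrong reference population.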

The Petrie also has a collection of ancient shoes and sandals. I think the sandals are fascinating because they show a design that has obviously been perfected: the flip flop. One of my favorites is an Amarna-period child’s woven reed sandal featuring two straps which meet at a toe thong. The flip flop is a utilitarian design, ideal for keeping the foot cool in the heat and protecting the sole of the foot from sharp objects and hot ground surfaces. These are actually some of the earliest attested flip flops in the world, making their appearance in the 18th Dynasty (around 1300 BCE).

An Egyptian flip-flop. UC769.


Another shoe, this time from the site of Hawara, is a closed-toe right leather shoe. Dating to the Roman period, it shows that flip flops were not the only kind of footwear worn in Egypt. This shoe has evidence of wear and even some mud on the sole from the last time it was worn. It could have been worn with knitted wool socks, one of which has been preserved. However, the Petrie Collection’s sock has a separate big toe, potentially indicating that ancient Egyptians did not have a problem wearing socks and sandals together, a trend abhorrent to modern followers of fashion (except fans of Birkenstocks).


Ancient Egyptian shoe (UC28271).


Ancient Egyptian sock (UC16767).

The Grant Museum contains a huge number of legs, but only one set belonging to a human. For instructive purposes, I prefer to show visitors the tibiae of the tiger (Panthera tigris) on display in the southwest corner of the museum. These tibiae show a pronounced muscle attachment on the rear side where the soleus muscle connects to the bone. In bioarchaeology, we score this attachment on a scale of 1-5, where 5 indicates a really robust attachment. The more robust the attachment, the bigger the muscle; this means that either the individual had more testosterone, which increases muscle size, or they performed a large amount of activity using that muscle. (We wouldn’t score this one because it doesn’t belong to a human.) In humans, this could be walking, running, jumping, or squatting. Practice doing some of these to increase your soleal line attachment site!

The posterior tibia of a tiger.


Moving to the Art Museum, we can see legs from an aesthetic rather than practical perspective. A statue featuring an interesting leg posture is “Spinario, or Boy With Thorn”, a bronze produced by Sabatino de Angelis & Fils of Naples in the 19th century. It is a copy of a famous Greco-Roman bronze, one of very few that has not been lost (bronze was frequently melted down and reused). The position of the boy is rather interesting: he is seated with one foot on the ground and the opposite foot on his knee as he examines his sole to remove a thorn. This is a very human position, and shows the versatility of the joints of the hip, knee, and ankle. The hip is flexed, abducted, and outwardly rotated, the knee is flexed, and the ankle is inverted so the sole faces upwards. It’s rare for the leg to be shown in such a bent position in art, as statues usually depict humans standing or walking.

Spinario, or Boy With Thorn.


Bipedalism, or walking on two legs, is one of the traits we associate with being human. It’s rare in the animal world. Hopefully next time you look at a statue, slip on your flip flops, or go for a jog, you’ll think of all the work your tibiae are doing for you – and keep them out of the way of the coffee table.

(OK, I know that was six objects… but imagine the sock inside the shoe!)

Neanderthals: Not So Different?

By Josephine Mills, on 4 April 2017

Although opinions of Neanderthals are rapidly changing within academic research groups, their image as primitive, brutish, and violent can still be pervasive in the wider media. This division between Homo sapiens and Neanderthals has deep roots in Europe, exacerbated by the historic tendency to see Anatomically Modern Humans (H. sapiens) as the only behaviourally complex hominin species. The first recognised Neanderthal fossil was discovered in 1856 in the Neander Valley in Germany and rapidly prompted widespread debate in the scientific community as to where it fitted within the hominin lineage.

Much of this dialogue focused on perceived ‘primitive’ features of Neanderthal anatomy highlighting skeletal differences such as large protruding brow ridges, shorter stature, and barrel-like rib cages (if you visit the Grant Museum a selection of hominin crania are displayed showing some of these differences!). Discussion also focused on disparities in cognitive capacity and behaviour, quickly restricting Neanderthals to a species who favoured hunting over culture, and were more likely to display violence than altruism.

My PhD is based on unravelling aspects of Neanderthal landscape use and migration in the Western English Channel region during the Middle Palaeolithic, a period stretching from around 400,000 to 40,000 years ago. I am exploring behavioural complexities and reactions to environmental change through Neanderthal material culture, mainly via studying the movement of stone tools. Therefore it isn’t surprising that when I am engaging with visitors in the Grant Museum I gravitate towards the Neanderthal cast, which is a replica of the famous skull excavated from the site of La Chapelle-aux-Saints in France.


Figure 1: La Chapelle-aux-Saints Neanderthal cast held at the Grant Museum – note the pronounced brow ridge over the eye sockets. Although the mandible and teeth look very different from those of Anatomically Modern Humans, this is a cast taken from the skull of a particularly old individual who had advanced dental problems including gum disease! (Grant Museum, z2020)

Interestingly, the most common theme in conversations I have with visitors to the Grant Museum is the shared similarities, rather than differences, between Anatomically Modern Humans and Neanderthals. It seems that what captures our imaginations now is the significance of concepts previously thought of as unique to Homo sapiens being gradually recognised in association with Neanderthals. Important advances in dating and DNA analysis have shown that Neanderthals and Anatomically Modern Humans co-existed in Europe for thousands of years, with population groups meeting and interacting at different times. This is seen both in the archaeological record and in the sequencing of the Neanderthal genome, which indicates that most modern people living outside of Africa inherited around 1-4% of their DNA from Neanderthals. As I mentioned, after the discovery of the first Neanderthal fossils people weren’t too keen on any evidence that threatened to topple the shiny pedestal reserved for Homo sapiens; however, these advances in modern science have prompted a greater openness when exploring Neanderthal archaeology.

In order to investigate these aspects of complex behaviour, such as symbolism and art, we consider behaviours preserved in the archaeological record that appear to surpass the functional everyday need for survival. Recent discoveries have suggested that Neanderthals were making jewellery from eagle talons in Croatia and may have had more involvement than previously thought in the complex archaeological assemblages found at sites like Grotte du Renne. However, evidence of these behaviours in Neanderthal populations remains rare, and although this may relate to the historic viewpoint (it simply hasn’t been looked for…), empirically we just do not see it on the same scale.

Two examples I often refer to when discussing this at the museum are the recent discoveries of potential abstract art at Gorham’s Cave, Gibraltar, and the Neanderthal structures found underground at Bruniquel Cave. The abstract art (disclaimer: I understand that ‘art’ depends on the definition of the concept itself, but that’s for another blog post!) was found at Gorham’s Cave, a well-known Neanderthal occupation site. Often nicknamed ‘the hashtag’, it is a series of overlapping lines that appear to have been made deliberately by repeated cutting motions using a stone tool. The archaeologists who discovered the hashtag suggest that it was created around 40,000 years ago and that, as it was found underlying Neanderthal stone tools, it can definitely be attributed to them. They hail it as an example of Neanderthal abstract art that may even have represented a map, suggesting an elevated level of conceptual understanding. Whatever the marks represent, if they are associated with the Neanderthal occupation of the cave this is a behaviour that has not been observed elsewhere!


Figure 2: An image of the Neanderthal ‘hashtag’, made deliberately with repeated strokes of a stone tool on a raised podium in Gorham’s Cave, Gibraltar (Photo: Rodríguez-Vidal et al. 2014)

The other example that I mentioned is the site of Bruniquel Cave in southwest France, where unusual underground structures deliberately made from stalagmites have been dated via uranium series to 176,000 years old. This date firmly places the creation of the structures in a time when Neanderthals were the sole occupants of the region. The structures themselves are roughly circular and are composed of fragmented stalagmites (all of a similar length, c. 34 cm) with evidence of deliberately made fire. The function of these structures is not immediately obvious, but as there is a distinct lack of other archaeological material in the cave it is unlikely they were used for domestic purposes. Equally, their potential for functioning as shelters is unclear, as they are located a whopping 336 metres from the cave entrance in an area that would not have faced the elements.

For me this location deep within the cave presents one of the key implications for Neanderthal behaviour in that no natural light whatsoever would have reached the chamber! This indicates a degree of familiarity with the subterranean world and potentially hints at the symbolic or ritual significance of the cave. Whatever the purpose of the structures, the authors of the study conclude that they represent unique evidence of the use of space, which may reflect the complex social structures of the Neanderthals who built there.


Figure 3: A schematic of the circular structures made with stalagmites deep underground in Bruniquel Cave; the orange colouration shows the areas of deliberate burning (Photo: Jaubert et al. 2016)

The inferences that are made from these Neanderthal finds are carefully considered by both the researchers concerned and the general archaeological community, who disseminate the evidence and evaluate what archaeological information can be drawn from it. Overall, it is an undeniable privilege to be working at a time when the complexity of Neanderthals is recognised and the potential for art, symbolism, and other human characteristics is discussed!


Green, R.E., Krause, J., Briggs, A.W., Maricic, T., Stenzel, U., Kircher, M., Patterson, N., Li, H., Zhai, W., Fritz, M.H.Y. and Hansen, N.F. 2010. A draft sequence of the Neandertal genome. Science 328 (5979), 710-722

Jaubert, J., Verheyden, S., Genty, D., Soulier, M., Cheng, H., Blamart, D., Burlet, C., Camus, H., Delaby, S., Deldicque, D. and Edwards, R.L. 2016. Early Neanderthal constructions deep in Bruniquel Cave in southwestern France. Nature 534 (7605), 111-114

Radovčić, D., Sršen, A.O., Radovčić, J. and Frayer, D.W. 2015. Evidence for Neandertal jewelry: modified white-tailed eagle claws at Krapina. PLoS ONE 10 (3), e0119802.

Rodríguez-Vidal, J., d’Errico, F., Pacheco, F.G., Blasco, R., Rosell, J., Jennings, R.P., Queffelec, A., Finlayson, G., Fa, D.A., López, J.M.G. and Carrión, J.S., 2014. A rock engraving made by Neanderthals in Gibraltar. Proceedings of the National Academy of Sciences 111 (37), 13301-13306.

Sports in the Ancient World

By Stacy Hackner, on 24 January 2017




I’ve written previously here about the antiquity of running, which was one of the original sports at the ancient Greek Olympics, along with javelin, wrestling, and jumping. These games started around 776 BC in the town of Olympia. What came before, though? What other evidence do we have of ancient sports?

Running is probably the most ancient sport; it requires no gear (no matter how much shoe companies make you think you need it) and the distances are easily set: to that tree and back, to that mountain and back. Research into the origins of human locomotion focuses on changes to the foot, which needed to change from arboreal gripping to bipedal running and bearing the full weight of the body. A fossil foot of Ardipithecus ramidus, a hominin which lived 4.4 million years ago, features a stiffened midfoot and flexible toes capable of being extended to help push off at the end of a stance, but has the short big toe typical of great apes. Australopithecus sediba, which lived only 2 million years ago, had an arched foot like modern humans (at least not the flat-footed ones) but an ankle that turned inwards like apes. Clearly our feet didn’t evolve all the features of bipedal running at once, but rather at various intervals over the past 4-5 million years. Evidence of ancient humans’ distance running is equally ancient, as I wrote about previously. Researchers Bramble & Lieberman have posed the question “Why would early Homo run long distances when walking is easier, safer and less costly?” They posit that endurance running was key to obtaining the fatty tissue from meat, marrow, and brain necessary to fuel our absurdly large brains – thus linking long-distance running with improved cognition. In a similar vein, research into the neuroscience of running has found that it boosts mood, clarifies thinking, and decreases stress.

Feats of athleticism in ancient times were frequently dedicated to gods. Long before the Greek games, the Egyptians were running races at the sed-festival dedicated to the fertility god Min. A limestone wall block at the Petrie depicts King Senusret I (c. 1971 BCE) racing with an oar and hepet-tool. The Olympic Games, too, were originally dedicated to the gods of Olympus, but it appears that as time went on they became corrupted, coming to emphasize individual heroic athletes and even allowing commoners to compete. There were four races in the original Olympics: the stade (192m), 2 stades, 7-24 stades, and 2-4 stades in full hoplite armor. It should be mentioned that serious long-distance running, like the modern marathon, was not a part of the ancient games. The story of Pheidippides running from the battlefield at Marathon to announce the Greek victory in Athens is most likely fictional, although the first modern marathon in 1896 traced that 25-mile route. The modern distance of just over 26 miles was set at the 1908 London Olympics, when the route was laid out from Windsor Castle to the stadium so the race could finish in front of the royal box.

Limestone wall-block with sunk relief depiction, internally carefully modelled, showing King Senusret I with oar and hepet-tool, running the sed-festival race before the god Min. Now in five pieces rejoined, and some small fragments. Courtesy Petrie Museum.


Wrestling might be equally ancient. It’s basically a form of play-fighting with rules (or without rules, depending on the type – compare judo to Greco-Roman to WWF), and play-fighting can be seen not only in human children but in a variety of mammal species. In Olympic wrestling, the goal was to get one’s opponent to the ground without biting or grabbing genitals, but breaking fingers and dislocating bones were fair game. Some archaeologists have tried to blame Nubian bone shape – the basis of my thesis – on wrestling, for which the Nubians were famed. Another limestone relief in the Petrie shows two men wrestling in loincloths. Boxing is a similar fighting contest; original Olympic boxing required two men to fight until one was unconscious. Pankration brutally combined wrestling and boxing, but helpfully forbade eye-gouging. It may be possible to identify ancient boxers bioarchaeologically by examining patterns of nonlethal injuries. Some of these are depressions in the cranial vault (particularly towards the front and the left, presuming mostly right-handed opponents), facial fractures, nasal fractures, traumatic tooth loss, and fractures of the bones of the hand.

Crude limestone group, depicting two men wrestling. Traces of red loin cloth on one, and black on the other. Courtesy Petrie Museum.


Spear or javelin throwing is also attested in antiquity. Although we have evidence of predynastic flint points and dynastic iron spear tips, it’s unclear whether these were used for sport (how far one can throw) or for hunting. Actually, it’s unclear how the two became separate. Hunting was (and continues to be) a major sport – although not one with a clear winner as in racing or wrestling – and the only difference is that in javelin the target isn’t moving (or alive). In the past few years, research has been conducted into the antiquity of spear throwing. One study argues that Neanderthals had asymmetrical upper arm bones – the right was larger due to the muscular activity involved in repeatedly throwing a spear. Another study used electromyography of various activities to reject this hypothesis, arguing that the right arm was larger in the specific dimensions more associated with scraping hides. Spear throwing is attested bioarchaeologically in much later periods through a particular pathological pattern called “atlatl elbow”: use of a spear-throwing tool to increase spear velocity caused osteoarthritic degeneration of the elbow, but protected the shoulder.

Fragment of a copper alloy spear head from the Roman period. Courtesy Petrie Museum.


A final Olympic sport is chariot racing and riding. Horses were probably only domesticated around 5500 years ago in Eurasia, so horse sports are really quite new compared to running and throwing! It’s likely that horses were originally captured and domesticated for meat at least 1000 years before humans realized they could use them for transportation. The Olympic races were 4.5 miles around the track (without saddles or stirrups, as these developments had not yet reached Greece), and the chariot races were 9 miles with either 2 or 4 horses. Bioarchaeologists have noted signs of horseback riding around the ancient world: degenerative changes to the vertebrae and pelvis from bouncing, as well as enlargement of the hip socket (acetabulum) and increased contact area between the femur and pelvis where the two rub together. In all cases, more males than females had these changes, indicating that it was more common for men to ride horses.

Of course, there are many more sports that existed in the ancient world – other fighting games including gladiatorial combat, ritualized warfare, and games with balls and sticks (including the Mayan basketball-esque game purportedly played with human skulls). Often games were dedicated to gods, or resulted in the death of the loser(s). However, many of these, explored bioarchaeologically, would produce musculoskeletal changes and injury patterns similar to those discussed above. Many games have probably been lost to history. Considering the vast span of human activity, it’s likely sports of some kind have always existed, from the earliest foot races to the modern Olympic spectacle.

Stone ball, limestone; from a game. From Naqada Tomb 1503. Courtesy Petrie Museum.



Bramble, D.M. and Lieberman, D.E. 2004. Endurance running and the evolution of Homo. Nature 432(7015), pp. 345–352.

Carroll, S.C. 1988. Wrestling in Ancient Nubia. Journal of Sport History 15(2), pp. 121–137.

Larsen, C.S. 2015. Bioarchaeology: Interpreting Behavior from the Human Skeleton. Cambridge: Cambridge University Press.

Lieberman, D.E. 2012. Those feet in ancient times. Nature 483, pp. 550–551.

Martin, D.L. and Frayer, D.W. (eds.) 1997. Troubled Times: Violence and Warfare in the Past. Psychology Press.

Perrottet, T. 2004. The Naked Olympics: The True Story of the Ancient Games. Random House Publishing Group.


Normativity November: Defining the Archaeological Normal

By Stacy Hackner, on 23 November 2016

This post is part of QMUL’s Normativity November, a month exploring the concept of the normal in preparation for the exciting Being Human events ‘Emotions and Cancer’ on 22 November and ‘The Museum of the Normal’ on 24 November, and originally appeared on the QMUL History of Emotions Blog.



The history of archaeology in the late 19th and early 20th centuries can be read as the history of European men attempting to prove their perceived place in the world. At the time, western Europe had colonized much of the world, dividing up Africa, South America, and Oceania from which they could extract resources to further fund empires. Alongside this global spread was a sincere belief in the superiority of the rule of white men, which had grown from the Darwinian theory of evolution and the subsequent ideas of eugenics advanced by Darwin’s cousin Francis Galton: not only were white men the height of evolutionary and cultural progress, they were the epitome of thousands of years of cultural development which was superior to any other world culture. According to their belief, it was inevitable that Europeans should colonize the rest of the world. This was not only the normal way of life, but the only one that made sense.

In modern archaeology, we let the data speak for itself, trying not to impose our own ideas of normality and society onto ancient cultures. One hundred years ago, however, archaeology was used as a tool to prove European cultural superiority, and without the benefit of radiocarbon dating (invented in the 1940s) to identify which culture developed at what time, Victorian and Edwardian archaeologists were free to stratify ancient cultures in a way that supported their framework that most European = most advanced. “European-ness” was defined through craniometry, or the measurement and appearance of skulls, and similar measurements of the limbs. Normality was defined as the average British measurement, and any deviation from this normal immediately identified that individual as part of a lesser race (a term which modern anthropologists find highly problematic, as so much of what was previously called “race” is culture).

In my research into sites in Egypt and Sudan, I’ve encountered two sites that typify this shoehorning of archaeology to fit a Victorian ideal of European superiority. The first is an ancient Egyptian site called Naqada, excavated by Sir William Matthew Flinders Petrie in the 1890s. Petrie is considered the founder of modern, methodological archaeology because he invented typology – categorizing objects based on their similarity to each other. As an associate and friend of Galton and others in the eugenics circle, he applied the same principle to categorizing people (it’s likely that his excavations of human remains were requested by Galton to diversify his anthropometric collection). Naqada featured two main types of burials: one where the deceased were laid on their backs (supine) and one where the deceased were curled up on their side (flexed). Petrie called these “Egyptian” and “foreign” types, respectively. The grave goods (hand-made pottery, hairpins, fish-shaped slate palettes) found in the foreign tombs did not resemble any from his previous Egyptian excavations. The skeletons were so markedly different from the Egyptians – round, high skulls of the “Algerian” type, and tall and rugged – that he called them the “New Race”. Similarities, such as the burnt animal offerings found in the New Race tombs, present in Egyptian tombs as symbolic wall paintings, were obviously naïve imitations made by the immigrants. However, the progression of New Race pottery styles pointed to a lengthy stay in Egypt, which confused Petrie. Any protracted stay among the Egyptians must surely have led to trade: why then was there an absence of Egyptian trade goods? His conclusion was that the New Race were invading cannibals from a hot climate who had completely obliterated the local, peaceful Egyptian community between the Old and Middle Kingdoms.

Of course, with the advent of radiocarbon dating and a more discerning approach to cultural change, we now know that Petrie had it backwards. The New Race are actually a pre-Dynastic Egyptian culture (4800-3100 BC), who created permanent urban agricultural settlements after presumably thousands of years of being semi-nomadic alongside smaller agricultural centres. Petrie’s accusation of cannibalism is derived from remarks by Juvenal, a Roman poet writing centuries later. It also shows Petrie’s racism – of course these people from a “hot climate” erased the peaceful Egyptians, whose skulls bear more resemblance to Europeans. In actuality, Egyptian culture as we know it, with pyramids and chariots and mummification, developed from pre-Dynastic culture through very uninteresting centuries-long cultural change. Petrie’s own beliefs about the superiority of Europeans, typified by the Egyptians, allowed him to create a scientific-sounding argument that associated Africans with warlike invasion that halted cultural progression.

The second site in my research is Jebel Moya, located 250 km south of the Sudanese capital of Khartoum and excavated by Sir Henry Wellcome from 1911 to 1914. The site is a cemetery that appears to belong to a nomadic group, dating to the Meroitic period (3rd century BC-4th century AD). The site lacks the pottery indicative of the predominant Meroitic culture, so the skulls were used to determine racial affiliation. Meroe was seen as part of the lineage of ancient Egypt – despite being Sudanese, the Meroitic people adopted pyramid-building and other cultural markers inspired by the now-defunct Egyptian civilization. Because many more female skeletons were discovered at this site than male, one early hypothesis was that Jebel Moya was a pagan and “predatory” group that absorbed women from southern Sudanese tribes either by marriage or slavery and that, as Petrie put it, it was “not a source from which anything sprang, whether culture or tribes or customs”. Yet the skulls don’t show evidence of interbreeding, implying that the group wasn’t importing women, and later studies showed that many of the supposed female skeletons were actually those of young males. This is another instance of British anthropologists drawing conclusions about the ancient world using their framework of the British normal. If the Jebel Moyans weren’t associating themselves with the majority Egyptianized culture, they must be pagan (never mind that the Egyptians were pagan too!), polygamous, and lacking in any kind of transferable culture; in addition, they must have come from the south – that is, Africa.

Sir Henry Wellcome at the Jebel Moya excavations
Credit: Wellcome Library, London.

These sites were prominent excavations at the time, and the skeletons went on to be used in a number of arguments about race and relatedness. We now know – as the Victorian researchers reluctantly admitted – that ruggedness of the limbs is due to activity, and that a better way to examine relatedness is by studying teeth rather than skulls. However, the idea of Europeans as superior, heirs to millennia of culture that sprang from the Egyptians and was continued by the Greeks and Romans, was read into every archaeological discovery, bolstering the argument that European superiority was normal. Despite our focus on the scientific method and our attempts to keep our beliefs out of our research, I wonder what future archaeologists will find problematic about current archaeology.


Addison, F. 1949. Jebel Moya, Vol I: Text. London: Oxford University Press.

Baumgartel, E.J. 1970. Petrie’s Naqada Excavation: A Supplement. London: Bernard Quaritch.

Petrie, W.M.F. 1896. Naqada and Ballas. Warminster: Aris & Phillips.

Stress in Non-Human Animals

By Stacy Hackner, on 14 October 2015

This post is associated with our exhibit Stress: Approaches to the First World War, open October 12-November 20.

By Stacy Hackner


A pig’s skull may not be the first thing that comes to mind when thinking of stress. You may not think of non-human animals at all. However, humans are not the only animals that experience stress and related emotions. Many of the behaviors associated with human psychological disorders can be seen in domestic animals. Divorced from the dialogue of consciousness and cognition, animals have been seen exhibiting symptoms of depression, mourning, and anxiety. Wild animals in captivity ranging from elephants to wolves have exhibited signs of post-traumatic stress disorder; this is also an argument for why orcas in captivity suddenly turn violent. According to noted animal behaviorist Temple Grandin, animals that live in impoverished environments or are prevented from performing natural behaviors develop “stereotypic behaviors” such as rocking, pacing, biting the bars of their enclosure or themselves, and increased aggression. Many of these behaviors resemble those seen in people with a variety of psychological conditions, and (most interestingly) when the animals are given psychopharmaceuticals, the behaviors cease.

The First World War unleashed horrors on human soldiers, resulting in shell shock (now called PTSD). However, many animals were also used, including more than one million horses on the Allied side, mostly supplied by the colonies – but 900,000 did not return home. Mules and donkeys were also used for traction and transport, and dogs and pigeons were used as messengers. (Actually, the Belgians used dogs to pull small wagons.) Since the advent of canning in the 19th century, armies no longer had to herd their food along, but apparently the Gloucestershire Regiment brought along a dairy cow to provide fresh milk, although she may have served as a regimental mascot as well – some units kept dogs and cats too.

Horses in gas masks. Sadly, they often confused these with feed bags and proceeded to eat them. Credit: Great War Photos.

The RSPCA set up a fund for wounded war horses and operated field veterinary hospitals. They treated 2.5 million animals and returned 85% of those to duty. 484,143 British large animals were killed in combat, which is roughly half the number of British soldiers killed. Estimates place the total number of horses killed at around 8 million.
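Those figures can be sanity-checked with a quick calculation. Note that the figure of roughly 900,000 British military deaths is my own assumption for the comparison, not a number from this post:

```python
# Quick check of the animal-welfare figures quoted above.
treated = 2_500_000
returned = int(treated * 0.85)      # 85% of treated animals returned to duty

animals_killed = 484_143
soldiers_killed = 900_000           # commonly cited British total; an assumption here

print(f"Returned to duty: {returned:,}")   # 2,125,000
print(f"Animals killed as a fraction of soldiers killed: "
      f"{animals_killed / soldiers_killed:.2f}")   # ~0.54, i.e. roughly half
```

Under that assumed soldier figure, the “roughly half” comparison holds up.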

The horses in particular had a strong impact on the soldiers. Researcher Jane Flynn points out that a positive horse-rider relationship was imperative for both horse and rider on the battlefield. She cites a description of the painting Goodbye Old Man:

“Imagine the terror of the horse that once calmly delivered goods in quiet suburban streets as, standing hitched to a gun-carriage amid the wreck and ruin at the back of the firing line, he hears above and all around him the crash of bursting shells. He starts, sets his ears back, and trembles; in his wondering eyes is the light of fear. He knows nothing of duty, patriotism, glory, heroism, honour — but he does know that he is in danger.”

“Goodbye, Old Man” used in a poster. Credit: RSPCA.

Historical texts tend to consider horses and other animals used in war as equipment secondary to humans, and even the RSPCA only covers their physical health. Horses don’t only have relationships with their riders, but with the other horses nearby and with the environment. They can easily be frightened by loud noises, not to mention explosions, ground tremors from trench cave-ins, and the other things that terrified the humans sharing their situation. Many horse owners (many pet owners, in fact) argue that their horses have and express human-like emotions. Even if we can’t verify this scientifically, we can observe that horses experience fear, rage, confusion, gain, loss, happiness and sadness. Grandin argues that horses have the capacity to experience and express these simple emotions as well as recall and react to past experiences, but are unable to rationalize these emotions: they simply feel. It’s impossible to say whether that makes it more frightening for a horse or a human to wade through a field of dead comrades.

In Egypt, I took a horse ride around the pyramids. The trail led us through what turned out to be an area of the desert where stable owners execute their old horses, resulting in a swath of rotting corpses. I was shocked, and my horse displayed all the signs of fear: ears pinned back, wide eyes, tensed muscles. He recovered after we’d left the area, but I wondered what psychological impact having that experience day after day would cause. If horses are able to remember frightening experiences, they might be able to experience post-traumatic stress and be as shell-shocked as the returning soldiers. British soldiers reported that well-bred horses experienced more “shell-shock” than less-pedigreed stock – bolting, stampeding, and going berserk on the battlefield, all typical behaviors of horses under duress – but did not elaborate on the long-term consequences of this behavior. It would be interesting to explore accounts of horses that survived the war (and were returned to their original owners instead of being sold in Europe or slaughtered) to see whether they exhibited the stereotypic behaviors of stress and shell-shock just as human soldiers did.



Thanks to Anna Sarfaty for advice.

Animals in World War One. RSPCA.org.

Bekoff, Mark. Nov 29, 2011. Do wild animals suffer from PTSD and other psychological disorders? Psychology Today (online).

Flynn, Jane. 2012. Sense and sentimentality: a critical study of the influence of myth in portrayals of the soldier and horse during World War One. Critical Perspectives on Animals in Society: Conference Proceedings.

Grandin, Temple and Johnson, Catherine. 2005. Animals in Translation: Using the Mysteries of Autism to Decode Animal Behavior. New York: Scribner.

Shaw, Matthew. ND. Animals and war. British Library Online: World War One. 

Tucker, Spencer C. (ed.) 1996. The European Powers in the First World War: An Encyclopedia. New York: Garland.

Question of the Week:

Why can’t I touch museum objects?

By Stacy Hackner, on 19 August 2015

By Stacy Hackner

For humans, touch is an important way to gain information about an object. We can tell if something is soft or hard, heavy or light, smooth or rough or fluffy, pliable, sharp, irregular. During my master’s class on human dentition, I learned to identify teeth by touch to get around visual biases. We spend a significant amount of time touching objects in our environment, so we tend to get angry when museums tell us not to touch the objects.

I understand the desire to touch a piece of history. There’s a feeling of authenticity you get from holding something made by ancient people, and a sense of disappointment if you’re told the artifact is actually a replica. A British Museum visitor commented that “It was just lovely to know that you could pick something up that was authentic. It was just lovely to put your hands on something.” Another said “You do think sometimes when you’re looking in the cases, sometimes I’d like to pick that up and really look closely.”[i]

Even with “no touching” signs, museum visitors continue to touch things. Sometimes it’s by accident and sometimes they get a sneaky look on their faces, knowing they’re ignoring the signs; most often, they don’t realize what they’re doing is damaging the object.

Passive conservation of an object involves creating a stable environment so that the object can continue its “life” undisturbed. Sudden changes in humidity, temperature, and light can degrade the object. Touching it introduces dirt and oils from your skin onto its surface – the same way you’d leave fingerprints at a crime scene. The oils then attract dirt, which lingers, and acidic oils can also corrode metallic surfaces.

Yes, museum professionals handle objects for research purposes. However, we attempt to handle them as little as possible, with clean hands, and wear gloves when appropriate. This difference between museum staff and the public is also one of quantity: it’s ok if one person does it occasionally, but if everyone touches an object on every visit, the grime adds up. In 2009, the Ashmolean Museum in Oxford introduced a “touchometer”: an object made of various materials, with a counter recording how many times it has been touched. As you can see in the image below, after nearly 8 million touches, the left half of the object is severely degraded. The stone (centre) has developed a patina, the metal (bottom) has become shiny, and the cloth (left) has entirely worn away. (Also, people have scratched the frame.)

The Ashmolean’s Touchometer. Thanks to Mark Norman, the Ashmolean’s Head of Conservation.

If you walk through the British Museum’s gallery of Egyptian statuary, you can clearly see the areas on artifacts that people like to touch – the corners and public-facing edges of sarcophagi are darker than the wall-facing edges, and anything round and protruding tends to have a sheen that takes years of painstaking work to remove (hands, feet, and breasts of statues at human height are particularly vulnerable).

Schoolchildren touch a sarcophagus. Credit: Sebastian Meyer for The Telegraph.

The Grant Museum has specific objects that can be handled, and UCL Museums have object-based learning programs to introduce students and specific groups to handling museum objects. [ii] Many other museums have touch tables or touch sessions where you can feel the weight of hand axes or porcupine quills. Don’t despair if you’re asked not to touch something in a museum – we’re not angry, we just want to make sure the objects are preserved for future museum visitors to enjoy.



[i] Touching History: An evaluation of Hands On desks at The British Museum. 2008. Morris Hargreaves Mcintyre.

[ii] UCL Museums Touch & Wellbeing; Object-Based Learning

Conservation Advice – Handling Museum Objects. 2015. Southeast Museums.

Question of the Week: What is that object?

By Stacy Hackner, on 18 February 2015


By Stacy Hackner

One of the most frequent questions I’m asked isn’t about history or osteology. It’s “can you tell me what that thing is?” Many objects in the UCL Museums don’t have explanatory labels, so it’s understandable that visitors don’t know. However, it’s usually the case that we don’t know either! In archaeology, a number of excavated items are recorded with detailed descriptions of size, weight, and material, but no conclusion as to the purpose of the object. The Petrie houses a number of smooth pebbles from predynastic-era graves. When those people had the technology to make finely crafted pottery and intricately carved stone vessels, why be buried with a simple stone? The anthropological answer is that it served a ritualistic purpose; the humanistic answer is that somebody saw a smooth stone they liked, one that felt good to keep in the hand and rub, and it became important to them. I have stones that remained in coat pockets for years, getting smoother and smoother from my touch. It doesn’t necessarily have to be “totemic”. Other artifacts are confusing because they look like modern items. One visitor asked me about a clay object that looked like a cog.


UC18527. Image courtesy Petrie catalogue.

I had no idea what it was! We do have various sorts of cogs from ancient times, like waterwheels and the Antikythera mechanism, but in this case I thought I could solve the mystery quite easily. The object had a UC number, indicating its place in the Petrie catalogue. I looked it up on the web (the catalogue is open-access) and found out it’s actually an oil lamp: if you look closely, you can see traces of burning in the centre. The same goes for the Grant Museum’s catalogue – if you can find the specimen’s number, you can look up the name. Then it’s fun to Google the animal and see what it looked like with all its fur on – the tenrec is my favourite example. With only the skeleton it looks like any other small mammal, but when complete it’s like a cross between a hedgehog and a fiery caterpillar.

If you’d like to know what something is, please do ask! We may not know, but love to learn about all the amazing objects around us.

Why Talk to Engagers?

By Stacy Hackner, on 9 February 2015


by Stacy Hackner

Most of my engagements, regardless of the museum, are quite short. Visitors ask a few questions, I talk their ears off about bones and Nubians for about ten minutes, we banter, and then they leave. It’s not their fault or, I hope, mine; I know people have places to go and didn’t schedule in the requisite half hour an over-enthusiastic archaeologist needs to fully explain the intricacies of bone cells, astronauts exercising in space, perceptions of Egyptian hegemony, and working within the Human Tissue Act. Occasionally, though, I happen upon that rare individual or group who is both fascinated and unfettered by a strict schedule of museum tourism. These engagements can last anywhere from 30 minutes to two hours and, while intellectually exhausting, really accomplish the Student Engagers’ goal of learning just as much from the public as the public does from us. Once people learn that they are not bothering us and that our work is simply to discuss research with them, the museum turns into a salon of ideas and facts.

Beyond my PhD work, I consider myself a perpetual student. I’m always learning, reading, interpreting, evaluating data from numerous sources. I’m actually a little afraid of graduating and not being able to call myself a student, as I plan to continue learning despite my future job title, whatever it may be. Engaging with such fascinating people gives me hope that I’ll be able to continue learning from the general public (almost total strangers), because we live in a world full of really interesting people who are full of knowledge and eager to share. I’d like to describe a few of my best engagement sessions over the past two years.

In the Grant, I met a gentleman dressed in 100% vintage 60s rocker garb, including a fantastically feathered hat. After I introduced myself, he identified my American accent and asked where I was from. I said Chicago. “Chicago!” He exclaimed. “I love it there – I went to film Wayne’s World.” He worked in the music business and was a “band member” in the movie. We talked about the music scene and Chicago museums and the doubtful art of identifying skeletal remains of musicians by extra bone on their fingers. I feel like I got a unique insight into the London music scene in the 70s and 80s, which I love to listen to but have never considered ethnographically. A year later, I saw him again and he recognised me.

In the Petrie, I met an American woman with two children. The kids, aged 11 and 13, wanted to know all about mummies. I feed off others’ enthusiasm, especially kids’, since everything is new and amazing to them. The older boy had been studying ancient cultures in school and we had a great time talking about what counts as “history”. They were also interested in the hieroglyph charts, which, I explained, aren’t a one-to-one correspondence with English letters but also include ideographic symbols: for instance, if you spell the name Max, you need to put a little drawing of a boy next to it to show that it’s the boy called Max. Otherwise it could represent the maximum amount or, with a king sign, the Pharaoh Max. When the kids went on a Petrie trail, I learned that both their parents are film producers in town for a shoot.

Only a few weeks ago in the Grant, I met a couple who can only be described as vociferous enthusiasts of the natural world. The lady, originally from Tasmania, used to foster wombats and bandicoots; from her I learned of a number of marsupials (some of them macropods), all new to me, as well as the natural history of Tasmania. Did you know there are two types of koalas? And one of the smallest marsupials, the potoroo, was also unknown to me; there are four species, three of which are endangered and one of which is extinct. Her companion is a fish epidemiologist – probably the only career that gets more raised eyebrows outside the scientific community than bioarchaeologist – and studies infectious diseases of fish. What infects fish, I asked? Many things! Fish can get parasites and viruses just like humans; when I couldn’t think of an example of a waterborne bacterium, he pointed out cholera, the obvious one. This couple comes down to London from Scotland every few months to enjoy its dual pleasures of natural history museums and dim sum, which tie together quite nicely considering the unusual species found in both. I was totally enthralled, and felt like I’d just had a lesson in the best mixture of history and old-fashioned naturalism and bacteriology.


Thanks to TW for this picture of a baby wombat!

Really, museums are not just places for learning: they are the center of an exchange of ideas. Whether that involves looking at old things in new ways, new things in old ways, or opening someone’s eyes to a totally different perspective, I really appreciate my interactions with visitors. Please come down to the museums and talk to the Engagers – we’d love to be enlightened!

Did we evolve to run?

By Stacy Hackner, on 5 January 2015

By Stacy Hackner

A few years ago, spurred by my research on just how deleterious the sedentary lifestyle of a student can be on one’s health, I decided to start running. Slowly at first, then building up longer distances with greater efficiency. A few months ago, I ran a half-marathon. At the end, exhausted and depleted, I wondered: why can we do this? Why do we do this? What makes humans want to run ridiculous distances? A half-marathon isn’t even the start – there are people who do full marathons back-to-back, ultra-marathons of 50 miles or more, and occasionally one amazing individual like Zoe Romano, who surpassed all expectations and ran across the US and then ran the Tour de France.[i] Yes, ran is the correct verb – not cycled.

I’ve met so many people who tell me they can’t run. They’re too ungainly, their bums are wobbly, they’re worried about their knees, they’re too out of shape. Evolution argues otherwise. There are a number of researchers investigating the evolutionary trends for humans to be efficient runners, arguing that we are all biomechanically equipped to run (wobbly bums or not). If you have any question whether you can or cannot run, just check out the categories of races in the Paralympic Games. For example, the T-35 athletics classification is for athletes with impairments in their ability to control their muscles; in 2012, Iiuri Tsaruk set a world record for the 200m at 25.86s, which is only about 6 seconds off Bolt’s world record of 19.19 and about 4 seconds off Flo-Jo’s women’s record (doping allegations aside). 2012 also saw the world record for an athlete with visual impairment: Assia El Hannouni ran the 200m in 24.46.[ii] You try running that fast. Now try running with significant difficulty controlling your limbs or seeing. If you’re impressed, think about these athletes the next time you say you can’t run.
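To put those sprint times in perspective, here is a quick average-speed calculation. The 21.34 s figure for Florence Griffith Joyner’s 200 m record is my addition, not a number from this post, so treat it as an assumption:

```python
# Average speeds over 200 m for the records mentioned above.
times_s = {
    "Tsaruk (T-35)": 25.86,
    "El Hannouni (visually impaired)": 24.46,
    "Griffith Joyner (women's WR)": 21.34,  # assumed figure, not from the post
    "Bolt (men's WR)": 19.19,
}

for name, t in times_s.items():
    speed = 200 / t                       # metres per second
    print(f"{name}: {speed:.2f} m/s ({speed * 3.6:.1f} km/h)")
```

Even the “slowest” of these works out to nearly 28 km/h, sustained for a full 200 m.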


Paralympian Scott Rearden. Wikimedia Commons.

Let’s think about bipedalism for a bit. Which other animals walk on two legs besides us? Birds, for a start, although flight is usually the primary mode of transport for all except penguins and ostriches. On the ground, birds are more likely to hop quickly than to walk or run. Kangaroos also hop. Apes are able to walk bipedally, but normally use their arms as well. Cockroaches and lizards can get some speed over short distances by running on their back legs. However, humans are different: we always walk on two legs, keep the trunk erect rather than bending forward as apes do, keep the entire body relatively still, and use less energy thanks to elastic energy stored in the tendons during the gait.[iii] Apparently we can group our species of strange hairless apes into the category “really weird sorts of locomotion” along with kangaroos and ostriches.

Following this logic, Lieberman et al point out that a human could be bested in a fight with a chimp on pure strength and agility, can easily be outrun by a horse or a cheetah in a 100m race, and has no claws or sharp teeth: “we are weak, slow, and awkward creatures.”[iv] We do have two strokes in our favor, though – enhanced cognitive capabilities and the ability to run really long distances. Our being awkwardly bipedal naked apes actually helps more than one would think. First, bipedalism decouples breathing from stride. Imagine a quadruped running – as the legs come together in a gallop, the back arches and forces the lungs to exhale like a bellows. Since humans are upright, the motion of our legs doesn’t necessarily affect our breathing pattern. Second, we sweat in order to cool down during physical exertion. (In particular, I sweat loads.) Panting is the most effective way for a hairy animal to cool down, as hair or fur traps sweat and doesn’t allow for effective convection (imagine standing in a cool breeze while covered in sweat – this doesn’t work for a dog). But it’s impossible to pant while running. So not only are humans able to regulate breathing at speed, but we can cool down without stopping for breath.

From a purely skeletal perspective, there is more evidence for the evolution of running. Human heads are stabilized via the nuchal ligament in the neck, which is present only in species that run (and some with particularly large heads), and we have a complex vestibular system that becomes immediately activated to ensure stability while running. The insertion on the calcaneus (heel bone) for the Achilles tendon is long in humans, increasing the spring action of the Achilles.[v] Humans have relatively long legs and a huge gluteus maximus muscle (the source of the wobbly bum). All of these changes are seen in Homo erectus, which evolved 1.9 million years ago.[vi]

H. erectus skeleton with adaptations for running (r) and walking (w). From Lieberman 2010.

The evolutionary explanation for this is the concept of endurance or persistence hunting. In a hot climate, ancient Homo could theoretically run an animal to death by inducing hyperthermia. This is also where we come full circle and bring in the cognitive capability for group work. A single individual can’t chase an antelope until it expires from heat stroke, because it’ll keep retreating into the herd and the herd will scatter – but a team of persistence hunters can. If persistence hunting is how humans (or other Homo species) evolved to be great at long-distance running, it may also be why humans developed larger brains: meat provided the caloric surplus needed to nourish the great energy-suck that is the brain. However, persistence hunting is a skill that mostly went by the wayside as soon as projectile weapons (arrowheads and spears) were invented, possibly around 300,000 years ago. Why? Because humans, due to our large brains, are very inventive, but also very lazy. Any expenditure of energy must be made up for by calories consumed later, at least in a hunting and gathering environment – so less energy output means less energy input; a metabolic balance. Thus we have the reason why humans can run, but also why we don’t really want to. (As an aside, some groups such as the Kalahari Bushmen practiced persistence hunting until recently, despite having projectile weapon technology, probably because of skill traditions and retained cultural practices. Humans are always confounding like that.)
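The calorie logic of persistence hunting can be sketched with some rough numbers. Every figure below is a purely illustrative assumption of mine, not data from this post:

```python
# Illustrative energy budget for a persistence hunt.
# Every number here is a rough assumption for the sake of the example.
hunter_mass_kg = 60
chase_distance_km = 30
run_cost_kcal = 1.0 * hunter_mass_kg * chase_distance_km  # ~1 kcal per kg per km of running

edible_meat_kg = 15                         # from one mid-sized antelope
meat_energy_kcal = 1200 * edible_meat_kg    # lean game meat, ~1200 kcal/kg

hunters = 5
net_per_hunter = meat_energy_kcal / hunters - run_cost_kcal

print(f"Spent per hunter:  {run_cost_kcal:.0f} kcal")               # 1800 kcal
print(f"Gained per hunter: {meat_energy_kcal / hunters:.0f} kcal")  # 3600 kcal
print(f"Net per hunter:    {net_per_hunter:.0f} kcal")              # 1800 kcal surplus
```

Under these assumptions the hunt pays for itself even when the carcass is split five ways; the same balance tips against running once projectile weapons offer cheaper calories.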

Which brings up another point: gathering. As I’ve written before, contemporary hunter-gatherers like the Hadza rely much more on gathering than hunting. Additionally, it is possible that the first meat eaten by Homo species was scavenged rather than hunted. There is no such evolutionary argument as endurance gathering. If ancient humans spent much more time gathering, why would we evolve these particular running mechanisms? As with many queries into human evolution, these questions have yet to be answered. Either way, it’s clear that humans have a unique ability. Your wobbly bum is, in fact, the key to your running. Another remaining question is why we still have the desire to continue running these ridiculous distances – a topic for a future post, perhaps.


[i] http://www.zoegoesrunning.com

[ii] Check out all the records at http://www.paralympic.org/results/historical

[iii] Alexander, RM. Bipedal Animals, and their differences from humans. J Anat, May 2004: 204(5), 321-330.

[iv] Lieberman, DE, Bramble, DM, Raichlen, DA, Shea, JJ. 2009. Brains, Brawn, and the Evolution of Human Endurance Running Capabilities. In The First Humans – Origins and Early Evolution of the Genus Homo (Grine, FE, Fleagle, JG, Leakey, RE, eds.) New York: Springer, pp 77-98.

[v] Raichlen, DA, Armstrong, H, Lieberman, DE. 2011. Calcaneus length determines running economy: implications for endurance running performance in modern humans and Neandertals. J Human Evol 60(3): 299-308.

[vi] Lieberman, DE. 2010. Four Legs Good, Two Legs Fortuitous: Brains, Brawn, and the Evolution of Human Bipedalism. In In the Light of Evolution (Jonathan B Losos, ed.) Greenwood Village, CO: Roberts & Co, pp 55-71.