
Movement Taster – Blockages in the system: health research in postwar Britain

By Kevin Guyan, on 19 May 2014


By Kevin Guyan and Ruth Blackburn

This taster is from a larger presentation, Blockages in the system: health research in postwar Britain, which forms part of the Student Engagers’ Movement event taking place at UCL on Friday 23 May. What follows is a sample of the interdisciplinary work by PhD students Kevin Guyan, Department of History, and Ruth Blackburn, Department of Primary Care and Population Health, linking their interests in 20th century British history and the health sciences. Movement will also relate these ideas to objects from UCL Collections, as well as giving attendees an audiovisual experience of travelling on a London Routemaster bus.

 

Bus driver and conductor © Transport for London


The links between good physical health and exercise were established only relatively recently. In the postwar decades there was particular interest in investigating heart disease: an increasingly common ailment whose causes were poorly understood at the time. Jerry Morris (1910-2009), Emeritus Professor of Public Health at the London School of Hygiene and Tropical Medicine and commonly referred to as the father of exercise epidemiology, was the first to establish that the frequency and severity of heart disease were reduced among workers who did more active jobs.

He made this discovery in the late 1940s by conducting an innovative and efficient ‘experiment’ that studied the behaviour and physical health indicators of several thousand London Transport employees, focusing particularly on health differences between bus drivers and conductors. The selection of the two study groups was critical to the success of the experiment: the drivers and conductors were very similar groups of people in most respects (e.g. age, socio-economic status and diet) but differed in the amount of physical activity they undertook whilst at work.

By studying differences in the rates of cardiovascular disease between these two groups, the ‘bus men study’ showed that the additional physical activity that bus conductors undertook whilst at work was associated with a 50 per cent reduction in heart disease. This finding was the first real evidence that being more active brought substantial health benefits, and it highlighted the importance of exercise as a public health intervention.
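The comparison at the heart of the study can be sketched as a simple rate calculation. The numbers below are purely hypothetical, chosen only to illustrate how a rate ratio of 0.5 corresponds to the reported 50 per cent reduction; they are not Morris’s actual data.

```python
# Illustrative sketch only: hypothetical counts, not data from Morris's study.
# The 'bus men study' compared rates of heart disease between sedentary
# drivers and physically active conductors.

def rate_per_1000(cases, person_years):
    """Incidence rate per 1,000 person-years of observation."""
    return 1000 * cases / person_years

# Hypothetical figures chosen so conductors' rate is half the drivers':
drivers_rate = rate_per_1000(cases=80, person_years=40_000)     # 2.0
conductors_rate = rate_per_1000(cases=40, person_years=40_000)  # 1.0

relative_risk = conductors_rate / drivers_rate
print(f"Relative risk (conductors vs drivers): {relative_risk:.2f}")
# A relative risk of 0.5 is what a '50 per cent reduction' means.
```

Because the two groups were otherwise so alike, a difference in rates of this kind could plausibly be attributed to the difference in physical activity rather than to confounding factors.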

It is now time to position Jerry Morris’s study within the wider context of postwar London, showing that his research on the health of London transport workers was a product of its time and is an interesting example of broader changes in how ‘experts’ were understanding and explaining human action and behaviour.

Morris addressing the 1954 World Conference of Cardiology in Washington DC © The Telegraph

The decades following the Second World War saw a widening of ‘expert knowledge’, particularly in fields linked to the physical and social health and well-being of citizens. The qualities esteemed in experts also shifted: from the predominance of highbrow cultures (for example, the humanities) to include masters of science, skill and technology. This period witnessed the rise of the scientific and technical expert.

The belief that experts were striving for a ‘New Jerusalem’, a utopian ideal removed from the realities of postwar austerity, often distracts discussions of British planning. However, there was undoubtedly a political dimension to these projects, reflecting the politics of the Left, Fabianism and the Labour Party. It is no coincidence that Morris was a Socialist who, throughout his life’s research, championed the need for state intervention to improve the welfare of the population. In his work the line between science and politics is often blurred, expressing the view that positivist forms of science work in tandem with socialist principles. In this political vision of a New Britain, the rational and modern nation would require the successful management of health and disease. Morris’s expert knowledge of epidemiology therefore positioned him as a central figure in this imagined future.

This interest in the political led to what is arguably the most interesting development in his work: his definition of the individual. Morris did not focus on moral deviancy or communities positioned on the edge of society; nor, in his ‘bus men study’, was his primary focus the influences of class or social situation.  Instead, his chief research interests were individual actions and ways of living, removed from their social and economic contexts.

The ‘bus men study’ differed from the approach of earlier scientists by moving the question of one’s likelihood of encountering disease away from social class or community and towards the activities that individuals perform (although, throughout his life’s work, Morris remained deeply interested in how socioeconomic factors shape those activities). Importantly, the study also acknowledged the fluid nature of modern life and the need to view subjects as ‘changing people’ operating in changing social environments. As experts grew more willing to challenge the influence of social class and instead consider the complex effects of social and biological relations, ‘ways of living’ emerged as a primary factor in the study of health and disease. The implication of this finding was groundbreaking: a call for the reform of everyday lifestyles. With this conclusion, Morris’s ‘bus men study’ should be viewed not only as a key text in epidemiology but also as part of a wider shift in 20th century Britain in the role of scientific expertise and definitions of the individual.

Health and the male body

Viruses of Mice and Men

By Gemma Angel, on 3 June 2013

by Sarah Savage

Recently in the Grant Museum, I had a most exciting 35-minute engagement with a mother and son visiting London from Jersey in the Channel Islands. Since her son was very interested in coming to UCL for undergraduate study, his mum thought the best idea would be to visit the campus and see all that UCL has to offer, including the museums on campus. I caught this family on their first stop on the UCL museums trail. After I had introduced myself and told the boy’s mother a little bit about the UCL student engagers group, she quickly asked what my research is specifically about. I told her that I am an historical epidemiologist specialising in the Spanish Influenza Pandemic of 1918-19 and the Encephalitis Lethargica Epidemic of 1917-1930. Her eyes grew quite wide and she replied that her son had been hoping to meet someone doing research like mine, to find out more about pandemics. Her main reference point for Spanish Influenza was that the character Edward Cullen from the Twilight films had died in the pandemic! Alas, I encounter that response quite often. If anything, Twilight put the ‘forgotten pandemic’ on the radar of the general population and teenage girls everywhere.[1]

Although previously I’ve mainly engaged in the Petrie Museum next to objects of everyday Egyptian life that relate to disease, I found that amongst the great preserved animals of medical colleges past, many fascinating connections to my research topic presented themselves in conversation with visitors. The display of parasitic worms, although admittedly horrifying, can be used as a tool to demonstrate how a virus inhabits and travels through the body. A gentleman visitor later in the afternoon stood in shock when confronted with the incredible size of some of the parasitic worms that are able to live in the human body.

The brave visitor from Jersey further engaged with me to discuss exactly how viruses spread through the body, mutate, and ‘disappear’ after an outbreak. I put disappear in quotation marks, since some viruses can simply become dormant in the body. During our conversation, she inquired as to what initially drew me to epidemics. “Most young students do not dream of studying viruses that wipe out entire populations for a living!” she told me. Oh, but I was that student, fascinated by the plague and by how tiny organisms could exist in our bodies. Once I’d told her more about my academic background in the United States, she asked me how common it is for historians to examine medicine or epidemics. Although UCL previously had a Centre for the History of Medicine for postgraduate researchers, we are now divided amongst different disciplines, including history, neurology, and psychology. As an historian specialising in epidemics, I explained to her that I am interested not only in the physical side of how epidemics work, but also in how societies react to an outbreak. During the 1918-19 Spanish Influenza outbreak, governments in England and the United States quarantined areas of cities and closed all government buildings. Although these measures prevented the spread of the virus to some extent, many citizens became infected before the required quarantines and closures. There are many links between government measures and public behaviour during the influenza epidemics of the early 20th century and the avian and swine flu outbreaks of the present day. The visitor mentioned the 2009 Swine Flu outbreak, and how the fear of coming into contact with an infected person affected daily life and decisions to frequent public spaces. By the end of our lengthy conversation, we had discussed everything from 20th century epidemics to life on the Channel Islands and life as a UCL student.
After her son had finished peering into every case in the Grant Museum, his mother expressed how enlightened and intellectually stimulated she felt to have discussed such a specialised topic with a UCL researcher, before moving on to encounter another member of our team at one of UCL’s other museum spaces. For me, as a new team member, this was a heartening conclusion to a very inspiring conversation, and I am thoroughly looking forward to future conversations with museum visitors from all over the world…

 


References

[1] Alfred W. Crosby: America’s Forgotten Pandemic: The Influenza of 1918, Cambridge: Cambridge University Press, (2003).

 

 

Toxic Tattoos: Mercury Based Pigments in the 19th and 20th Centuries

By Gemma Angel, on 4 February 2013

by Gemma Angel

In January this year, fellow Research Engager Sarah Chaney and I went to visit the UCL Geology Collections, to see if there were any mineral or rock samples in the collection that would fit in with our upcoming cross-collections exhibition, Foreign Bodies. Neither of us being geologists, we didn’t have particularly high expectations – how interesting can rocks be, really? As it turned out, the answer to that question is: very! We spent a fascinating hour in the Rock Room, where we quickly realised that there were many specimens that could be interpreted as foreign bodies in one way or another: the fossilised forms of plants and animals in rock; a rusted nail fused into a lump of lava; and perhaps the ultimate foreign body, a beautifully patterned fragment of meteorite.

One particular sample drew my attention – a surprisingly heavy lump of purplish-red rock with pretty pink and bright red veins (pictured below). When I asked if I could have a closer look, I was told that I would have to wear gloves to handle this piece of rock, as it was in fact toxic. The rock sample was cinnabar, the common ore of mercury. I am well aware of the toxicity of mercury from my own research – gloves are also required when I’m handling preserved tattooed human skins as part of my work at the Science Museum archives. It is speculated that one of the substances used in the dry-preservation process of human skin is mercuric sulphide, and many of the specimens betray the typical orange-red staining that this chemical causes. But there is another unexpected connection between mercury and my research. Cinnabar has been used to make bold red pigments since antiquity – and this pigment was also historically used in European tattooing.

Cinnabar ore and powder (8.5% Hg) sample, in the UCL Rock Room.
UCL Geology Collections.

 

Red mercuric sulphide occurs naturally, and has been manufactured for use as a pigment since the early Middle Ages. The pigment was referred to interchangeably as vermilion or cinnabar, although vermilion became the more commonly used term by the 17th century. [1] Vermilion is now the standard English name given to red artists’ pigment based on artificially produced mercuric sulphide. [2] Since the toxic effects of mercury were historically well known, it might seem strange that cinnabar was used in tattooing at all. In fact, mercury has been used in medicine to treat a range of ailments throughout history, most notably syphilis. In European tattooing, red pigments were not commonly used pre-20th century, with red inks tending to be used sparingly for small areas of embellishment.

Most cinnabar was mined in China, and by the mid 19th century Chinese vermilion was generally considered to be the purest form, producing a superior hue to the European variety. The cinnabar ore on which vermilion production depended was costly; as a result, European vermilion was often mixed with inexpensive materials including brick, orpiment, iron oxide, Persian red, iodine scarlet, and minium (red lead). Whilst these additives also produced a bright red pigment, their relative impermanence made such mixtures an inferior choice for artists’ colours.

This may explain why there is marked variability amongst preserved tattoos containing red inks, in terms of both permanence and vibrancy of colour: the more commonly available and cheaper European variety of vermilion used by some 19th century tattooists likely contained additives which reduced colour saturation, and made the pigment more susceptible to light-degradation over time. The Wellcome Collection possesses only a handful of tattoos containing red dye, and most of these are very degraded, such that little colour is visible. In these cases, the red has often faded far more dramatically than the black ink used in the same tattoos. However, there are one or two preserved specimens containing exceptionally bright ink which has lost none of its vivid red colour, an example of which can be seen below.

Tattooed human skin with bold red pigment, likely cinnabar.
Science Museum object no. A687. Photograph © Gemma Angel,
courtesy of the Science Museum London.

 

Since heavy mineral pigments do not generally lose saturation over time, it is possible to speculate that the bold red ink seen here very likely contains a high concentration of cinnabar, although it is impossible to know for certain without physical testing. There are, however, historical references to the use of mercury-based pigments in tattooing, most of which can be found in 20th century medical journals. As may be expected, these sources focus on the toxic effects of cinnabar-based tattoo pigments. In particular, mercury dermatitis in tattoos was sometimes reported during the early-mid 20th century, often many years after the tattoo was acquired by the patient.

In 1930, one such case appeared in the Archives of Dermatology and Syphilology, written by Dr. Paul Gerson Unna. His patient, a 63-year-old man who had been tattooed in his youth, suddenly developed itching, swelling and blistering in the red portions of the tattoo, following a mercury-based treatment for haemorrhoids. Three years later, Dr. D. B. Ballin reported a case in which a young male patient had developed itching, swelling and oozing in the red portions of a tattoo, 2 years after he had been tattooed. The patient was treated by the removal of the affected areas using a dermal punch, and the tattooed skin samples were sent for histological testing; however, the resultant scar tissue in the punched areas later developed the same reaction.

Photograph from Ballin’s 1933 report,
Cutaneous Hypersensitivity to Mercury from Tattooing
Caption reads: “Forearm of patient showing sensitivity
to mercury as a result of tattooing.”

Throughout the 1940s and 50s, cases of mercurial sensitivity and dermatitis in red tattoos appear sporadically in the medical literature, [4] though the apparent causes of the onset of symptoms vary. According to Keiller and Warin:

In some cases the use of mercurial applications elsewhere has led to the development of sensitivity and the red areas of the tattoo have subsequently become swollen. Other cases are reported in which the sensitivity has developed spontaneously. [5]

Interestingly, there were also reports of the apparent ‘positive’ effects of cinnabar tattoo pigments in cases of cutaneous syphilis during the early 20th century. It was observed that the red portions of a tattoo were seldom affected by syphilis sores – even in cases where adjacent areas of skin tattooed in black ink were engulfed by the infection.

 


References:

[1] R. D. Harley: Artists’ Pigments c.1600-1835: A Study in English Documentary Sources, (1982) Butterworth Scientific, p.125.

[2] Rutherford J. Gettens et al.: ‘Vermilion and Cinnabar’, in Studies in Conservation, Vol. 17, No. 2 (May 1972), p.45. Available on JSTOR: http://www.jstor.org/stable/1505572

[3] D. B. Ballin: ‘Cutaneous Hypersensitivity to Mercury From Tattooing’, in Archives of Dermatology and Syphilology, Vol. 27, No. 2 (February 1933), pp.292-294.

[4] See, for example: Howard I. Goldberg: ‘Mercurial Reaction in a Tattoo’, in Canadian Medical Association Journal, Vol. 80 (Feb. 1 1959), pp.203-204. Available online: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1830587/ ; also R. A. G. Lane et al.: ‘Mercurial Granuloma in a Tattoo’, in Canadian Medical Association Journal, Vol. 70 (May 1954), pp.546-548. Available online: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1825326/

[5] F. E. S. Keiller & R.P. Warin: ‘Mercury Dermatitis in a Tattoo: Treated With Dimercaprol’, in The British Medical Journal, Vol. 1, 5020 (Mar. 23, 1957), p.678. Available on JSTOR: http://www.jstor.org/stable/20361174

[6] For more on the history of tattooing and skin disease, see Gemma Angel: ‘Atavistic Marks and Risky Practices: the Tattoo in Medico-Legal Debate 1850~1950’, in J. Reinarz & K. Siena (eds.) A Medical History of Skin: Scratching The Surface, Pickering Chatto, (2013) pp.165-179.

Constantly Changing, Ever Evolving. HIV: Adapting to Change

By Gemma Angel, on 30 July 2012

by Alicia Thornton

As someone whose background is in the biological sciences, working in the Grant Museum of Zoology feels a little like coming home. Robert Edmond Grant collated the collection for the teaching of comparative anatomy and zoology, showing the differences and similarities between species. The collection is hugely diverse: from sponges and other marine invertebrates (in which Grant held a particular interest) to skeletons of primates, elephants, big cats and other mammals. The collection even has examples of some animals which are now extinct. Most notable are the quagga, a zebra-like creature from Southern Africa which was hunted to extinction in the wild around the time that Grant was teaching at UCL, and the thylacine or Tasmanian tiger, a marsupial native to Australia that was also hunted to extinction in the early 20th century. The museum also has bones from a Dodo, which died out as the result of a combination of factors, including hunting and predation by species introduced by European settlers.

 

For me, what the collection shows so well, through its diversity, is how every organism is adapted to the environment in which it lives. Each species or subspecies has evolved a unique way of living, and its biology gives a complete illustration of this: the shape of a jaw indicates the type of diet an animal has, and the dimensions of the limbs show how it may swing through trees or stalk prey in grassland. As I near the end of the first year of my PhD, I find that it is sometimes easy to get too engrossed in the details of my research and lose sight of what interested me about the topic in the first place. The Grant Museum serves as a perfect reminder. My own research is focused on infectious diseases, and specifically human immunodeficiency virus (HIV). The way in which the virus has evolved, and continues to evolve, has been one of the biggest challenges for scientists and medics working in HIV treatment, care and research.

Like all viruses, HIV requires a living cell in order to reproduce. During infection, the virus enters human cells and uses the machinery of the host cell to replicate, producing further infectious particles and releasing them to continue the infection cycle. In order to survive, the virus must find mechanisms by which it can evade the host immune response that is designed to eliminate it. In fact, HIV is perfectly suited to this, having the ability to infect cells which constitute a key component of the immune system, as well as cells which are out of the reach of the immune system.

Due to the nature of its replication, HIV evolves particularly fast and thus has the ability to survive changing environments.[1] A huge range of drugs to treat HIV has been developed since the beginning of the epidemic. These drugs have been a huge success, allowing people to live much healthier lives. Where they are readily available, they have dramatically reduced the number of people who develop AIDS[2] and increased the life expectancy of HIV-positive individuals to almost that of HIV-negative individuals.[3] Yet they never eliminate the virus completely, and as new drugs are introduced the virus rapidly evolves, giving rise to drug-resistant strains and making treatment even more challenging.[4]
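Why fast replication makes resistance so hard to avoid can be shown with a back-of-envelope calculation. The figures below are ballpark assumptions for illustration only (an error-prone polymerase and very high daily virion production), not results from a specific study:

```python
import math

# Back-of-envelope sketch with assumed, illustrative ballpark figures:
virions_per_day = 1e9          # assumed new virions produced per day
per_base_mutation_rate = 3e-5  # assumed errors per base per replication

# Expected number of new virions per day carrying a mutation at any one
# particular genome position (e.g. a site that confers drug resistance):
expected_mutants = virions_per_day * per_base_mutation_rate
print(f"Expected single-site mutants per day: {expected_mutants:.0f}")

# Probability that at least one such mutant arises on a given day,
# modelling mutant counts as Poisson-distributed:
p_at_least_one = 1 - math.exp(-expected_mutants)
print(f"P(resistant mutant arises in a day) = {p_at_least_one:.3f}")
```

Under these assumptions, mutants at any given site arise essentially every day, which is why single drugs fail quickly and combination therapy, which requires several simultaneous mutations, is needed to keep replication suppressed.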

The 19th International AIDS Conference was held in Washington DC, USA, in July 2012. This is the largest of the HIV conferences, with over 20,000 delegates; it takes place every two years and is attended by a mix of medics, nurses, public health professionals, advocacy groups and policy makers. Finding a cure for HIV was a key theme of the conference, and, as at all HIV conferences, a large volume of the work presented focused on the development of new drugs and drug combinations. Increasing the range of drugs available means that doctors are better able to combat the development of drug resistance and keep their patients’ viral replication suppressed.

The extent of the HIV epidemic is the result of a complex combination of social and scientific factors. However, there is no doubt that the virus’ ability to continually change and adapt to the environment in which it survives is one of the key reasons that the infection remains such a challenge to control.

 

[1] Rambaut A, Posada D, Crandall KA & Holmes EC. The Causes and Consequences of HIV Evolution. Nature Reviews Genetics 2004; 5(1): 52-61.

[2] Mocroft A, Ledergerber B, Katlama C, Kirk O, Reiss P, d’Arminio Monforte A, Knysz B, Dietrich M, Phillips AN, Lundgren JD; EuroSIDA study group. Decline in the AIDS and death rates in the EuroSIDA study: an observational study. Lancet. 2003; 362(9377):22-9.

[3] Nakagawa F, Lodwick RK, Smith CJ, Smith R, Cambiano V, Lundgren JD, Delpech V, Phillips AN.  Projected life expectancy of people with HIV according to timing of diagnosis. AIDS. 2012; 26(3):335-43.

[4] UK Collaborative Group on HIV Drug Resistance; UK CHIC Study Group.  Long-term probability of detecting drug-resistant HIV in treatment-naive patients initiating combination antiretroviral therapy.  Clin Infect Dis. 2010; 50(9):1275-85.