
Changing Perspectives in Conservation

By Claire Asher, on 18 December 2014

Our views of the importance of nature and our place within it have changed dramatically over the last century, and the prevailing paradigm has a profound influence on conservation, from the science that is conducted to the policies that are enacted. In a recent perspectives piece for Science, GEE’s Professor Georgina Mace considered the impact of these shifting perspectives on conservation practice.

Before the late 1960s, conservation thinking was largely focussed on the idea that nature is best left to its own devices. This ‘nature for itself‘ framework centred on the value of wilderness and unaltered natural reserves. The viewpoint stemmed from ecological theory and research; however, by the 1970s it had become apparent that human activities were having severe and worsening impacts on species, and that this framework alone was not enough. This led to a shift in focus towards the threats posed to species by human activities and how to reduce these impacts, a ‘nature despite people‘ approach to conservation. This is the paradigm of protected areas and quotas, designed to reduce threats and ensure long-term sustainability.

Changing views of nature and conservation, Mace (2014)

By the 1990s, people had begun to appreciate the many and varied roles that healthy ecosystems play in human well-being; ecosystem services are crucial to providing the clean water, air, food, minerals and raw materials that sustain human activities. Shifting towards a more holistic, whole-ecosystem viewpoint that attempted to place economic valuations on the services nature provides, conservation thinking entered a ‘nature for people‘ perspective. Within this framework, conservationists began to consider new metrics, such as the minimum viable population size of species and ecosystems, and became concerned with ensuring sustainable harvesting and exploitation. In the last few years, this view has again shifted slightly, this time to a ‘people and nature‘ perspective that values long-term harmonious and sustainable relationships between humans and nature, and which includes more abstract benefits to humans.

Changing conservation paradigms can have a major impact on how we design conservation interventions and what metrics we monitor to assess their success. Standard metrics of conservation, such as the IUCN classification systems, can be easily applied to both a ‘nature for itself’ and a ‘nature despite people’ framework. In contrast, a more economic approach to conservation requires valuations of ecosystems and the services they provide, which is far more complex to measure and calculate. Even more difficult is measuring the non-economic benefits to human well-being that are provided by nature. The recent focus on these abstract benefits may make the success of conservation interventions more difficult to assess under this framework.

The scientific tools, theory and techniques available to conservation scientists have not always kept pace with changing conservation ideologies, and differing perspectives can lead to friction between scientists and policy-makers. In the long term, a framework that recognises all of these viewpoints may be the most effective in directing and appraising conservation management. Certainly, greater stability in the way in which we view our place in nature would afford science the opportunity to catch up and develop effective, empirical metrics that can be meaningfully applied to conservation.

Original Article:

Science

Function Over Form:
Phenotypic Integration and the Evolution of the Mammalian Skull

By Claire Asher, on 8 December 2014

Our bodies are more than just a collection of independent parts – they are complex, integrated systems that rely upon precise coordination in order to function properly. In order for a leg to function as a leg, the bones, muscles, ligaments, nerves and blood vessels must all work together as an integrated whole. This concept, known as phenotypic integration, is a pervasive characteristic of living organisms, and recent research in GEE suggests that it may have a profound influence on the direction and magnitude of evolutionary change.

Phenotypic integration explains how multiple traits, encoded by hundreds of different genes, can evolve and develop together such that the functional unit (a leg, an eye, the circulatory system) fulfils its desired role. Phenotypic integration could be complete – every trait is interrelated and could show correlated evolution. However, theoretical and empirical data suggest that it is more commonly modular, with strong phenotypic integration within functional modules. This modularity represents a compromise between a total lack of trait coordination (which would allow evolution to break down functional phenotypic units) and the evolutionary inflexibility of complete integration. Understanding phenotypic integration and its consequences is therefore important if we are to understand how complex phenotypes respond to natural selection.

Functional modules in mammals, Goswami et al (2014)

It is thought that phenotypic integration is likely to constrain evolution and render certain phenotypes impossible if their evolution would require even temporary disintegration of a functional module. However, integration may also facilitate evolution by coordinating the responses of traits within a functional unit. Recent research by GEE academic Dr Anjali Goswami and colleagues sought to understand the evolutionary implications of phenotypic integration in mammals.

Expanding on existing mathematical models, and applying them to data from 1635 skulls from nearly 100 mammal species including placental mammals, marsupials and monotremes, Dr Goswami investigated the effect of phenotypic integration on evolvability and respondability to natural selection. Comparing a model with two functional modules in the mammalian skull against a model with six, the authors found greater support for the larger number of functional modules. Monotremes, whose skulls may be subject to different selection pressures due to their unusual life history, did not fit this pattern and may have undergone changes in cranial modularity during the early evolution of mammals. Compared with random simulations, real mammal skulls tended to be either more or less disparate than expected, suggesting that phenotypic integration may both constrain and facilitate evolution under different circumstances. The authors report a strong influence of phenotypic integration on both the magnitude and trajectory of evolutionary responses to selection, although they found no evidence that it influences the speed of evolution.
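
To make the evolvability and respondability metrics a little more concrete, here is a minimal numpy sketch in the spirit of standard quantitative-genetics measures; it is not the authors’ actual analysis, and the covariance matrix and selection gradients below are made-up illustrations.

```python
import numpy as np

# Hypothetical trait covariance matrix (G) for four skull traits.
# Strong covariance within traits 0-1 and within traits 2-3 mimics two integrated modules.
G = np.array([
    [1.0, 0.8, 0.1, 0.1],
    [0.8, 1.0, 0.1, 0.1],
    [0.1, 0.1, 1.0, 0.8],
    [0.1, 0.1, 0.8, 1.0],
])

def evolvability(G, beta):
    """Expected response in the direction of selection, per unit selection strength."""
    beta = beta / np.linalg.norm(beta)
    return beta @ G @ beta

def respondability(G, beta):
    """Total magnitude of the response to selection, regardless of its direction."""
    beta = beta / np.linalg.norm(beta)
    return np.linalg.norm(G @ beta)

# Selection aligned with a module (traits 0 and 1 pulled together) versus selection
# that tries to pull the two traits of a module in opposite directions.
with_module = np.array([1.0, 1.0, 0.0, 0.0])
against_module = np.array([1.0, -1.0, 0.0, 0.0])

for name, beta in [("with module", with_module), ("against module", against_module)]:
    print(name, round(evolvability(G, beta), 3), round(respondability(G, beta), 3))
```

With these toy numbers, selection aligned with a module produces a much larger response than selection acting against it, which is the sense in which integration can both facilitate and constrain evolution.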

Thus, phenotypic integration between functional modules appears to have a profound impact on the direction and extent of evolutionary change, and may tend to favour convergent evolution of modules that perform the same function (e.g. bird and bat wings for powered flight), by forcing individuals down certain evolutionary trajectories. The influence of phenotypic integration on the speed, direction and magnitude of evolution has important implications for the study of evolution, particularly when analysing fossil remains, since it can make estimates of the timing of evolutionary events more difficult. Failing to incorporate functional modules into models of evolution will likely reduce their accuracy and could produce erroneous results.

Phenotypic integration is what holds together functional units within an organism as a whole, in the face of natural selection. Modularity enables traits to evolve independently when their functions are not strongly interdependent, and prevents evolution from disintegrating functional units. Through these actions, phenotypic integration can constrain or direct evolution in ways that might not be predicted based on analyses of traits individually. This can have important impacts upon the speed, magnitude and direction of evolution, and may tend to favour convergence.

Original Article:

Global Environmental Change


This research was made possible by support from the Natural Environment Research Council (NERC), and the National Science Foundation (NSF).

The Best of Both Worlds:
Planning for Ecosystem Win-Wins

By Claire Asher, on 16 November 2014

The normal and healthy functioning of ecosystems is not only important for conserving biodiversity, it is of utmost importance for human well-being as well. Ecosystems provide us with a wealth of valuable ecosystem services, from food to clean water and fuel, without which our societies would crumble. However, it is rare that only a single person, group or organisation places demands on any given ecosystem service, and in many cases multiple stakeholders compete over the use of the natural world. In these cases, although trade-offs are common, win-win scenarios are also possible, and recent research by GEE academics investigates how we can achieve these win-wins in our use of ecosystem services.

Ecosystem services depend upon the ecological communities that produce them and are rarely the product of a single species in isolation. Instead, ecosystem services are provided by the complex interaction of multiple species within a particular ecological community. A great deal of research interest in recent years has focussed on ensuring we maintain ecosystem services into the long term; however, pressure on ecosystem services worldwide is likely to increase as human demands on natural resources soar. Ecosystem services are influenced by complex ecosystem feedback relationships and food-web dynamics that are still relatively poorly understood, and increased pressures on ecosystems may lead to unexpected consequences. Although economic signals respond rapidly to global and national changes, ecosystem services are thought to lag behind, often by decades, making it difficult to predict and fully understand how our actions will influence the availability of crucial services in the future.

Trade-offs in the use of ecosystem services occur when the provision of one ecosystem service is reduced by increased use of another, or when one stakeholder takes more of an ES at the expense of other stakeholders. However, this needn’t be the case – in some scenarios it is possible to achieve win-win outcomes, preserving ecosystem services and providing stakeholders the resources they need. Although attractive, win-win scenarios may be difficult to achieve without carefully planned interventions, and recent research from GEE indicates they are not as common as we might like.

In a comprehensive meta-analysis of ecosystem services case studies from 2000 to 2013, GEE academics Prof Georgina Mace and Dr Caroline Howe show that trade-offs are far more common than win-win scenarios. Across 92 studies covering over 200 recorded trade-offs or synergies in the use of ecosystem services, trade-offs were three times more common than win-wins. The authors identified a number of factors that tended to lead to trade-offs rather than synergies. In particular, if one or more of the stakeholders has a private interest in the natural resources available, trade-offs are much more likely – 81% of such cases resulted in trade-offs. Furthermore, trade-offs were far more common when the ecosystem services in question were ‘provisioning’ in nature – products we directly harvest from nature such as food, timber, water, minerals and energy. Win-wins are more common when regulating (e.g. nutrient cycling and water purification) or cultural (e.g. spiritual or historical value) ecosystem services are in question. In the case of trade-offs, there were also factors that predicted who the ‘winners’ would be – winners were three times more likely to hold a private interest in the natural resource in question, and tended to be wealthier than losing stakeholders. Overall, there was no generalisable context that predicted win-win scenarios, suggesting that although trade-off indicators may be useful in strategic planning, the outcome of our use of ecosystem services is not inevitable, and win-wins are possible.
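
As a rough illustration of how case studies of this kind can be tallied (this is not the authors’ analysis; the column names and the handful of rows are hypothetical), a coded dataset could be cross-tabulated like this:

```python
import pandas as pd

# Hypothetical coding of ecosystem-service case studies: one row per case,
# with the outcome (trade-off vs win-win) and whether any stakeholder held
# a private interest in the resource.
cases = pd.DataFrame({
    "outcome": ["trade-off", "trade-off", "win-win", "trade-off", "win-win", "trade-off"],
    "private_interest": [True, True, False, True, False, False],
    "service_type": ["provisioning", "provisioning", "cultural",
                     "provisioning", "regulating", "provisioning"],
})

# Overall ratio of trade-offs to win-wins.
print(cases["outcome"].value_counts())

# Proportion of trade-offs among cases with and without a private interest,
# analogous to the kind of context indicator discussed above.
by_interest = pd.crosstab(cases["private_interest"], cases["outcome"], normalize="index")
print(by_interest)
```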

They also identified major gaps in the literature that need to be addressed if we are to gain a better understanding of how win-win scenarios may be possible in human use of ecosystems. In particular, case studies are currently only available for a relatively limited geographic distribution, and tend to focus on provisioning services. Thus, the lower occurrence of trade-offs for regulating and cultural ecosystem services may be in part a reflection of a paucity of data on these types of services. Finally, relatively few studies have attempted to explore the link between trade-offs and synergies in ecosystem services and the ultimate effect on human well-being.

Understanding how and why trade-offs and synergies occur in our use of ecosystem services will be valuable in planning for win-win scenarios from the outset. Planning of this kind may be necessary if we are to achieve and maintain balance in our use of the natural world in the future.

Original Article:


This research was made possible by support from the Ecosystem Services for Poverty Alleviation (ESPA) programme, which is funded by the Natural Environment Research Council (NERC), the Economic and Social Research Council (ESRC), and the UK Department for International Development (DFID).

Life Aquatic:
Diversity and Endemism in Freshwater Ecosystems

By Claire Asher, on 6 November 2014

Freshwater ecosystems are ecologically important, providing a home to hundreds of thousands of species and offering us vital ecosystem services. However, many freshwater species are currently threatened by habitat loss, pollution, disease and invasive species. Recent research from GEE indicates that freshwater species are at greater risk of extinction than terrestrial species. Using data on over 7000 freshwater species across the world, GEE researchers also show a lack of correlation between patterns of species richness across different freshwater groups, suggesting that biodiversity metrics must be carefully selected to inform conservation priorities.

Freshwater ecosystems are of great conservation importance, estimated to provide habitat for over 125,000 species of plant and animal, as well as crucial ecosystem services such as flood protection, food, water filtration and carbon sequestration. However, many freshwater species are threatened and in decline. Freshwater ecosystems are highly connected, meaning that habitat fragmentation can have serious implications for species, while pressures such as pollution, invasive species and disease can be easily transmitted between different freshwater habitats. Recent work by GEE academic Dr Ben Collen, in collaboration with academics from the Institute of Zoology, investigated the global patterns of freshwater diversity and endemism using a new global-level dataset including over 7000 freshwater mammals, amphibians, reptiles, fishes, crabs and crayfish. Many freshwater species occupy quite small ranges, and the authors were also interested in the extent to which species richness and the distribution of threatened species correlated between taxonomic groups and geographical areas.


The study showed that almost a third of all freshwater species considered are threatened with extinction, with remarkably little large-scale geographical variation in threats. Freshwater diversity is highest in the Amazon basin, largely driven by extremely high diversity of amphibians in this region. Other important regions for freshwater biodiversity include the south-eastern USA, West Africa, the Ganges and Mekong basins, and areas of Malaysia and Indonesia. However, there was no consistent geographical pattern of species richness in freshwater ecosystems.

Freshwater species in certain habitats are more at risk than others – 34% of species living in rivers and streams are under threat, compared to just 20% of marsh and lake species. It appears that flowing freshwater habitats may be more severely affected by human activities than more stationary ones. Freshwater species are also consistently more threatened than their terrestrial counterparts. Reptiles, according to this study, are particularly at risk of extinction, with nearly half of all species threatened or near threatened. This makes reptiles the most threatened freshwater taxon considered in this analysis. The authors identified key processes that were threatening freshwater species in this dataset – habitat degradation, water pollution and over-exploitation are the biggest risks. Habitat loss and degradation is the most common threat, affecting over 80% of threatened freshwater species.
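
As a sketch of how threat proportions like these can be summarised from an assessment table (the column names and toy numbers are hypothetical, not the study’s dataset):

```python
import pandas as pd

# Hypothetical extract of a species-level assessment table.
species = pd.DataFrame({
    "species":    ["A", "B", "C", "D", "E", "F"],
    "habitat":    ["river", "river", "lake", "marsh", "river", "lake"],
    "threatened": [True, False, True, False, True, False],
})

# Proportion of threatened species per habitat type, the kind of summary
# behind figures such as '34% of river species vs 20% of lake and marsh species'.
threat_by_habitat = species.groupby("habitat")["threatened"].mean()
print(threat_by_habitat)
```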

The relatively weak congruence between different taxa in the distribution of species richness and of threatened species in freshwater ecosystems suggests that conservation planning that considers only one or a few taxa may miss crucial areas of conservation priority. For example, conservation planning rarely considers patterns of invertebrate richness, but if these groups are affected differently, and in different regions, than reptiles and amphibians, say, then they may be overlooked by initiatives intended to protect biodiversity. Further, different ways of measuring the health of populations and ecosystems yield different patterns. The metrics we use to identify threatened species, upon which conservation decisions are based, must be carefully considered if we are to succeed in protecting valuable ecosystems and the services they provide.

Original Article:

Global Ecology and Biogeography


This research was made possible by funding from the Rufford Foundation and the European Commission.

Handicaps, Honesty and Visibility
Why Are Ornaments Always Exaggerated?

By Claire Asher, on 23 October 2014

Sexual selection is a form of natural selection that favours traits that increase mating success, often at the expense of survival. It is responsible for a huge variety of characteristics and behaviours we observe in nature and, most conspicuously, it explains elaborate ornaments such as the antlers of red deer and the tail of the male peacock. There are many theories to explain how and why these ornaments evolve; it may be a positive feedback loop of female preference and selection on males, or ornaments may signal something useful, such as the genetic quality of the male carrying them. One way or another, despite the energetic costs of growing these ornaments, and the increased risk of predation that comes with greater visibility, sexually selected ornaments must be increasing the overall fitness of individuals carrying them. They do so by ensuring the bearer gets more mates and produces more offspring.

Theory predicts that sexually selected traits should be just as likely to become larger and more ostentatious as they are to be reduced, smaller and less conspicuous. However, almost all natural examples refer to exaggerated traits. So where are all the reduced sexual traits?

Runaway Ornaments

Previous work by GEE researchers Dr Sam Tazzyman and Professor Andrew Pomiankowski has highlighted one possible explanation for this apparent imbalance in nature – if sexually selected traits are smaller, they are harder to see. Using mathematical models, last year they showed that differences in the ‘signalling efficacy’ of reduced and exaggerated ornaments were sufficient to explain the bias we see in nature. Since the purpose of sexually selected ornaments is to signal something to females, if reduced traits tend to be worse at signalling, then it makes sense that they would rarely emerge in nature. Their model covered the case of runaway selection, whereby sexually selected traits emerge somewhat spontaneously due to an inherent preference in females. It goes like this – if, for whatever reason, females have an innate preference for a certain trait in males, then any male who randomly acquires this trait will get more mates and produce more offspring. Those offspring will include males carrying the trait and females with a preference for the trait, and over time this creates a feedback loop that can produce extremely exaggerated traits. Under this model of sexual selection, differences in signalling efficacy can be sufficient to explain why we so rarely see reduced traits.
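
As a purely illustrative toy, and not the model used in the paper, the sketch below simulates a crude Fisherian feedback in which individuals carry a trait gene (expressed in males) and a preference gene (expressed in females), with a ‘signalling efficacy’ setting that makes reduced (negative) ornaments harder for females to perceive; all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def perceived(trait, efficacy_reduced=0.3, efficacy_exaggerated=1.0):
    """Asymmetric signalling efficacy: reduced (negative) ornaments are harder
    for females to perceive than exaggerated (positive) ones."""
    return np.where(trait >= 0, efficacy_exaggerated * trait, efficacy_reduced * trait)

def simulate(generations=200, n=500, cost=0.01, mut=0.05):
    trait = rng.normal(0.0, 0.1, n)   # ornament gene (expressed in sons)
    pref = rng.normal(0.1, 0.1, n)    # preference gene (expressed in daughters)
    for _ in range(generations):
        # Viability selection: larger ornaments (in either direction) are costlier.
        survive = rng.random(n) < np.exp(-cost * trait ** 2)
        m_trait, m_pref = trait[survive], pref[survive]
        # Each 'mother' samples three surviving males and mates with the one
        # whose perceived signal best matches her preference.
        fathers = np.empty(n, dtype=int)
        for i in range(n):
            cand = rng.integers(0, m_trait.size, 3)
            fathers[i] = cand[np.argmax(pref[i] * perceived(m_trait[cand]))]
        # Offspring genes are parental midpoints plus mutational noise; the
        # mating step builds the trait-preference association behind runaway.
        trait = 0.5 * (trait + m_trait[fathers]) + rng.normal(0, mut, n)
        pref = 0.5 * (pref + m_pref[fathers]) + rng.normal(0, mut, n)
    return trait.mean(), pref.mean()

print("mean ornament, mean preference:", simulate())
```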

Handicaps

However, this is just one model for how sexually selected ornaments can emerge, so this year GEE Researchers Dr Tazzyman and Prof Pomiankowski, along with Professor Yoh Iwasa from Kyushu University, Japan, have expanded their research to include another possible explanation – the handicap hypothesis. According to the handicap principle, far from being paradoxical, sexually selected ornaments may be favoured exactly because they are harmful to the individual who carries them. In this way, only the very best quality males can cope with the costs of carrying huge antlers or brightly coloured feathers, and so the ornament acts as a signal to females indicating which males carry the best genes. This is an example of honest signalling – the ornament and the condition or quality of the carrier are inextricably linked, and there is no room for poor quality males to cheat the system.

Using mathematical models, the authors investigated four possible causes for the absence of reduced sexual ornaments in the animal kingdom. Firstly, as in the case of runaway selection, differences in signalling efficacy might explain the bias. Under the handicap hypothesis, ornaments act as signals of genetic quality, so it would be little surprise that their visibility or effectiveness at conveying the signal would be important. Smaller ornaments may simply be worse at attracting the attention of females, meaning that the benefits of the sexual ornament don’t outweigh the costs. Similarly, the costs for females of preferring males with reduced ornaments may be higher than for exaggerated ornaments, because it is easier to find males with exaggerated traits. Again, this could theoretically tip the balance of cost and benefit away from selecting for reduced ornaments. An alternative explanation is that the costs of the ornament itself are different for reduced and exaggerated traits. This seems quite likely in many cases, since a large ornament would require more resources to grow. But in this case selection would be more likely to produce reduced ornaments with lower costs! In order to replicate the excess of exaggerated traits we see in nature, reduced traits would have to cost more – much less biologically plausible. However, if large ornaments tend to be more costly, then they may be more likely to be condition-dependent, a key tenet of the handicap principle. Exaggeration may be more effective at producing honest signalling, and exaggerated traits may therefore be more useful to females as a signal of quality.

Honest Signals

The results of the modelling highlighted two key ways in which exaggerated traits might be favoured by the handicap process. In the model, when exaggerated traits have a higher signalling efficacy or are more strongly condition-dependent, exaggerated traits tend to be more extreme than reduced traits. The model still predicts that reduced traits are equally likely to evolve, just that they will tend to be less extreme examples of ornamentation. The other two possible explanations – higher costs to females that prefer small ornaments, or to the males that carry them – failed to consistently produce the observed lack of reduced ornaments. Both explanations were able to produce this outcome under certain circumstances, but in other circumstances they produced the opposite effect. Exaggerated ornaments, therefore, may be more common because they are more effective signals that are more likely to be honest.

Based on this and previous work by Dr Tazzyman and colleagues, asymmetries in the signalling efficacy of reduced and exaggerated traits are sufficient to explain the lack of reduced traits in nature. Whether ornaments evolve via runaway selection or the handicap process, asymmetrical signalling efficacy tends to favour exaggerated traits. However, in the case of the handicap process, asymmetries in condition-dependence of the trait may also be involved. These two explanations are not mutually exclusive, and it is likely that in reality many factors are involved.

Importantly, for both explanations and for both types of sexual selection, the models still predict that exaggeration and reduction will be equally likely. The differences emerge in terms of how extreme the ornament will become. Thus, this work predicts that there are many examples of reduced ornaments in nature; perhaps we just haven’t found them yet. This is especially likely if reduced traits, which might be less noticeable anyway, also tend to be less extreme. Alternatively, there may be other asymmetries not yet considered that make reduced ornaments less likely to emerge in the first place.

Where Next?

The authors suggest some very interesting avenues for future research. Firstly, they suggest that more complex models investigating how multiple different asymmetries may act together to produce sexually selected ornaments will get us closer to understanding the intricate dynamics of sexual ornamentation. Secondly, these models have only considered cases where evolution of the trait eventually settles down – at a certain ornament size, the costs and benefits of possessing it are equal, and the trait should remain at this size. However, in some cases the dynamics are more complex, and traits undergo cycles of exaggeration and reduction. Research into the impact of asymmetries in condition-dependence and signalling efficacy in these ‘nonequilibrium’ models would yield fascinating insights into the evolution of sexual ornaments.

Original Article:

Evolution


This research was made possible by funding from the Natural Environment Research Council (NERC), the Engineering and Physical Sciences Research Council (EPSRC), and the European Research Council (ERC).

PREDICTS Project: Land-Use Change Doesn’t Impact All Biodiversity Equally

By Claire Asher, on 13 October 2014

Humans are destroying, degrading and depleting our tropical forests at an alarming rate. Every minute, an area of Amazonian rainforest equivalent to 50 football pitches is cleared of its trees, vegetation and wildlife. Across the globe, tropical and sub-tropical forests are being cut down to make way for expanding towns and cities, for agricultural land and pasture and to obtain precious fossil fuels. Even where forests remain standing, hunting and poaching are stripping them of their fauna, degrading the forest in the process. Habitat loss and degradation are the greatest threats to the World’s biodiversity. New research from the PREDICTS project investigates the patterns of species’ responses to changing land-use in tropical and sub-tropical forests worldwide. In the most comprehensive analysis of the responses of individual species to anthropogenic pressures to date, the PREDICTS team reveal strong effects of human disturbance on the geographical distribution and abundance of species. Although some species thrive in human-altered habitats, species that rely on a specific habitat or diet, and that tend to have small geographical ranges, are particularly vulnerable to habitat disturbance. Understanding the intricacies of how different species respond to different types of human land-use is crucial if we are to implement conservation policies and initiatives that will enable us to live more harmoniously with wildlife.

Red Panda (Ailurus fulgens)

Habitat loss and degradation causes immediate species losses, but also alters the structure of ecological communities, potentially destabilising ecosystems and causing further knock-on extinctions down the line. As ecosystems start to fall apart, the valuable ecosystem services we rely on may also dry up. There is now ample evidence that altering habitats, particularly degrading primary rainforest, has disastrous consequences for many species, however not all species respond equally to land-use change. The functional traits of species, such as body size, generation time, mobility, diet and habitat specificity can have a profound impact on how well a species copes with human activities. The traits that make species particularly vulnerable to human encroachment (slow reproduction, large body size, small geographical range, highly specific dietary and habitat requirements) are not evenly distributed geographically. Species possessing these traits are more common in tropical and sub-tropical forests, areas that are under the greatest threat from human habitat destruction and loss of vegetation over the coming decades. The challenge in recent years, therefore, has been developing statistical models that allow us to investigate this relationship more precisely, and collecting sufficient data to test hypotheses.

There are three key ways we might choose to investigate how species respond to land-use change. Many studies have investigated species-area relationships, which model the occurrence or abundance of species in relation to the size of available habitat. These studies have revealed important insights into the damage caused by habitat fragmentation, however they rarely consider how different species respond differently. Another common approach uses species distribution models to predict the loss of species in relation to habitat and climate suitability. These models can be extremely powerful, but require large and detailed datasets that are not available for many species, particularly understudied creatures such as invertebrates. The PREDICTS team therefore opted for a third option to investigate human impacts on species. The PREDICTS project has collated data from over 500 studies investigating the response of individual species to land-use change, and their database now includes over 2 million records for 34,000 species. Using this extensive dataset, the authors were able to model the relationship between land-use type and both the occurrence and abundance of species. One of the huge benefits of this approach is that their dataset enabled them to investigate these relationships in a wide range of different taxa, including birds, mammals, reptiles, amphibians and the often neglected invertebrates.

Modelling Biodiversity

Sunbear (Helarctos malayanus)
image used with permission from
Claire Asher (Curiosity Photographic)

The resulting model included the responses of nearly 4000 different species across four measures of human disturbance: land-use type, forest cover, vegetation loss and human population density. With this vast dataset, the PREDICTS team were able to compare the responses of species in different groups (birds, mammals, reptiles and amphibians, and invertebrates), between habitat specialists and generalists, and between wide- and narrow-ranging species. Their results revealed a complex interaction between these factors, which influenced the occurrence and the abundance of species in different ways.
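
As a rough sketch of the kind of occurrence model described here (not the PREDICTS team’s actual code: the synthetic data, column names and simple logistic form are assumptions, and the real analysis used more sophisticated mixed models):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400

# Hypothetical site-by-species records (synthetic): land-use class, a simple
# trait classification, and presence/absence generated with land-use- and
# trait-dependent probabilities.
land_use = rng.choice(["primary", "secondary", "cropland", "urban"], size=n)
range_size = rng.choice(["wide", "narrow"], size=n)
base = {"primary": 0.7, "secondary": 0.55, "cropland": 0.35, "urban": 0.25}
p = np.array([base[l] for l in land_use])
p = np.where(range_size == "narrow", p * 0.6, p)   # narrow-ranged species suffer more
present = rng.random(n) < p

df = pd.DataFrame({"present": present.astype(int),
                   "land_use": land_use, "range_size": range_size})

# Logistic regression of occurrence on land use, the trait, and their
# interaction, mirroring the idea that land-use effects differ between traits.
model = smf.logit("present ~ C(land_use) * C(range_size)", data=df).fit(disp=0)
print(model.params)
```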

In general, human-dominated habitats, such as urban and cropland environments, tended to harbour fewer species than more natural, pristine habitats. Community abundance in disturbed habitats was between 8% and 62% of the abundance found in primary forest, and urban environments were consistently the worst for overall species richness. In these environments, human population density and a lack of forest cover were key factors reducing the number of species. Human population density could impact species directly through hunting, or more indirectly through expanding infrastructure. However, these factors impact different species in different ways, so the authors next investigated different taxa separately.

Birds appear to be particularly poor at living in urban environments, most likely because they respond poorly to increases in human population. Forest specialists and narrow-ranged birds fare especially badly in urban environments; only 10% of forest specialists found in primary forest are able to survive in urban environments. Although the effect was less extreme, mammals were also less likely to occur in secondary forest and forest plantations than primary forest, and forest specialists were particularly badly affected.

Urban Pests
Although many species were unable to exist in disturbed habitats, those species that persisted were often more abundant in human-modified habitats than pristine environments. This isn’t particularly surprising – some species happen to possess characteristics that make them well suited to urban and disturbed landscapes; these are often the species that we eventually start to consider pests because they are so successful at living alongside us (think pigeons, rats, foxes). These species tend to be wide-ranging generalists, although sometimes habitat specialists do well in human-altered habitats if we happen to alter the habitat in just the right way. Pigeons, for example, are adapted to nesting on cliffs, which our skyscrapers and buildings inadvertently mimic extremely well. The apparent success of some species in more open habitats such as cropland and urban environments might also be partly caused by increased visibility – it’s far easier to see a bird or reptile in an urban environment than in dense primary forest! This doesn’t explain the entire pattern, however, and clearly some species are simply more successful in human-altered habitats. They are in the minority, though.

Do Reptiles Prefer Altered Habitats?
One interesting finding was that for herptiles (reptiles and amphibians), more species were found in habitats with a higher human population density. This rather unexpected relationship might reflect a general preference in herptiles for more open habitats. Consistent with this, the authors found fewer species in pristine forest than secondary forest. However, upon closer inspection the authors found that herptiles do not all respond in the same way. Reptiles showed a U-shaped relationship with human population density – the occurrence of species was highest when there were either lots of people or no people at all. By contrast, amphibians showed a linear relationship, with increases in human population density mirrored by increases in the number of species present. This highlights the importance of investigating fine-scale differences between species in their responses to human activities.
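
To make the distinction between a U-shaped and a roughly linear response concrete, a toy comparison with entirely made-up numbers might look like this:

```python
import numpy as np

# Hypothetical species counts across sites ordered by human population density.
pop_density = np.array([0, 1, 2, 5, 10, 20, 50, 100], dtype=float)
reptile_richness = np.array([14, 10, 8, 6, 5, 6, 9, 13], dtype=float)    # U-shaped
amphibian_richness = np.array([4, 5, 5, 6, 7, 8, 9, 11], dtype=float)    # increasing

x = np.log1p(pop_density)
for name, y in [("reptiles", reptile_richness), ("amphibians", amphibian_richness)]:
    # Compare a straight-line fit with one that also includes a squared term;
    # a much better quadratic fit is the signature of a U-shaped response.
    lin_resid = np.sum((y - np.polyval(np.polyfit(x, y, 1), x)) ** 2)
    quad_resid = np.sum((y - np.polyval(np.polyfit(x, y, 2), x)) ** 2)
    print(f"{name}: linear SSR={lin_resid:.1f}, quadratic SSR={quad_resid:.1f}")
```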

Filbert Weevil (Curculio occidentis)

Consistent with previous studies, the traits of species were very important in determining whether a species was found in human-altered habitats. Narrow-ranging species were much less likely to occur in any habitat than wide-ranging ones, but this difference was particularly clear for croplands, plantation forests and urban habitats. The extent of human impact was also a key factor determining the occurrence of species in different habitats. Forest cover, human population density and NDVI (a measure of vegetation loss taken from remote sensing) all reduced the number of species present. Measures of disturbance and species characteristics do not act in isolation – the best models produced by the PREDICTS team included interactions between these variables. Invertebrate numbers were lowest in areas of high human population density and high rates of vegetation loss.

This study is the first step in more detailed, comprehensive analyses of the responses of species to human activities. The power of this study comes not only from its large dataset and broad spectrum of taxonomic groups, but also from its ability to directly couple land-use changes with species’ traits such as range size and habitat specialism. The authors say that the next major step would be to incorporate interactions between species in these models – the community structure of an ecosystem can have profound effects on the species living in it, and changes in the abundance of any individual species do not happen in isolation from the rest of the community.

Check out the PREDICTS Project for more information!

Original Article:

Proc. R. Soc. B


This research was made possible by funding from the Natural Environment Research Council (NERC), and the Biotechnology and Biological Sciences Research Council (BBSRC).

Calculated Risks:
Foraging and Predator Avoidance in Rodents

By Claire Asher, on 3 October 2014

Finding food is one of the most important tasks for any animal – most animal activity is focused on this job. But finding food usually involves some risks – leaving the safety of your burrow or nest to go out into a dangerous world full of predators, disease and natural hazards. Animals should therefore be expected to minimise these risks as much as possible – foraging at safer times of day, especially when there’s lots of food around anyway. This hypothesis is known as the “risk allocation hypothesis”, but it has rarely been tested in wild animals. Recent research from ZSL academic Dr Marcus Rowcliffe showed that the behaviour of the Central American agouti certainly seems to follow this pattern, and highlights the amazing plasticity of animal behaviour.

Central American Agouti
(Dasyprocta punctata)

Foraging, although essential, is always a compromise between finding food and avoiding being eaten by a predator. The aim of the game is to eat as much as you can whilst avoiding being eaten yourself, in order to live long enough and grow large enough to reproduce. Since finding food is one of the most important things an animal has to do, foraging behaviour has been subject to strong natural selection.

The risk allocation hypothesis predicts that prey species should focus their foraging effort at times of day that pose the least risk. So, if your main predator is active during the day, it is best to forage at night, and vice versa. There ought to be some flexibility in this system too, though – if food in your habitat is plentiful, it should be easy to find enough to fill you and there is little need to take any additional risks. Conversely, if food is scarce, you may be forced to take more risks than usual by foraging for longer or at more dangerous times of day.

In a recent study, academics from the Institute of Zoology, London, in collaboration with colleagues around the world, investigated this trade-off between food and predator avoidance in the Central American Agouti (Dasyprocta punctata). The agouti’s biggest problem in life is the Ocelot (Leopardus pardalis), which preys primarily on agoutis. Using radio telemetry and camera trapping, the researchers investigated activity patterns of agoutis living in areas with plenty of Astrocaryum fruits, and those living in areas with less. They were able to generate an enormous dataset – over 30,000 camera trap records of agoutis, with a further 50 individuals radio-collared and tracked!

Ocelots are highly nocturnal, and across nearly 500 camera trap observations, ocelots were almost exclusively recorded at night. During this time, agoutis were under a great deal of risk – the predation risk from ocelots was estimated to be four orders of magnitude higher between dusk and dawn than during daylight hours. The foraging activity of agoutis mirrored this – activity was highest during the day, with peaks first thing in the morning and again later in the afternoon. Most interestingly, these patterns differed between agoutis living in habitats with abundant fruit and those in habitats where fruit was sparse. When food availability was high, agoutis took fewer risks, leaving their burrows later in the morning and coming home earlier at the end of the day. Overall their activity levels were lower, presumably because they didn’t need to forage for as long to find all the food they needed.
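
As an illustration of how camera-trap timestamps can be turned into the kind of day/night comparison described here (the records and the simple dusk-to-dawn cut-off are hypothetical; the actual study estimated risk far more carefully):

```python
import pandas as pd

# Hypothetical camera-trap records: species and hour of detection (0-23).
records = pd.DataFrame({
    "species": ["ocelot", "ocelot", "agouti", "agouti", "agouti", "ocelot", "agouti"],
    "hour":    [22, 3, 7, 16, 9, 1, 18],
})

# Crude day/night split (treating 06:00-18:00 as daylight).
records["period"] = records["hour"].between(6, 17).map({True: "day", False: "night"})

# Share of each species' detections falling in each period: ocelots mostly
# nocturnal, agoutis mostly diurnal, as in the pattern reported above.
activity = pd.crosstab(records["species"], records["period"], normalize="index")
print(activity)
```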

The results of this study support the risk allocation hypothesis, and show that animals are able to make complex calculations about risks and benefits based upon environmental conditions and alter their behaviour so as to minimise risks and maximise benefits. Only when food availability is high can agoutis afford to have a lie-in and avoid any ocelots returning home late.

Original Article:

Animal Behaviour


This research was made possible by funding from the National Science Foundation (NSF) and the Netherlands Foundation for Scientific Research.

Applying Metabolic Scaling Laws to Predicting Extinction Risk

By Claire Asher, on 25 September 2014

The Earth is warming. That much we are now certain of. A major challenge for scientists hoping to ameliorate the effect of this on biodiversity is to predict how temperature increases will affect populations. Predicting the responses of species living in complex ecosystems and heterogeneous environments is a difficult task, but one starting point is to begin understanding how temperature increases affect small, laboratory populations. These populations can be easily controlled, and it is hoped that the lessons learned from laboratory populations can then begin to be generalised and applied to real populations. Recent research from GEE academics attempted to evaluate the predictive power of a simple metabolic model on the extinction risk of single-celled organisms in the lab. Their results indicate that simple scaling rules for temperature, metabolic rate and body size can be extremely useful in predicting the extinction of populations, at least in laboratory conditions.

Current estimates suggest that over the next 100 years we can expect a global temperature rise of between 1.1°C and 6.4°C. This change will not be uniformly distributed across different regions, however, with some areas expected to experience warming at twice the global average rate. Temperature is known to be a crucial component in some of the most basic characteristics of life – metabolism, body size, birth, growth and mortality rates. These characteristics scale with body size and temperature in predictable ways: metabolic rate scales with body mass to roughly the 3/4 power, while its temperature dependence follows the Boltzmann–Arrhenius relationship, under which rates rise approximately exponentially with temperature. These relationships appear to hold true for a variety of taxa with different life histories and positions in the food chain. Models based upon these rules can be kept very simple, which makes it easy for scientists to collect the data needed to plug into them. But are they accurate in predicting extinction?
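
For concreteness, here is a minimal sketch of the general metabolic-scaling relationship described above, written in the standard metabolic-theory form rather than the specific model fitted in the paper; the constants are illustrative placeholders.

```python
import numpy as np

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV per kelvin

def metabolic_rate(mass_g, temp_c, b0=1.0, activation_energy=0.65):
    """Generic metabolic-theory scaling: rate ~ b0 * M^(3/4) * exp(-E / kT).
    b0 and the activation energy (in eV) are illustrative placeholders."""
    temp_k = temp_c + 273.15
    return b0 * mass_g ** 0.75 * np.exp(-activation_energy / (BOLTZMANN_EV * temp_k))

# Relative increase in metabolic rate for a 6 degree warming (20 C -> 26 C),
# the range used in the experiment described below.
ratio = metabolic_rate(1.0, 26.0) / metabolic_rate(1.0, 20.0)
print(f"metabolic rate at 26 C is ~{ratio:.2f}x the rate at 20 C")
```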

Recent research conducted by GEE and ZSL academics Dr Ben Collen and Prof. Tim Blackburn, in collaboration with the University of Sheffield and the University of Zurich, investigated the predictive power of simple metabolic models on extinction risk in the single-celled protist Loxocephallus. They first collected data on the population and extinction dynamics of populations held at a constant temperature. These data were fed into a model based on scaling laws for metabolic rates and temperature, which in turn attempted to predict extinction risk under different temperature changes. The researchers then tested how real protists responded to temperature changes – for 70 days they monitored populations of Loxocephallus under either decreases or increases in temperature. Populations began at 20°C and increased to 26°C or decreased to 14°C at different rates (0.5°C, 0.75°C, 1.5°C or 3°C each week). Most populations eventually went extinct, but these extinctions happened sooner in hotter environments, and mean temperature showed a strong correlation with the date at which a population went extinct. Extinction tended to happen sooner in populations subjected to more rapid warming.

None of this is particularly surprising, but what the researchers found when they ran their models was that, even with relatively minimal data to start out with (population dynamics under constant ‘normal’ conditions), and using only simple scaling laws to predict extinction, their model was able to accurately predict when populations would go extinct under different warming or cooling conditions, with an accuracy of 84%. One important factor was the specifics of the temperature changes that were input into the model – using average temperature across the experiment rather than actual temperature changes produced much less accurate results.

This research is a first step in creating models that may help us predict the future extinction dynamics of wild populations subjected to unevenly distributed climatic warming over the coming decades. It is a long way from a simple model of a laboratory population to a model that can accurately predict the future of complex assemblages of wild animals that are also subject to predation, disease and a healthy dose of luck. But the fact that these models work for simple systems in laboratory conditions is a promising start – if they didn’t work for these populations, we could be fairly sure they wouldn’t generalise to natural populations. This shows that simple phenomenological models based on basic metabolic theory can be useful for understanding how climate change will affect populations.

Original Article:


This research was made possible by funding from the Natural Environment Research Council (NERC).

The Importance of Size in the Evolution of Complexity in Ants

By Claire Asher, on 16 September 2014

Ants are amongst the most abundant and successful animals on Earth. They live in complex, cooperative societies, construct elaborate homes and exhibit many of the hallmarks of our own society. Some ants farm crops, others tend livestock. Many species have a major impact on the ecosystems they live in, dispersing seeds, consuming huge quantities of plant matter and predating other insect species. One of the major reasons for their enormous success is thought to be the impressive division of labour they exhibit. Theory suggests that, during the evolution of ants, increases in colony size drove increases in the complexity of their division of labour. However, there have been few previous attempts to test the hypothesis. A recent paper by GEE’s Professor Kate Jones and PhD student Henry Ferguson-Gow tested this hypothesis across the Attine ants, a large neotropical group including the famous leaf-cutter ants.

Ants, along with other social insects such as some bees, wasps and termites, are eusocial. This means that reproduction in their societies is dominated by just one or a few queens, while most of the colony members never reproduce, but instead perform other important tasks such as foraging, nest construction and defence. This system initially puzzled evolutionary biologists, because it poses the question, “how do non-reproductive workers pass on their genes?”. More specifically, “how can genes evolve to generate different morphology and behaviour in workers if they never reproduce and pass those genes on?”. This question was resolved in the 1960s, when W.D. Hamilton proposed the concepts of inclusive fitness and kin selection. He pointed out that although members of the non-reproductive worker caste do not directly pass on their genes, they are helping to ensure the survival of their siblings. Closely related individuals, such as siblings, share a large percentage of their genetic information, so by helping relatives, you are indirectly passing on your genes. Inclusive fitness is a measure of the total reproductive success of an individual, including direct fitness (gained by producing your own offspring) and indirect fitness (gained by helping relatives to reproduce). Kin selection, a form of natural selection, can therefore favour genes that cause sterility in the worker caste through its positive effects on the reproductive success of relatives.

When eusociality first began to evolve, colonies were probably small and, although the worker caste likely refrained from reproduction most of the time, they weren’t completely sterile. In small colonies, keeping your reproductive options open makes a lot of sense – if the queen dies you may have a good chance of taking over the colony and reproducing yourself. Through evolutionary time, however, colony size increased in some lineages, and it is thought this may have driven increasing specialisation and commitment of individuals to their queen and worker roles. As colony size increases, your chances of gaining any kind of direct fitness decrease very rapidly. As a worker, it’s a much better bet to do what you can to maximise your indirect fitness benefits in large colonies, and this can be achieved by becoming increasingly specialised for your particular role. Increases in division of labour, for example as individuals specialise more in particular tasks, may lead to increased colony efficiency and success. In turn, this may allow for the evolution of larger colonies, resulting in a positive feedback loop whereby increases in colony size lead to increases in division of labour, which lead to further increases in colony size, and so on. This force may have led to the evolution of ant species with enormous colonies – over a million workers can be found in some leaf-cutter colonies!

GEE Researchers Professor Kate Jones and Henry Ferguson-Gow, along with colleagues at the University of East Anglia and the University of Bristol, produced a phylogenetic tree for the Attine Ants (a group containing over 250 species), and mapped social and environmental data onto this tree in order to test for the effects of colony size and environment on the evolution of more sophisticated division of labour. The Attini are a good group of ants to test this hypothesis in, as they show large variation in colony size and the extent of morphological divergence between the queen and worker caste.

They collected published data on social traits (colony size, worker size, queen size) and environmental conditions (daytime temperature, seasonality in temperature and precipitation) for over 600 observations of populations for 57 species of Attine ant, including every single Attine genus. Using supertree methods, they constructed a phylogeny for the attine ants, which enabled them to control for evolutionary relationships and to estimate the speed at which evolutionary changes occurred.

Colony size ranged from 16 to 6 million individuals, with the largest colonies exhibited by the fungus-growing leaf-cutter ants Atta and Acromyrmex. The authors found that increases in colony size through evolution are strongly associated with increases in both worker size variation (representing division of labour within the worker caste) and queen-worker dimorphism (representing reproductive division of labour). Colony size showed a positive correlation with variation in size within the worker caste, and a weaker, but still positive, correlation with queen-worker dimorphism. Environmental factors such as temperature, rainfall and seasonality did not have any effect on colony size, indicating that climate and other environmental variables have not been an important factor in driving the evolution of increased colony size.
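
As a simplified illustration of the kind of comparison involved (ignoring the phylogenetic correction the authors applied, and using made-up numbers), log colony size can be correlated with measures of division of labour like this:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-species summaries: colony size, coefficient of variation in
# worker size (a proxy for division of labour within the worker caste), and
# queen/worker size ratio (a proxy for reproductive division of labour).
colony_size = np.array([16, 120, 900, 5_000, 40_000, 300_000, 2_000_000])
worker_cv = np.array([0.04, 0.05, 0.08, 0.10, 0.15, 0.22, 0.30])
queen_worker_ratio = np.array([1.1, 1.2, 1.3, 1.6, 1.9, 2.4, 3.0])

# Rank correlations with log colony size; a proper comparative analysis would
# also control for shared ancestry (e.g. with phylogenetic regression).
log_size = np.log10(colony_size)
print("worker size variation:", spearmanr(log_size, worker_cv))
print("queen-worker dimorphism:", spearmanr(log_size, queen_worker_ratio))
```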

This study finds strong support for the size-complexity hypothesis, which suggests that during the evolution of eusociality, increases in colony size both drove and were driven by increases in division of labour and in specialisation of the queen and worker castes to their respective roles. This pattern may have also occurred during other major transitions in evolution, such as the evolution of multicellularity, which shares many similarities with the evolution of eusociality (e.g. closely related group members, division of labour). The relationship between group size and complexity may therefore have been a crucial force in the evolution of complex life, and in the major evolutionary innovations that have generated the diversity of life we see today.

Original Article:

Science


This research was made possible by funding from the Natural Environment Research Council (NERC).

Understanding Catfish Colonisation and Diversification in The Great African Lakes

By Claire Asher, on 5 September 2014

Why some regions or habitats contain vast, diverse communities of species, whilst others contain relatively few, continues to be the subject of scientific research attempting to understand the processes and conditions that allow an adaptive radiation. The Great African Lakes exist as freshwater ‘islands’, with spectacularly high levels of biodiversity and endemism. They are particularly famous for the hyperdiverse cichlid fish, but they are also home to diverse assemblages of many other fish, such as catfish. Recent research in GEE examined the evolution of Claroteine catfish in Lake Tanganyika, to investigate the forces driving evolutionary radiations in the Great African Lakes. The results suggest that evolutionary time is of key importance to catfish radiations, with more recent colonists showing less diversity than longer-established lineages.

Lake Tanganyika is the World’s second largest freshwater lake, spanning four countries in the African Rift Valley (Tanzania, the Democratic Republic of the Congo, Burundi and Zambia). It is home to the highest diversity of lake-dwelling catfish on Earth; however, the evolutionary history of these catfish is not fully understood. GEE academic Dr Julia Day and PhD student Claire Peart, in collaboration with colleagues at the Natural History Museum, London, and the South African Institute for Aquatic Biodiversity, investigated the evolutionary history of nocturnal Claroteine catfishes in Lake Tanganyika. This group of catfish offers an excellent opportunity to investigate the influence of different factors in evolutionary diversification, as it includes multiple genera with varying range sizes and habitat types.

The Drivers of Diversification

Previous research has suggested a number of factors that are important in enabling the adaptive radiations that can produce extremely high levels of biodiversity – deep lakes that experience lots of sunlight tend to favour evolutionary diversification. Diversification is also more common for species that have had a lot of evolutionary time in which to diverge and that experience high levels of sexual selection. Interestingly, although lake depth is important, the total size of the lake does not appear to matter so much for diversification. A large geographical area to diversify into may influence the duration of adaptive radiations, however, with river-dwelling species showing more consistent species production through time. These findings suggest that adaptive radiations may be, to some extent, predictable; however, much previous work has focussed on key model groups such as the cichlid fish, and these hypotheses need to be generalised to other species and locations.

Molecular Phylogeny of Claroteine Catfish,
showing independent colonisation of
Chrysichthys brachynema

The authors sequenced nuclear and mitochondrial genes from 85 catfish covering 10 of the 15 species of Claroteine catfish, in order to construct an evolutionary tree for the sub-family. Estimates of the relationships between species and the evolutionary timescales of colonisation and divergence allowed the authors to distinguish between the possibilities of single or multiple colonisation events, and the processes driving diversification. The results indicated that most Claroteine catfish in Lake Tanganyika originate from a single colonisation of the lake between 5 and 10 million years ago, followed by evolutionary radiations to produce the variety of species present today. One species, Chrysichthys brachynema, was the exception to this rule, having independently colonised the lake around 1–2 million years ago. This species has not shown adaptive radiation since colonisation, probably because of the relatively short time it has been present in the lake. These results support previous work suggesting that time is an important factor in producing highly diverse species assemblages.
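
For readers curious how a basic tree can be built from sequence data, here is a minimal Biopython sketch producing a simple neighbour-joining tree from an alignment file; this is only a rough stand-in for the model-based, time-calibrated phylogenetics used in studies like this, and the filename is hypothetical.

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Hypothetical alignment of catfish mitochondrial sequences in FASTA format.
alignment = AlignIO.read("claroteine_cytb_aligned.fasta", "fasta")

# Pairwise distances under a simple identity metric, then a neighbour-joining
# tree; real analyses would use explicit substitution models and fossil or
# biogeographic calibrations to estimate divergence times.
calculator = DistanceCalculator("identity")
distance_matrix = calculator.get_distance(alignment)
tree = DistanceTreeConstructor().nj(distance_matrix)

Phylo.draw_ascii(tree)
```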

Original Article:

Molecular Phylogenetics and Evolution


This research was made possible by funding from the Natural Environment Research Council (NERC), the National Council for Scientific and Technological Development (CNPQ), the National Geographic Society, and the Percy Sladen Memorial Trust Fund.