
STS Observatory

Archive for September, 2009

science 2008-2009: 13: the gathering storm

By Jon Agar, on 30 September 2009

In mid-2005, a wave of anxiety spread among the shapers of science policy in the United States. The fear was that, in a globalised world, other countries would soon overtake the United States as the leader in science, and consequently, so the argument went, in innovation and ultimately in economic might. The United States had moved from being a net exporter of high-technology goods ($45 billion in 1990) to a net importer ($50 billion in 2001). Furthermore, because of the visa restrictions introduced after 9/11, the number of scientists migrating to the United States was dropping. At school and university, students were dropping science and engineering. The United States was becoming a less attractive place to be a scientist.

Politicians, further alarmed by a poll that recorded that 60% of scientists felt that science in the United States was in decline, asked the National Academies (the National Academy of Sciences, the National Academy of Engineering and the Institute of Medicine) to suggest, urgently, ‘ten actions…that federal policy-makers could take to enhance the science and technology enterprise so that the United States can successfully compete, prosper, and be secure in the global community of the 21st century’. They got back more than they asked for. The report, brainstormed by the elite of American science, titled Rising Above the Gathering Storm: Energizing and Employing America for a Brighter Economic Future (2007), ran to over 500 pages and urged 20 actions.

The actions included: training more science teachers, increasing federal investment in basic research by 10% a year for seven years, offering attractive grants to outstanding early-career scientists, setting up ARPA-E, funding many more graduate fellowships, tweaking the tax incentives for research, and ensuring that everyone has access to broadband internet. Some of these actions were indeed implemented.

More significant for this ‘history of science in the twentieth century and beyond’ are the assumptions behind the panic. First, there was an argument that relates economic power to basic research. The authors were in no doubt that ‘central to [American] prosperity over the last 50 years has been our massive investment in science and technology’ (p.205). Second, the competition that most worried the politicians came from the gathering forces of China and, to a lesser extent, the European Union countries, South Korea and India. The huge populations of China and India mean that only a small percentage need be trained scientists or engineers to more than match the aggregate number in the United States.

Nevertheless, the evidence supporting the claim that the United States is losing its leadership in the sciences is slim. What is certainly happening is a globalisation of research and development. Global companies are now very likely to tap the skills and cheap salary costs of China, India and other fast-developing nations. More research and development is conducted off-shore, and the trend will continue. But this does not mean a lessening of American predominance. Indeed the United States (like the United Kingdom) is a beneficiary of this trend: foreign-funded research and development has rocketed, and in the 2000s more corporate research and development investment flowed into the United States than was sent out (p.210).

Where are you from? And how did you get to where you live today?

By ucrhrmi, on 30 September 2009

In an earlier post, Jon described the recent development of ‘personalised’ genetic testing services. These tests have become cheaper and cheaper, offering more people the chance to look at their biology in more detail than ever before. However, direct-to-consumer gene testing companies are not only offering a look at genetic diseases. They also tap into an enormous market: genealogical research. For example, 23andMe offers ‘Ancestry Paintings’, showing the ‘global regions reflected in your genes’.  Even National Geographic offers ‘deep ancestry’ testing as part of its ‘Genographic’ (NG) project, which is charting the history of human migration and genetic diversity.  While often technically imprecise, these projects tap into a powerful strand of what Dorothy Nelkin described as ‘genetic essentialism’ (here and here), a cultural image of genes as the essence of who we are. Thus adverts for NG ask:

“Where do you really come from? And how did you get to where you live today?”

As the Observer newspaper reported last week, these are now central questions, not for NG, but for immigration officers. The UK Border Agency (UKBA), perhaps taking a lead from CSI, have announced plans for a “Human Provenance Pilot Project”. The project will use DNA samples and isotope testing to establish where asylum seekers are ‘really’ from.

As has been described elsewhere, the UKBA is confusing ancestry or ethnicity with nationality. Even ancestry testing companies tend to limit themselves to ‘regions’ rather than ‘nations’, and national borders are rarely defined along ethnic or biological lines, not least in countries which were previously European colonies. Moreover, attempts to align biological and national identities have rarely had auspicious outcomes.

Even were genetic and national identities somehow to coincide, and even were genetic tests sufficiently precise to establish national rather than regional origin, the project ignores previous migration, and the fact that people, not least refugees, move from one country to another.

While the UKBA state that this is a pilot project, they appear to have ignored any form of evidence in introducing the scheme. As one critic suggests, “they put 2 and 2 together to make 3 1/2”.

There is an ongoing discussion at the Science blog ScienceInsider, including some key questions. The story is also being discussed at Politigenomics and Genetic Future.

science 2008-2009: 12: bewildering complexity – and reverse engineering the human

By Jon Agar, on 28 September 2009

Molecular biology has been accused of offering a reductive science of life. The old dogma – DNA is transcribed into RNA, RNA is translated into proteins – seems simple enough. But the discoveries, in the 1990s and 2000s, of many different kinds of snippets of nucleic acid, performing all sorts of functions in the cell, paint a picture of bewildering complexity.

The first reports of non-protein-coding RNA molecules were rejected as artefacts, since such RNA was expected to be rapidly broken down. But genomics techniques are now so fast that the signatures of these non-coding RNAs (ncRNAs) point to hosts of real strands of nucleic acid rather than artefacts. (Notice how much what counts as “really there” in the cell depends on the state of techniques!)

The trend towards large-scale bioinformatics, combined with fine-scale biochemical investigation, is what has made the analysis of this complexity possible. In particular, cancer programmes deploying these techniques have focussed on gene regulation, and many types of small RNA molecule have been discovered in the course of such research.

MicroRNAs (miRNAs) are really small – roughly 23 nucleotides long – and help fine-tune the expression of proteins, either by breaking down messenger RNA or by interfering with how it is translated. Discovered in the 1990s, miRNAs were being developed into drug therapies in the 2000s, an example being GlaxoSmithKline’s deal with Regulus Therapeutics in 2008.

But in general, pharmaceutical companies had hoped that the great genomics boom of the 1990s and early 2000s would lead to lots of promising drug targets. What they got instead was complexity. As Alison Abbott reported in Nature in 2008:

“the more that geneticists and molecular biologists have discovered, the more complicated most diseases have become. As individual genes have fallen out of favour, “systems” – multitudes of genes, proteins and other molecules interacting in an almost infinite number of ways – have come into vogue.”

The great hope of the late 2000s was this new science, “systems biology”, which tackled the complexity of the cell in ways inspired by the manner in which an electrical engineer analyses a complex, black-boxed piece of electronic kit. Electrical engineers might record the responses to inputs of many different frequencies, and then make deductions about the wiring inside. Likewise, systems biologists, using new technologies (microfluidics to vary osmotic pressure, fluorescent cell reporters to track changes), subjected yeast cells to varying conditions, and then made deductions about biochemical pathways. Nature called it ‘reverse engineering the cell’. It is computationally very demanding. Some of the modelling ‘requires a scale of computing effort analogous to that required to predict weather and understand global warming’. There are even calls in Japan for a three-decade programme to ‘create a virtual representation of the physiology of the entire human’. Model and deduce every biochemical pathway and all its variants – reverse engineer the human body. The cost in computing bills alone is eye-watering.
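To make the engineering analogy a little more concrete, here is a minimal sketch, in Python, of what ‘reverse engineering’ from input–output data can look like. It is purely illustrative and not drawn from the studies described above: the first-order model, the parameter names k and g, and all the numbers are invented for the example.

```python
# Toy "reverse engineering" in the spirit of the analogy above:
# perturb a system, record its response, infer the hidden parameters.
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import curve_fit

def step_input(time):
    """Hypothetical 'osmotic shock' input: off until t = 1, then on."""
    return 1.0 if time > 1.0 else 0.0

def response(t, k, g):
    """Response y(t) of an invented first-order model dy/dt = -k*y + g*u(t)."""
    def dydt(y, time):
        return -k * y + g * step_input(time)
    return odeint(dydt, 0.0, t).ravel()

t = np.linspace(0.0, 10.0, 200)

# "Experimental" data: a hidden true system (k = 0.8, g = 2.0) plus noise.
rng = np.random.default_rng(0)
data = response(t, 0.8, 2.0) + rng.normal(0.0, 0.05, t.size)

# Reverse engineering: recover the hidden parameters from behaviour alone.
(k_fit, g_fit), _ = curve_fit(response, t, data, p0=[0.1, 1.0],
                              bounds=(0, np.inf))
print(f"inferred k = {k_fit:.2f}, g = {g_fit:.2f}")  # close to 0.80 and 2.00
```

The real programmes are the same only in spirit, and on a vastly larger scale: thousands of unknown parameters across whole pathway models, inferred from many perturbation experiments, which is why the computing bills approach those of weather prediction.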

Systems biology received plenty of hype and attention as a potential ‘saviour of the failing research and development pipelines’ of a ‘crisis-ridden pharmaceutical industry’, as Abbott puts it. Giant companies such as Pfizer, AstraZeneca, Merck and Roche have all bought into the project, albeit in a suck-it-and-see manner. Genomics of the first kind – the human genome project kind – has not delivered new drugs. The companies hope that systems biology might. Either way new medicine will not come cheap.

science 2008-2009: 11: genomics, cheap and personal

By Jon Agar, on 27 September 2009

Sequencing genomes has become dramatically cheaper and quicker. The Human Genome Project took thirteen years, the involvement of nearly three thousand scientists across sixteen institutions in six countries, and a small matter of $2.7 billion to produce a sequence in 2003. In 2008, a handful of scientists took just over four months, spending less than $1.5 million, to provide a sequence of the DNA of James Watson. The trend is expected to continue, perhaps reaching rates of a billion kilobases per day per machine after 2010. The X Prize Foundation has offered a $10 million prize for the first successful attempt to sequence 100 human genomes in ten days at under $10,000 per genome.

After his sequence had been read, James Watson was counselled by a team of experts who explained the implications of the twenty detected mutations associated with increased disease risk. “It was so profound”, one expert told Nature, in a plea for further research, “how little we were actually able to say”.

That doesn’t bode well for personal genomics. As sequencing becomes cheaper, we may all become Jim Watsons. Indeed, direct-to-consumer whole-genome testing went on sale in 2007. Three companies were pioneers: 23andMe and Navigenics in California, and deCode in Iceland. On payment, these companies would cross-check your DNA against a million single nucleotide polymorphisms. Critics complained that the clinical usefulness of this information was unclear and that it led to unnecessarily frightened customers, and they demanded regulation or restriction. Furthermore, the information meant very little unless it was placed in the context of family histories and other facts. The companies replied that individuals had the right to their own genetic knowledge. Meanwhile, 23andMe’s technological and financial links to Google, which was launching Google Health, a facility for recording personal medical histories, point to how genetic information might be interpreted in the future.

Cheaper sequencing has also aided phylogenetic investigations of the tree of life. Draft sequences for more and more organisms were published in the 2000s. In addition to a host of micro-organisms, notable sequences published (reading like an alternative Chinese calendar) were: the model fly Drosophila melanogaster (2000), the model plant Arabidopsis thaliana (2000), rice (2002), the mouse (2002), the malaria mosquito Anopheles gambiae (2002), the model organism Neurospora crassa (2003), the silk worm (2004), the rat (2004), the chicken (2004), the chimpanzee (2005), the dog (2005), a sea urchin (2006), the cat (2007), the horse (2007), the grape vine (2007), the platypus (2008), corn (2008), sorghum (2009) and the cow (2009). Nor did organisms have to be living to give up genetic secrets. Almost complete sequences of the mammoth (2008) and the Neanderthal (2009) were salvaged from fossils.

Each of these genomes is useful within some working world. Of course the availability of a complete sequence of the human genome was of specific interest to medical researchers, including cancer scientists.

science 2008-2009: 10: big pharma and publishing science

By Jon Agar, on 27 September 2009

When big pharma appears in the courts, the effects can both open up and close down the publishing processes in science. In May 2007, for example, Pfizer found itself in a court case related to its painkillers Celebrex (celecoxib) and Bextra (valdecoxib) and demanded that the New England Journal of Medicine hand over the relevant peer reviews, along with the names of reviewers and any documents of internal editorial deliberation. In November 2007, the journal handed over some documents, but not all the company wanted. In 2008, the journal dug its heels in. Pfizer’s lawyers argued that among the papers might be exonerating data vital to Pfizer’s defence. The New England Journal of Medicine’s editors said that the move to strip reviewers of their anonymity would damage peer review. Other editors, such as Donald Kennedy, Editor-in-Chief at Science, agreed.

Elsewhere, one of the unanticipated consequences of the Vioxx (rofecoxib) trial, in which the Merck company rejected accusations that serious side-effects of the drug had been hidden from users, has been to open up previously closed documents and reveal how trials were published. In 2008, analysts of these documents, writing in the Journal of the American Medical Association, found strong suggestions of ghost writing: ‘one of the Merck-held documents list a number of clinical trials in which a Merck employee is to be author of the first draft of a manuscript’, reported Nature, but in 16 out of 20 cases the name on the finally published article was that of an external academic.

science 2008-2009: 9: translational research

By Jon Agar, on 27 September 2009

There’s a paradoxical relationship between the flourishing of biotechnology and the commercialisation of science. On one hand, the received narrative says that the patenting of recombinant DNA techniques led to the launch of the biotech companies, the celebration of the professor-entrepreneur, and the pressure to further commercialise academic research. So the Bayh-Dole Act, which granted intellectual property rights on publicly-funded research to the universities, prompted the growth in technology transfer offices (TTOs). By 2008 there were TTOs at 230 universities in the United States; data from two years earlier records 16,000 patent applications, 697 licenses for products and 553 start-up companies. Some generated immense income streams: the Gatorade sports drink (shared between faculty inventor Robert Cade and the University of Florida), taxol (discussed earlier), and cisplatin (a platinum compound, found to have anti-cancer properties at Michigan State University). Big funds such as Royalty Pharma specialise in acquiring biomedical patents and licensing them to manufacturers: Royalty moved the anti-convulsant pregabalin from Northwestern University to Pfizer to make Lyrica in a $700 million deal; and Royalty, again, moved filgrastim substances, which stimulate white blood cell production, from the Memorial Sloan-Kettering Cancer Center to Amgen to make Neupogen/Neulasta in a $263 million deal.

TTOs were a model to copy. A wave of TTOs was set up in European countries in the 1990s and 2000s. In other countries, institutes with special briefs to transfer technology have been established, for example the A*STAR institutes of Singapore.

‘A multi-continental chorus of academic researchers’, notes Meredith Wadman in Nature, complains that the ‘plethora of TTOs that have sprung up…are at best a mixed blessing’. TTOs, they say, overvalue intellectual property, hoard inventions, have small, stretched staffs whose most talented members are poached by industry and venture capital firms, and drive the over-commercialisation of academic science. Nevertheless, the assumption is that biotech is the cause of this movement closer to the market.

On the other hand, an examination of biomedical science in the thirty years since the late 1970s, precisely the same period that saw the growth of the TTOs, shows that basic biomedical science has grown away from clinical application and product, and the cause of the drift was biotech. In the 1950s and 1960s, notes Declan Butler in Nature, a typical medical researcher was a physician-scientist. But with the growth of molecular biology, clinical and biomedical research began to separate. Basic biomedical researchers looked to top academic journals for credit rather than to their contribution to medicine. Basic scientists also regarded unfamiliar regulation and patenting with trepidation. Clinicians, meanwhile, ‘who treat patients – and earn fees for doing so – have little time or inclination to keep up with an increasingly complex literature, let alone do research’. Furthermore, in parallel, genomics, proteomics and so on generated so many possible drug targets, on average more expensive to develop than older therapies, that the pharmaceutical firms felt overwhelmed.

In this second account, then, biotech has had the effect of driving the laboratory bench further from medical application. To counter the trend, key funding agencies have promoted offices of “translational research”, a term that first appeared in 1993 as part of the BRCA1 controversy. In the United States, the National Institutes of Health began funding, from 2003, Clinical and Translational Science Centers, which encourage multidisciplinary teamwork and business-style assessment. In Britain the Wellcome Trust and the Medical Research Council followed suit. Europe was in the course of setting up, in 2008-2009, its new European Advanced Translational Infrastructure in Medicine, linking translational research in Denmark, Finland, France, Greece, Germany, Norway, the Netherlands, Italy, Spain, Sweden and the United Kingdom.

So biotech led both to and away from the market and application.

By way of contrast, a footnote: in 2008, Nature reported that Bell Labs, generator of six Nobel prizes, was pulling out of basic physics research completely.

science 2008-2009: 8: stem cells

By Jon Agar, on 27 September 2009

Stem cells have the potential to turn into any other kind of cell in the body. It is this general-purpose nature that makes them such an extraordinarily attractive object for medical scientists developing new therapies and new tests.

But until 2007, human stem cells were very hard – physically and politically – to isolate, culture, manipulate and use. Previously, the only source of stem cells was the embryo. In 1981, embryonic stem cells had been isolated in the mouse. It took seventeen years before human embryonic stem cells were found. The difficulty was not just finding the tiny cells but also discovering the methods needed to grow and culture them, that is to say to turn stem cells into stem-cell lines.

The techniques were immediately controversial, especially, given the embryonic source of the cells, within the worldwide Catholic church and the conservative, “Pro-Life”, anti-abortion movement in the United States. President George W. Bush banned the use of federal funds to pursue research on any but stem-cell lines derived before the 9th of August, 2001. In Germany, opposition from church leaders such as the Roman Catholic Cardinal Karl Lehmann and from Green politicians, a legacy of law written in the wake of Nazi atrocities, and two years of intense debate led to a similar restriction: researchers could work only on old stem-cell lines. Only the strong support of Chancellor Gerhard Schroeder and the minister for research, Edelgard Bulmahn, secured the continuation of limited imports of stem-cell lines into Germany. Schroeder and Bulmahn argued that if Germany did not do some stem cell research then it would be done elsewhere, with no ethical restrictions and to others’ economic gain.

Embryonic stem-cell research did indeed flourish in areas of the world with fewer restrictions: Japan, Britain, and, in the United States, individual states such as California ($3 billion), Massachusetts ($1 billion) and Wisconsin ($750 million) that made up the federal shortfall in funding. (Some states, such as Louisiana and North Dakota, criminalised such science; others, such as New Jersey, found that an attempt to fund research was rejected at the polls.)

This savage pulling of research in two directions – on one hand towards visions of making the blind see and the crippled walk again, and on the other towards criminalisation and dogmatic abhorrence – explains why the announcement of a new, non-embryonic source of stem cells was met with such enthusiasm. In 2006, at Kyoto University, Shinya Yamanaka had taken ordinary mouse cells and, by making four relatively simple genetic changes, had caused some of them to revert to stem cell status. Within the year, working in secrecy with his postdoc Kazutoshi Takahashi, Yamanaka had achieved the same with human cells. (He used viral vectors to introduce clones of four genes – Oct3/4, Sox2, c-Myc and Klf4 – into the human cells.) Yamanaka published his results on what he called induced pluripotent stem (iPS) cells in November 2007. Remarkably, on the same day, a pioneer of stem cell methods, James Thomson of the University of Wisconsin-Madison, one of the original discoverers of human embryonic stem cells, independently announced the same iPS technique.

iPS cell research became a goldrush in 2008 and 2009. Universities such as Harvard, Toronto and Kyoto speedily established new facilities; scientists switched research fields; and Addgene, a Massachusetts company that sold Yamanaka’s four reprogramming vectors, received 6,000 requests from 1,000 laboratories in the period since the original announcement of the mouse technique. In Japan, where Kyoto University had delayed applying for patents, the simultaneous publication sparked a national debate on whether the nation was losing its scientific lead. Billions of yen were made available for iPS science, and a Japanese patent was rushed through the system by September 2008.

Nor is iPS cell research free of ethical issues. iPS cells might be used to derive gametes from any source of cells (a celebrity’s hair?), or to clone humans (it has been done with mice), or they might induce cancers rather than cures.

science 2008-2009: 7: climate change

By Jon Agar, on 26 September 2009

Polar science, with its measurements of ice melt and of the critical movements of the great Greenland and West Antarctic ice sheets, provides some of the essential clues to predicting future climate change. The political response in the 2000s was both slow and complex, with, for example, American states such as California moving in a different direction, and at a different speed, from the national government. Climate change was a key issue in the election of a new prime minister, Kevin Rudd, in Australia in 2007. In Britain, climate change and energy security were used (not least by the then chief scientific advisor to the government, David King) to justify the start of a new wave of nuclear power station construction. The strange amalgam of science and politics that marks climate change action was honoured by the award of the 2007 Nobel prize for peace jointly to the Intergovernmental Panel on Climate Change and Al Gore, the ex-vice president who had turned his PowerPoint presentation into a 2006 feature film, An Inconvenient Truth.

Nevertheless, neither the political will to act on climate change nor the large-scale technological solutions touted were yet strong enough in the 2000s. Carbon capture and storage (CC&S), the plan to bury carbon dioxide and a leading candidate technology, struggled even to receive funding, as the case of FutureGen illustrates. The fourth IPCC assessment report, released in 2007, was incrementally more forthright than its predecessor in its insistence on the reality of anthropogenic climate change, but probably fell short of what a mainstream scientific consensus might be. The international meeting at Copenhagen, expected to consider a replacement for the Kyoto Protocol, was regarded as crunch time.

Attempts to use the IPCC as a model for other areas where global-scale problems needed guidance from complex and disputed scientific advice also ran into trouble in 2008. The International Assessment of Agricultural Science and Technology, described by a Nature editorial as ‘an ambitious 4-year US$10-million project to do for hunger and poverty what the Intergovernmental Panel on Climate Change has done for another global challenge’, was threatened by the withdrawal of Monsanto and Syngenta. At issue was the contribution biotechnology – genetically-modified organisms, specifically – could make to the mission.

science 2008-2009: 6: International Polar Year

By Jon Agar, on 26 September 2009

The large-scale organisation of science is old enough to have its established cycles. The second International Polar Year took place in 1932-1933, fifty years after the first. The International Geophysical Year of 1957-1958 had been opportunistically shoe-horned in twenty-five years later, claiming space and the Antarctic for science during the Cold War freeze.

In 2007-2008, the third International Polar Year was held: 170 projects, involving more than 60 nations, at a cost of $1.2 billion. In the north, scientists measured melting permafrost. In the south, China established a new Antarctic base.

science 2008-2009: 5: another green world (and another and another)

By Jon Agar, on 26 September 2009

There are five classical planets. The eighteenth, nineteenth and twentieth centuries added one each. The detection of hordes of fainter planets in the 1990s and 2000s has had some dramatic consequences.

First, the discovery of a string of “trans-Neptunian objects”, many by Caltech astronomer Michael Brown, has forced scientists to rethink what a “planet” is. In 2006 the International Astronomical Union created a new category of “dwarf planet”, placing Pluto and Ceres alongside some of these new objects, now given names: Eris, Sedna (possibly), Haumea and Makemake. The demotion of Pluto is a reminder of the revisable character of even the most apparently basic or venerable of scientific categories.

Second, beginning in 1995, planets around other ordinary stars have been detected. These “exoplanets” have typically revealed themselves through the tiny wobbles, detectable with fine spectroscopic measurement, induced in the movement of their home stars. By 2008, more than 300 exoplanets had been identified, including one, HD 189733b, with a trace of water vapour in its spectrum. This discovery prompted serious speculation about “biomarkers” to look for in exoplanets.
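To give a rough, textbook-style sense of how tiny these wobbles are (the relation and numbers below are standard estimates, not figures from the post): a planet of mass $m_p$ on a circular orbit of period $P$ tugs its star of mass $M_*$ back and forth with a radial-velocity semi-amplitude

$$ K = \left(\frac{2\pi G}{P}\right)^{1/3} \frac{m_p \sin i}{\left(M_* + m_p\right)^{2/3}}, $$

where $i$ is the orbital inclination ($\sin i = 1$ for an edge-on orbit). For a Jupiter-mass planet in a Jupiter-like orbit around a Sun-like star this is only about 12 metres per second, a Doppler shift $\Delta\lambda/\lambda = K/c$ of a few parts in $10^{8}$ – hence the need for such fine spectroscopic measurement.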