science 2008-2009: 12: bewildering complexity – and reverse engineering the human
By Jon Agar, on 28 September 2009
Molecular biology has been accused of offering a reductive science of life. The old dogma – DNA is transcribed as RNA, RNA is translated into proteins – seems simple enough. But the discoveries, in the 1990s and 2000s, of many different kinds of snippets of nucleic acids, performing all sorts of functions in the cell, paint a picture of bewildering complexity.
The first reports of non-protein-coding RNA molecules were rejected as artefacts, since such RNA was expected to be rapidly broken down. But genomics techniques are now so fast and sensitive that the signatures of these non-coding RNAs (ncRNAs) point to hosts of these strands of nucleic acid being real rather than artefacts. (Notice how much what counts as “really there” in the cell depends on the state of techniques!)
The trend towards large-scale bioinformatics, combined with fine-scale biochemical investigation, is what led to the analysis of this complexity. In particular, cancer programmes, deploying these techniques, have focussed on gene regulation, and many types of small RNA molecule have been discovered in the course of such research.
Micro RNAs (miRNAs) are really small – roughly 23 nucleotides long – and by either breaking down messenger RNA, or interfering with how messenger RNA is translated, help fine-tune the expression of proteins. Discovered in the 1990s, miRNAs were being developed into drug therapies in the 2000s, an example being GlaxoSmithKline’s deal with Regulus Therapeutics in 2008.
But in general, pharmaceutical companies had hoped that the great genomics boom of the 1990s and early 2000s would lead to lots of promising drug targets. What they got instead was complexity. As Alison Abbott reported in Nature in 2008:
“the more that geneticists and molecular biologists have discovered, the more complicated most diseases have become. As individual genes have fallen out of favour, “systems” – multitudes of genes, proteins and other molecules interacting in an almost infinite number of ways – have come into vogue.”
The great hope of the late 2000s was this new science, “systems biology”, which tackled the complexity of the cell in ways inspired by how an electrical engineer analyses a complex, black-boxed piece of electronic kit. Electrical engineers might record the responses to inputs of many different frequencies, and then make deductions about the wiring inside. Likewise, for example, systems biologists, using new technologies (microfluidics to vary osmotic pressure, fluorescent cell reporters to track changes), subjected yeast cells to varying conditions, and then made deductions about biochemical pathways. Nature called it ‘reverse engineering the cell’. It is computationally very demanding. Some of the modelling ‘requires a scale of computing effort analogous to that required to predict weather and understand global warming’. There are even calls in Japan for a three-decade programme to ‘create a virtual representation of the physiology of the entire human’. Model and deduce every biochemical pathway and all its variants – reverse engineer the human body. The cost in computing bills alone is eye-watering.
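The engineer’s trick described above – probe a black box at many frequencies, then infer what is inside – can be sketched in a few lines of code. This is a toy system-identification exercise, not any real biochemical model: the hypothetical “black box” is a first-order low-pass circuit with a hidden time constant, and the “experimenter” recovers that constant purely from input–output measurements.

```python
import math

# Hypothetical black box: a first-order low-pass system whose
# time constant tau is the hidden "wiring" we want to recover.
TAU_TRUE = 0.5  # seconds; hidden from the experimenter

def probe_gain(freq_hz):
    """Measured output/input amplitude ratio at a given input frequency.
    For a first-order system: gain = 1 / sqrt(1 + (2*pi*f*tau)^2)."""
    w = 2 * math.pi * freq_hz
    return 1.0 / math.sqrt(1.0 + (w * TAU_TRUE) ** 2)

# The experimenter records responses at several input frequencies...
freqs = [0.1, 0.3, 1.0, 3.0, 10.0]
gains = [probe_gain(f) for f in freqs]

# ...then deduces tau from each measurement by inverting the gain formula:
#   tau = sqrt(1/gain^2 - 1) / (2*pi*f)
estimates = [math.sqrt(1.0 / g ** 2 - 1.0) / (2 * math.pi * f)
             for f, g in zip(freqs, gains)]
tau_estimate = sum(estimates) / len(estimates)
print(f"recovered time constant: {tau_estimate:.3f} s")
```

The systems biologist’s problem is this, scaled up enormously: the “inputs” are varied cell conditions, the “gains” are reporter readouts, and the hidden parameters number in the thousands rather than one – hence the weather-forecasting scale of computation.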
Systems biology received plenty of hype and attention as a potential ‘saviour of the failing research and development pipelines’ of a ‘crisis-ridden pharmaceutical industry’, as Abbott puts it. Giant companies such as Pfizer, AstraZeneca, Merck and Roche have all bought into the project, albeit in a suck-it-and-see manner. Genomics of the first kind – the human genome project kind – has not delivered new drugs. The companies hope that systems biology might. Either way, new medicine will not come cheap.