Immunology with numbers

Ramblings of a numerate immunologist

How do we cope with so many potential antigen-specific receptors ?

By regfbec, on 26 October 2014

Welcome back to Immunology with Numbers and apologies for a long break. Struggling to interpret high throughput T cell receptor sequence data from antigen-stimulated repertoires prompted the following thoughts. As always, I would love to receive comments and especially criticism. Also, do take a moment to have a look at our new Chain/Noursadeghi web site.

The vertebrate immune system uses a combination of somatic gene rearrangements and non-templated junctional editing to produce a very large number of different receptors which bind a wide variety of antigens. Recent T cell receptor and B cell receptor high throughput sequencing studies have allowed some preliminary estimates of the size of the potential diversity. The most thoroughly analysed datasets are those from the T cell receptor β chain. The consensus appears to be that the number of possible different human β chain sequences is in the order of 10^14 or above. The α chain diversity has been studied much less. Nevertheless, our own sequencing, as well as other published studies, suggests that the diversity of the human α chain is of the same order of magnitude. We don't know the rules which govern the pairing of α and β chains, or what restrictions on such pairings exist. But even if we assume that such rules impose a significant restriction, the total number of possible receptors is likely to exceed 10^20. And the overall germline diversity of antibodies is likely to be of the same order of magnitude.
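
To make the pairing arithmetic explicit, here is a back-of-envelope sketch; the "allowed fraction" of α/β pairings is an arbitrary illustrative number, not an estimate from data.

```python
# Toy pairing arithmetic: even a severe restriction on which alpha/beta
# pairs are viable still leaves a vast number of possible receptors.
# The allowed fraction is an arbitrary illustrative value, not an estimate.
beta_diversity = 1e14
alpha_diversity = 1e14
allowed_fraction_of_pairings = 1e-8   # hypothetical restriction

possible_receptors = beta_diversity * alpha_diversity * allowed_fraction_of_pairings
print(f"possible alpha/beta receptors ~ {possible_receptors:.0e}")   # ~1e+20
```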

It is immediately apparent that these large numbers pose some interesting conceptual problems in terms of specificity. If, as an extreme example, only one single receptor (defined by a unique germline sequence for both chains) recognises a single antigen epitope, then since an individual has only around 10^12 lymphocytes, almost all individuals would be missing the relevant receptor from their repertoire and would never recognise the antigen in question. Even if an individual did have such a receptor, it would most likely have been generated only once during recombination. The frequency of individual T cell clones can therefore be extremely low, possibly as low as 10^-9. Despite the relatively high motility of T cells, dendritic cells sample T cells at rates of less than 1000 cells per hour. Each dendritic cell presenting the specified antigen would therefore have to wait many hours before encountering even one relevant T cell. What might be a solution to this paradox arising from the sheer number of possible receptors ? One possibility has been discussed in some depth by Alexandra Walczak and Thierry Mora. As they and others have shown, recombination is a random but non-uniform process. Some receptors are produced at much higher frequencies than others, and indeed some are produced so frequently that they are apparently expressed in all individuals (so-called public clones). In their most recent publication they suggest that the recombination machinery, as well as the germline sequences themselves, has somehow evolved to produce receptors which are favoured by natural selection and are perhaps therefore more useful than others. This idea could be extended to suggest that the commoner receptors are those which have evolved to recognise common pathogen components: a sort of adaptive pattern recognition receptor. Putting aside the difficulty of envisaging a mechanism by which an apparently stochastic process which generates non-germline encoded diversity has "evolved" to favour certain sequences over others, this answer can at best account for only a small proportion of the diverse array of antigens to which the human immune system responds.
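
To put rough numbers on that waiting time, here is a one-line back-of-envelope calculation for a range of clone frequencies (a sketch using the round figures above; the expected wait is simply one over the product of the sampling rate and the clone frequency).

```python
# Expected waiting time for a dendritic cell to encounter one T cell of
# a given clone, assuming it samples T cells at random at ~1000 per hour.
sampling_rate_per_hour = 1000.0

for clone_frequency in (1e-4, 1e-6, 1e-9):
    expected_wait_hours = 1.0 / (sampling_rate_per_hour * clone_frequency)
    print(f"clone frequency {clone_frequency:.0e}: "
          f"expected wait ~ {expected_wait_hours:.0e} hours")
```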

I would like to suggest that the diversity paradox can be resolved in only two inter-related but distinct ways. One approach is to increase receptor cross-reactivity at the expense of specificity. If, for example, a single antigen epitope is recognised by 10^10 different receptors, each generated with a wide range of probabilities, then even from a pool of 10^20 possible receptors each individual will have a high probability of carrying at least a few receptors which recognise any given antigen, and the frequency of available receptors within an individual will in general be high enough that a new immune response can be generated within a few hours. The cross-reactivity of the T cell receptor has been debated for some considerable time. A recent elegant paper from the laboratories of Christopher Garcia and Mark Davis demonstrates clearly that a single T cell receptor can indeed recognise millions, or even thousands of millions, of different peptide targets, although interestingly the targets tend to share conserved "hot-spots" which represent conserved areas of TCR/peptide interaction. Of course, the correlate of high cross-reactivity is low affinity, a well-known property of T cell receptors, which is offset by the extreme sensitivity of the signalling mechanism and by the avidity which arises from displaying many identical receptors on the cell surface. The most common TCRs generated by the recombination machinery are likely to be of lower affinity than rarer, higher affinity TCRs, but they may provide some cover while the higher affinity clones find antigen and expand. Nevertheless, the overall characteristic of this highly diverse TCR repertoire will be low affinity and high cross-reactivity.
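
As a quick sanity check on this resolution, consider the expected number of responding cells per individual under the deliberately over-simplified assumption that each lymphocyte carries a receptor drawn uniformly at random from the possible pool; this is a sketch using the round numbers above, not a realistic model of recombination probabilities.

```python
# Expected number of cells recognising a given epitope, if 1e10 of the
# 1e20 possible receptors are cross-reactive with it and each of ~1e12
# lymphocytes carries a receptor drawn uniformly at random (a deliberate
# over-simplification of the real, highly non-uniform, generation process).
possible_receptors = 1e20
receptors_recognising_epitope = 1e10
lymphocytes_per_individual = 1e12

p_cell_responds = receptors_recognising_epitope / possible_receptors
expected_responders = lymphocytes_per_individual * p_cell_responds
print(f"expected responding cells per individual ~ {expected_responders:.0f}")
# ~100 cells: at least a few specific precursors in essentially every individual
```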

The second approach is provided by somatic hypermutation, the characteristic feature of B cells and antibody. In this case, once again, low precursor frequencies can be mitigated by having early responses mediated by low affinity, cross-reactive clones. This ensures that sufficient numbers of precursor cells exist for any epitope and can be recruited into an ongoing immune response within hours of exposure to an antigen. However, once these clones have been selected and expanded, somatic hypermutation provides a way for the BCR to be gradually fashioned into a receptor of higher and higher affinity by repeated rounds of mutation and selection. Since selection only has to operate on the initial limited pool of low-affinity precursors recruited into the immune response, the enormous initial diversity of the BCR will not preclude the gradual emergence of antibodies with extremely high specificity and very low cross-reactivity.
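
The logic of repeated rounds of mutation and selection is easy to caricature in a few lines; the sketch below is a toy hill-climbing simulation with made-up affinities and mutation effects, intended only to illustrate how selection acting on a small recruited pool gradually raises affinity.

```python
import random

# Toy affinity-maturation sketch: repeated rounds of mutation and selection
# acting on a small pool of recruited low-affinity precursors. All numbers
# (pool size, mutation effects, number of rounds) are illustrative only.
random.seed(1)

def mutate(affinity):
    """Most mutations are neutral or mildly harmful; a few improve affinity."""
    return affinity * random.choice([0.9, 0.95, 1.0, 1.0, 1.2])

clones = [1.0] * 20                      # low-affinity precursors recruited early
for round_number in range(1, 9):
    daughters = [mutate(a) for a in clones for _ in range(5)]   # expansion + mutation
    clones = sorted(daughters, reverse=True)[:20]               # selection
    print(f"round {round_number}: best affinity ~ {clones[0]:.2f}")
```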

In conclusion, the extraordinary hypervariability built into the immune system’s receptor generating system imposes severe constraints on the ability of an individual to recognise anything at all. The evolutionary solution is to combine two sets of lymphocytes, T cells which respond at low affinity with extensive cross-reactivity and B cells which evolve gradually to produce antibody of high affinity and low cross-reactivity.

Novel insights from computational approaches to infection and immunity

By regfbec, on 4 July 2014

The UCL Infection and Immunity Computational Hub was officially launched with a one day meeting entitled "Novel Insights from Computational Approaches to Infection and Immunity", which took place at the Royal Free Hospital, London. The meeting certainly highlighted the diversity of computational approaches which can be used to better understand host/pathogen interactions. But two themes encapsulated the day's presentations for me. The first was the use of genomics, and specifically high throughput sequencing, to map both geographical and evolutionary change (whether of viruses, bacteria, T cells or humans !) over space and time. A second theme was that simplified but intelligently designed mechanistic models (whether of peptide/MHC binding, T cell homeostasis, or cytotoxic T cell killing) could provide real insight into understanding extremely complex biological systems.

The meeting was opened by Hans Stauss, the Director of the new research Institute of Immunity and Transplantation, who emphasized the central role for computational and bioinformatics analysis in maximizing the potential of the new Institute to deliver advances in biomedical research which would have a real impact on patient care. And, on cue, personalized medicine was the theme for the first speaker, Peter Coveney (Chemistry, UCL). He emphasized that large multiscale mechanistic models of biological processes would be key to using patient-specific data to inform clinical care, and hence to deliver truly personalized medicine. He briefly outlined two examples: the processing of HIV polyproteins, and the subsequent loading of individual peptides onto MHC molecules. The molecular dynamics simulations required for the latter were computationally intensive, and needed to be replicated many times in order to have confidence in the outcome. Access to high-performance parallel grid computing, potentially using tens of thousands of cores, was an absolute requirement for such approaches. In a similar vein, Robin Callard (Institute of Child Health) integrated data-driven statistical regression with mechanistic ODE models of T cell homeostasis to probe the recovery of the CD4 cell compartment in HIV-infected children following antiretroviral therapy. Strikingly, delay in treatment led to a more rapid rate of recovery, but a long-term deficit in CD4 T cell counts. The models clearly predicted that earlier intervention led to better long-term reconstitution, calling into question the current clinical practice of delaying anti-retroviral treatment.

Richard Goldstein (Division of Infection and Immunity, UCL, and my co-organiser of the meeting) addressed the question of whether HIV viral sequencing can be used to identify transmission events and hence map the parameters of the HIV epidemic at a global or individual scale. Using a large data set linking viral sequence to clinical information, he developed Bayesian evolutionary models which could be used to infer infectivity values (man-to-man, man-to-woman etc.), and to predict the most probable infection pathways within a specific outbreak. Richard argued that the large data set made the inferred model parameters robust at the population level, although stochasticity and uncertainty limited the predictive accuracy in individual cases. The results suggested that over-reliance on patient-derived information introduced significant systematic bias into estimates of infectivity rates in different patient groups. Vincent Plagnol explored the topical area of expression quantitative trait loci (eQTL), describing improved, more statistically rigorous methods for linking known single nucleotide polymorphisms (SNPs) to heterogeneous levels of specific gene transcription. The strategy offers a powerful new approach for identifying functional gene modules, and thus gaining insight into the mechanisms regulating the host-pathogen interaction.

After lunch, I spoke about computational challenges in the analysis of T cell receptor sequence sets obtained by high throughput sequencing. The enormous diversity (up to 10^14) of possible alpha and beta chains means that even genetically identical individuals will end up with very different sets of receptors. The challenge is to recognize antigen-dependent changes in repertoire against this background of "molecular noise". I discussed the analogous problem of automatically identifying objects in images or sentences in texts, and outlined an approach which deconstructs TCR CDR3 sequences into a series of overlapping amino acid triplets, and then counts the frequency of clusters of similar triplets in immunized or unimmunized mice. The approach, borrowed from the "bag-of-words" family of machine learning algorithms, showed promise in distinguishing repertoires of T cells isolated from mice at different times post-immunization.
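
For the curious, a minimal sketch of the decomposition step is shown below (a toy illustration of the idea, not our actual analysis pipeline, and the sequences are invented): each CDR3 is broken into overlapping amino acid triplets and a repertoire is summarised as a vector of triplet counts; the clustering of similar triplets is a further step not shown here.

```python
from collections import Counter

def triplets(cdr3):
    """Decompose a CDR3 amino acid sequence into overlapping 3-mers."""
    return [cdr3[i:i + 3] for i in range(len(cdr3) - 2)]

def repertoire_profile(cdr3_sequences):
    """Summarise a repertoire as a bag-of-words style count of 3-mers."""
    counts = Counter()
    for seq in cdr3_sequences:
        counts.update(triplets(seq))
    return counts

# Hypothetical CDR3 sequences, purely for illustration
immunised = ["CASSLGQGNTEAFF", "CASSLGGANTEAFF", "CASSPDRGNQPQHF"]
naive     = ["CASSQDWGGYEQYF", "CASRTGELFF", "CASSIRSSYEQYF"]

print(repertoire_profile(immunised).most_common(5))
print(repertoire_profile(naive).most_common(5))
```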

I was followed by Nick Thomson (Sanger Institute and London School of Hygiene and Tropical Medicine). He discussed the application of bacterial genomics to map both the global distribution and the local transmission routes of Shigella and cholera. The enormous current political and media interest in the spread of antibiotic resistance in human bacterial pathogens made this a particularly timely topic. Francois Balloux (Institute of Human Genetics) returned to evolutionary questions, discussing the challenge of reconstructing either human migration patterns or the spread of microbial pathogens from genomic sequence data. Despite showing some beautiful dynamic images modelling the global spread of the human population through the ages, Francois ended with a note of caution, emphasizing that temporal as well as spatial data series were required to make robust inferences about migratory or transmission patterns. A lively discussion between Richard and Francois ensued and was eventually adjourned for continuation at a later date over a drink ! The final UCL talk was given by Alexei Zaikin (Mathematics and Women's Health, UCL), who asked whether intracellular regulatory circuits could give rise to intelligent behaviour. He outlined several examples of such molecular intelligence, including a fascinating genetic implementation of Pavlovian conditioning. He then went on to show how, counter-intuitively, the introduction of "noise" (stochastic variation in the signal) could under certain circumstances improve the reliability of the decision-making process. For those of us who don't like noise, it's worth mentioning that in all cases too much noise utterly destroyed intelligent behaviour !

The final keynote talk was given by Rob de Boer, Director of the Institute for Biodynamics and Biocomplexity, Utrecht University, The Netherlands. His elegant talk addressed the surprising observation that, as HIV infection progresses, the rate of mutational escape decreases, and the apparent benefit of escape in terms of increased viral replication diminishes. These data had led some to the provocative suggestion that cytotoxic T cell immune control of HIV becomes unimportant as infection progresses. Using a simplified but beautiful model of a multi-epitope immune response, Rob showed that the observed changes arose naturally from the fact that the more epitopes are involved in a response, the less important each epitope is individually. Far from implying that CTL responses were unimportant, the breadth of the response, as well as its magnitude, turns out to be key to long-term control of this virus.

The meeting was closed by Judy Breuer, head of the Department of Infection, UCL, who thanked all the speakers and reiterated the commitment of the Division of Infection and Immunity to strengthening research in the computational arena. It was a long but fascinating day, and it reinforced the extraordinary breadth of high quality computational biology already going on at UCL and the LSHTM. The signs are that this area is going to be one of continued growth for some time to come !

The launch meeting of the UCL Infection and Immunity Computational Hub

By regfbec, on 13 June 2014

The launch meeting of the UCL Infection and Immunity Computational Hub will take place at the Atrium, Royal Free Hospital, on Monday June 30th. The motivation behind establishing the Computational Hub is to bring together the considerable strengths in computational immunology and host/pathogen interactions across the UCL campus. We believe that this will facilitate future interactions and recruitment, act as a catalyst for new collaborations and grant proposals, and raise the profile of this cutting edge discipline. Our keynote speaker, Rob de Boer, is the director of the Institute for Biodynamics and Biocomplexity at Utrecht University, and a leading international figure in computational immunology.

A few remaining places are available at the meeting. Please register here.

Quantitative Immunology 2014 Les Houches

By regfbec, on 16 March 2014

Jamie Heather, Katharine Best and I are on our way back from qImmunology, a great meeting dedicated specifically to the small but growing band of people determined to bring numbers into immunology. The meeting took place at L'Ecole de Physique, a remote "retreat" outside the small village of Les Houches, originally established to host summer schools in physics. Set in the most beautiful alpine scenery imaginable, the meeting was small, lively, interactive and genuinely interdisciplinary (including, for example, the experimental physics of skiing; see fig inset).

[Figure: Benny Chain skiing]

I summarise below just a few of my favourite talks: but please feel free to comment, correct and add your own favourites to this page. I apologise to all those I don't mention: there was just too much good science to cover here !

Anton Zilman discussed a mechanistic (mass action) model of interferon signalling. He highlighted the slightly bizarre fact that there are over a dozen different type I interferons which all bind to the same receptor. The suggestion is that alpha and beta interferons have different effects: certainly alpha is used to treat viral hepatitis, while beta is used to slow the progression of multiple sclerosis. Alpha and beta have different receptor affinities, and also interact in a complex way in inducing refractoriness to each other. The model proposes that a molecule of interferon binds either of the two receptor chains, followed by binding to the other, and then induces JAK/STAT phosphorylation and downstream signalling. Even with this simple scenario, the number of possible pathways, and hence of rate parameters, was too large (a common theme throughout the week) and had to be drastically simplified. An interesting property was that, since the agonist can bind either receptor chain, the dose response is modal rather than saturating: at high concentrations two molecules of interferon can bind the two chains independently and therefore block chain association. This also affects the maximum binding that can be achieved.
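
To see why bridging two chains produces a modal rather than saturating dose-response, here is a deliberately crude illustration (my own simplification, not the model presented in the talk): if signalling requires a ternary chain–interferon–chain complex and free ligand occupies each chain independently, the amount of bridged complex rises and then falls with ligand concentration.

```python
import numpy as np

# Crude illustration of a bell-shaped dose-response for a ligand that must
# bridge two receptor chains. K1 and K2 are hypothetical dissociation
# constants; the expression below assumes independent occupancy of each
# chain by free ligand and is an approximation, not a full kinetic model.
K1, K2 = 1.0, 10.0                 # hypothetical affinities (arbitrary units)
L = np.logspace(-3, 4, 400)        # ligand concentration range

bridged = L / ((K1 + L) * (K2 + L))   # relative level of signalling complex

peak = L[np.argmax(bridged)]
print(f"signalling is maximal near L ~ {peak:.2f} (about sqrt(K1*K2) = {np.sqrt(K1*K2):.2f})")
# beyond the peak, adding more ligand reduces, rather than increases, signalling
```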

A recurrent theme running through many presentations was the attempt to capture information about protein structure by looking at "coupling" between amino acids. The idea is that the statistical relationships between pairs (or more complex local patterns) of amino acids along a protein molecule (mostly antibody or TCR, as this was an immunology conference) may give useful information when comparing sets of related proteins (homologues, paralogues, or repertoires of antibodies and TCRs). These studies were motivated either by an attempt to understand how antigen-driven selection bounds the potential receptor repertoire (e.g. Thierry Mora, Alexandra Walczak, Yuval Elhanati) or by the development of better ways of measuring "functional distance" between molecules that go beyond conventional alignments (Olivier Rivoire, Clement Nizak). Maybe these two motivations come to the same thing. Yeast display of antibodies, allowing capture by "avidity", was a particularly nice technological idea for gathering data for these approaches (Rhys Adams).
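
As a concrete toy example of what positional "coupling" might look like, the sketch below computes pairwise mutual information between columns of a small set of aligned sequences; the sequences are invented, and real analyses use far more sophisticated statistics (correcting for phylogeny, finite sampling and so on).

```python
import numpy as np
from collections import Counter
from itertools import combinations

def mutual_information(col_a, col_b):
    """Mutual information (bits) between two alignment columns."""
    n = len(col_a)
    pa, pb = Counter(col_a), Counter(col_b)
    pab = Counter(zip(col_a, col_b))
    mi = 0.0
    for (a, b), count in pab.items():
        p_joint = count / n
        mi += p_joint * np.log2(p_joint / ((pa[a] / n) * (pb[b] / n)))
    return mi

# Invented aligned sequences: position 3 (S/R) co-varies with position 8 (F/Y)
seqs = ["CASSLAPGF", "CASSLSPGF", "CATSLAPGF", "CATSLSPGF",
        "CASRLAPGY", "CASRLSPGY", "CATRLAPGY", "CATRLSPGY"]
columns = list(zip(*seqs))
for i, j in combinations(range(len(columns)), 2):
    mi = mutual_information(columns[i], columns[j])
    if mi > 1e-9:
        print(f"positions {i}-{j}: MI = {mi:.2f} bits")
```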

Co-operative behaviour between T cells was the focus of Gregoire Altan-Bonnet's talk. He described an in vitro model in which two T cells of different specificity could interact and "help" each other. For example, a T cell with low affinity for its peptide (affinity turned out to be a key property), which therefore produced little IL2 and proliferated little, could be "helped" by coculture with cells responding to a high affinity peptide. The molecular "helper pathway" was suggested to act via PI3 kinase, which could be stimulated either via the TCR or via IL2 receptor signalling. The signalling pathway was modelled (pathway modelling was another common theme) to include negative and positive feedback loops on the IL2 receptor alpha chain. Tregs could interrupt this IL2 cycle by grabbing and depleting limiting amounts of IL2.

Two novel theoretical frameworks particularly interested me. The first was presented by Vassili Soumelis and related to gene transcription data. The focus was on trying to capture the interaction between two stimuli acting on the same cell: for example, TLR agonists and cytokines on plasmacytoid dendritic cells or monocytes. The idea was to classify each gene according to the pattern of response to each signal alone, and to the two signals together. For example, stimulus A induced upregulation; stimulus B induced upregulation; but A and B together abrogated upregulation. All measured genes could then be classified semi-automatically according to 12(?) canonical patterns (a simplification from an original palette of 82). In general, responses showed multimodality: different groups of genes within the same cell showed different patterns. And, intriguingly, the ontology of the different sets of genes seemed to point to different pathways. This raised the interesting possibility that different cellular functions were being switched on in response to complex two-signal stimuli via different modalities. The approach seems to give insight into how cells might integrate complex mixtures of environmental signals.
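
A toy version of the classification step might look like the sketch below; the thresholds, category labels and fold-changes are all mine, purely for illustration, and the actual study used a richer, semi-automated scheme.

```python
# Toy classification of genes by their qualitative response to stimulus A,
# stimulus B, and A+B together. Thresholds and fold-changes are hypothetical.

def call(fold_change, threshold=2.0):
    """Collapse a fold-change into 'up', 'down' or 'flat'."""
    if fold_change >= threshold:
        return "up"
    if fold_change <= 1.0 / threshold:
        return "down"
    return "flat"

def response_pattern(fc_a, fc_b, fc_ab):
    return (call(fc_a), call(fc_b), call(fc_ab))

genes = {
    "gene1": (4.0, 3.0, 0.9),   # up with either stimulus alone, abrogated together
    "gene2": (1.1, 2.5, 6.0),   # driven by B, amplified by A
    "gene3": (0.3, 0.4, 0.2),   # down-regulated throughout
}
for name, (a, b, ab) in genes.items():
    print(name, response_pattern(a, b, ab))
```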

The second was the modelling work on CD8 differentiation presented by Thomas Hoefer. The novelty here lay in the analysis of the stochastic behaviour of single cells. He showed how higher moments of observed single cell data (covariances, variances etc.) could be used to enrich the dimensionality of the experimental measurements. Computationally, this could be implemented via the well-known (to physicists, apparently, although not to me) relationship between the higher moments of distributions and the derivatives of master equation generating functions. Remarkably, this approach allowed excellent multiparameter inference and confidence interval estimation based on data from a single in vivo time point. Methods for extracting useful information from single cell experiments are likely to have broad application, as improved technologies for single cell tracking were a dominant feature of the meeting, ranging from sophisticated in vitro microfluidic/image analysis approaches (Michal Polonsky, Ira Zaretsky, Clement Nizak) to molecular barcoding in vivo (Leila Perie).

The last evening provided two of the highlights of the week. Rob de Boer provocatively set out to prove that the death rate of productively HIV-infected CD4 T cells was independent of cytotoxic T cell killing. He addressed the apparent paradox that viral set point and infected cell death rate are apparently unrelated. He developed an ODE model with the novel feature of incorporating many coexisting cytotoxic T cell populations, each with specificity for a different HIV epitope. Computationally, this was achieved by incorporating a resource-competition term in which an increased viral load was needed to maintain an increased T cell immune response. An emergent property of this model was that more epitopes could lead to a stronger immune response, but the contribution of each individual epitope became less significant. His elegant model beautifully predicted the altered rates of viral escape during HIV progression, and the paradoxical observation that escape gave only a minor growth advantage during the metastable phase of disease.
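
The flavour of such a model can be caricatured in a few lines of Python (an illustrative toy, not the published model: the functional forms and parameters are my own, and the only point is that adding epitopes shares out, rather than multiplies, the total response).

```python
import numpy as np
from scipy.integrate import odeint

# Toy multi-epitope model: virus V grows logistically and is killed by n CTL
# clones E_i; clones compete for antigen, so a larger total response needs a
# higher viral load to be maintained. All parameters are illustrative only.
r, Vmax, k = 1.5, 10.0, 1.0     # viral growth, carrying capacity, killing rate
p, h, d = 2.0, 1.0, 0.5         # CTL proliferation, half-saturation, death

def model(y, t, n):
    V, E = y[0], y[1:]
    E_total = E.sum()
    dV = r * V * (1.0 - V / Vmax) - k * V * E_total
    dE = p * E * V / (h + V + E_total) - d * E   # competition for antigen
    return np.concatenate(([dV], dE))

t = np.linspace(0.0, 500.0, 5000)
for n in (1, 5, 20):
    y0 = np.concatenate(([1.0], np.full(n, 0.01)))
    sol = odeint(model, y0, t, args=(n,))
    V_end, E_end = sol[-1, 0], sol[-1, 1:]
    print(f"{n:2d} epitopes: V ~ {V_end:.2f}, per-clone E ~ {E_end.mean():.3f}, "
          f"total E ~ {E_end.sum():.2f}")
```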

Paul Thomas finished by discussing his remarkably rich data set from single cell CD8 TCR repertoire analysis. Focusing predominantly on mouse flu models, he showed data which counted the total number of epitope-specific T cells (MHC multimer sorted) in a whole mouse (literally !!) before and after immunisation. The ultimate quantitative immunology ! In this model, every naïve T cell carried a unique TCR clonotype. He ended with some data on "TCR revision", the phenomenon of persistent TCR recombination in mature T cells, which left us all wondering whether the thymic selection paradigms would have to be rewritten !

I hope I haven’t butchered these presentations and ideas too much : but if I have got it wrong, apologies and do please correct me using the comments tab. And if you would like to receive notifications of new additions to this blog, please sign up with your email address as shown. And of course, forward the link to anyone you think may be interested !

How many T cells should I use ?

By regfbec, on 19 January 2014

If you would like to receive notifications of new posts, please SUBSCRIBE (see form to right of this blog).

Welcome to this new blog where I propose to write occasional pieces related to immunology in general, and quantitative immunology in particular. I hope you will feel free to post REPLIES, COMMENTS and ESPECIALLY CRITICISMS. The blog is meant to be an informal place to share ideas, and perhaps results, which are not yet sufficiently well formed or complete to merit publication through the usual peer-reviewed channels. I therefore rely on my readership, if they exist, to correct errors which will undoubtedly creep in !

I would like to start with a few thoughts on cell numbers. In conversation with Yaron Antebi, then a postdoc in Nir Friedman's lab at the Weizmann Institute (http://www.weizmann.ac.il/immunology/NirFriedman/), where I was spending a sabbatical a few years ago (and where I am again now, writing this), we agreed that the immunology literature, though vast, is remarkably number-free. We needed a site like those collecting the physical constants of the universe (see for example http://physics.nist.gov/cuu/Constants/index.html), but for immunologists. Perhaps there are no immunological constants and the quest for a quantitative immunology is a mirage. But for the moment, let us consider the question of cell numbers.

One of the basic experimental protocols of a cellular immunologist is to take cells from blood, spleen, lymph node or some other organ and culture these cells in vitro, in the presence of an appropriate stimulus such as an antigen, an antibody or a cytokine, and measure some response of the cells: proliferation, cytokine production, change in surface phenotype etc. But how many cells should one culture, at what density, and how might these numbers relate to the situation one might observe in lymphoid tissue, for example? In what follows, I focus on T cells.

Of course, lymph nodes vary widely in size. But as an approximation, lymph nodes from an immunised mouse typically have a diameter in the order of 1 mm, equivalent to a volume of about 0.5 µl. The number of T cells in one such lymph node is in the order of 5×10^5. So the density, or concentration, of T cells in a lymph node is very high: in the order of 10^9 per ml! Such concentrations are never even remotely achieved in conventional in vitro culture. However, consider now the concentration of antigen-specific cells that might be found in vivo. Precursor frequencies for an antigen not previously encountered (as measured so elegantly by Marc Jenkins, for example: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3334329/) are probably in the order of 10^-6 (1 cell in a million). Following immunisation, these frequencies rise dramatically, typically reaching 10^-4 for CD4 T cells, and as high as 10^-3, or even 10^-2, for CD8 cells.

Now the numbers make a lot more sense ! The concentration of naïve T cells in a lymph node specific for a particular antigen is approximately 10^3/ml (= 10^9/10^6). Considered in relation to in vitro experiments, these are very low concentrations, and cytokine production in the supernatant at these cell numbers is negligible. But if an antigen-specific response comprises 0.1% of the T cell population (a high, but not unrealistic, number for the peak of a response), the concentration of such antigen-specific cells in the lymph node rises to 10^6/ml (= 10^9/10^3). In the context of in vitro experiments, this is usually towards the top end of the experimental range, and stimulation of cells at this concentration can result in high levels of cytokines in the culture supernatant.
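
For completeness, the back-of-envelope arithmetic above fits in a few lines (all the inputs are the rounded estimates quoted in the text).

```python
import math

# Back-of-envelope arithmetic using the rounded estimates quoted above.
diameter_mm = 1.0
volume_ul = (4.0 / 3.0) * math.pi * (diameter_mm / 2.0) ** 3    # ~0.52 µl
t_cells_per_node = 5e5
t_cell_conc_per_ml = t_cells_per_node / (volume_ul * 1e-3)      # µl -> ml

naive_frequency = 1e-6     # precursor frequency before immunisation
peak_frequency = 1e-3      # high but realistic frequency at the peak

print(f"lymph node volume          ~ {volume_ul:.2f} µl")
print(f"total T cell concentration ~ {t_cell_conc_per_ml:.1e} per ml")
print(f"antigen-specific (naive)   ~ {t_cell_conc_per_ml * naive_frequency:.1e} per ml")
print(f"antigen-specific (peak)    ~ {t_cell_conc_per_ml * peak_frequency:.1e} per ml")
```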

So it turns out that the range of typical in vitro cell concentrations (10^4-10^6/ml) corresponds rather nicely to the range of cell concentrations which might occur in vivo in a lymph node. These calculations are based on mouse immunology. But since cell size does not scale significantly between organisms, the number of cells in a lymph node will probably scale with the volume of the lymph node. So similar considerations probably hold in a human lymph node as well.

The relationship between these cell numbers and cytokine concentrations in a lymph node is a bit more difficult to establish. Diffusion over a range of 1 mm is going to be very fast, and probably not limiting. Cytokines are remarkably stable molecules, and spontaneous breakdown is unlikely to be significant either. Lymphatic flow will, of course, dramatically increase the effective volume in which a cytokine is diluted. A recent attempt to quantify lymph flow in mice using fluorescent imaging (http://ajpheart.physiology.org/content/302/2/H391) measured a rate of approximately 3 µl/min, suggesting that the whole lymph node fluid volume is turned over 3 times per minute ! So cytokines produced under these conditions will rarely be able to accumulate to significant concentrations. However, after an immune response lymph flow is rapidly decreased, potentially allowing cytokine concentrations within a lymph node to rise to active levels. An additional complexity is the possibility of high local concentrations at the site of cytokine secretion within the immunological synapse, although the extent to which the synapse structure prevents cytokine diffusion remains debatable. It seems not unreasonable that, at the peak of a response, some cytokine levels may rise above their activity threshold globally throughout a lymph node. This could have important implications in terms of bystander activation, antigen linkage and cellular cooperation.
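
To get a feel for the effect of flow, one can do a crude steady-state estimate: in a well-mixed volume flushed by lymph flow Q, the concentration of a cytokine secreted at total rate P settles at roughly P/Q. The secretion numbers below are invented purely for illustration; only the 3 µl/min flow figure comes from the study cited above.

```python
# Crude steady-state estimate: concentration ~ secretion rate / lymph flow.
# Secretion rate per cell and number of secreting cells are hypothetical.
avogadro = 6.022e23

secreting_cells = 500.0            # e.g. ~0.1% of 5e5 T cells (hypothetical)
molecules_per_cell_per_s = 10.0    # hypothetical secretion rate
flow_ul_per_min = 3.0              # lymph flow reported in the cited study

production_mol_per_s = secreting_cells * molecules_per_cell_per_s / avogadro
flow_litres_per_s = flow_ul_per_min * 1e-6 / 60.0
steady_state_molar = production_mol_per_s / flow_litres_per_s

print(f"steady-state cytokine concentration ~ {steady_state_molar * 1e12:.2f} pM")
# with these illustrative numbers, the concentration stays in the sub-picomolar range
```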

To conclude, a rough and ready estimate of cell numbers and tissue volumes suggests that total cell concentrations in lymphoid tissue are extremely high. It is not surprising that polyclonal activation of cells under these conditions results in pathological cytokine storms. And, reassuringly, culturing cells at concentrations similar to those of naïve precursors results in little or no measurable cytokine release. On the other hand, stimulating cells in culture at concentrations which mirror those which exist in vivo after immunisation results in biologically active levels of cytokines.

Finally, and somewhat paradoxically, in vitro experiments in which cells respond to polyclonal activation (e.g. via anti-TCR antibodies), or experiments using monoclonal TCR transgenic T cells all responding to the same epitope, capture rather well the range of cell concentrations likely to exist in lymphoid tissue in vivo. But more "realistic" in vitro models looking at antigen-specific responses within polyclonal T cell populations (classical recall antigen responses measured in human PBMC, for example) will enormously underestimate the real antigen-specific T cell concentrations found in vivo. It seems that studying antigen-specific responses in vitro will require more sophisticated models such as artificial lymph nodes or organ cultures. Or, perhaps, computational models, in which raising concentrations to 10^9 is no problem at all.