Preface
Understanding the roles and regulation of biological macromolecules in controlling cellular function is a classic tenet of biochemistry. Strategies to characterise the amount, stability and modification status of proteins, the macromolecular ‘workhorses’ of the cell, are of interest both to enhance our basic understanding of how cells function and communicate and also as a marker of when cellular processes have gone awry. The advent of mass spectrometry (MS)-based proteomics in the 1990s enabled protein characterisation on a much larger scale than had previously been possible, with studies primarily focussed on qualitative investigations such as identifying protein binding partners and defining gross changes in cellular composition. The rapid development in more recent years of high-specification instrumentation, both for the separation and the analysis of polypeptides, has facilitated significant improvements in the rate of acquisition and depth of coverage of the proteome. Yet any undergraduate biochemist will be able to tell you that the protein composition of a cell is not static, but changes both as a function of cell cycle and age and, critically, in response to extracellular stimuli; in this manner cells can take advantage of nutrients and protect themselves from unwanted stresses. Taking a snapshot of the protein composition of a system is therefore not nearly as important as being able to monitor quantitative changes in the proteome of that system over time or under variant conditions. However, robust methods for these types of in-depth quantitative experiments have lagged behind, and increasingly sophisticated strategies have had to be developed in order to assess these quantitative changes with the required degree of accuracy and precision. The subtleties of the biological changes can sometimes be masked by the quantitative strategy employed; the techniques used therefore need to be selected carefully depending on the question being addressed.
All our current strategies for quantitative proteomics rely on using peptide surrogates as a read-out for protein amount, whether by using isotope-labelled reference peptide standards or by employing label-free strategies that use peptide ion sampling or relative signal intensity as a measure of abundance. These strategies assume that, if we can determine peptide quantity, these numbers will function as a reliable read-out for protein amount. However, while it is currently significantly easier to quantify peptides and there are good reasons to make these assumptions, such inferences do not necessarily hold true when we start to consider changes in the levels of the key peptide surrogates due to post-translational modification. In addition to the oft-considered modification of proteins by the covalent attachment of functional groups, such as phosphate or glycans, which are known to affect protein function and/or stability (arguably the true feature of interest), the activity of native cellular proteases is often altered under disease conditions (e.g. cardiovascular disease) and will generate peptides with undefined termini that can then no longer be utilised as quantification standards. Point mutations and single-nucleotide polymorphisms, either of which may affect protein function, may also inadvertently influence such quantification strategies. It is likely, therefore, that in the long term we will move towards protein-level quantification, measuring changes in the amounts of the protein workhorses themselves rather than using a select few of their constituent peptides. However, such quantitative analysis is currently not feasible at the level required: much work will be needed to enable quantification of the proteome directly, and the current peptide-based strategies are likely to continue generating significant amounts of high-quality data for the foreseeable future. These data will continue to be used to enhance what we know about biological systems and their regulation, and they are increasingly used in place of immunochemical assays to screen for biomarkers of disease.
Quantitative Proteomics aims to outline the state of the art in mass spectrometry-based quantitative proteomics, describing recent advances and current limitations in the instrumentation used (Chapters 1 and 2), together with the various methods employed for generating high-quality data. Strategies describing how stable isotope labelling can be applied for either relative or absolute protein quantification are detailed (Chapters 3–5), as are methods for performing quantitative analysis of proteins in a label-free manner (Chapters 6–8). The utility of these strategies for understanding cellular protein dynamics is then exemplified with chapters looking at spatial proteomics (Chapter 9), dynamics of protein function as determined by quantifying changes in protein post-translational modification (Chapters 10 and 11) and protein turnover (Chapter 12). Finally, a key application of these techniques to biomarker discovery and validation is presented (Chapters 13 and 14), together with the rapidly developing area of quantitative analysis of protein-based foodstuffs (Chapter 15).
While there will undoubtedly be continued development in the area of quantitative proteomics for many years to come, particularly in the algorithms used for the analysis of the data, the key principles underlying sample preparation, data acquisition and the use of stable isotopes as quantification standards are now firmly established. As the number and range of applications to which MS-based quantitative strategies are applied expand, so new technology will be developed to meet their growing needs. Ultimately, this will further our understanding of the ever-changing proteomes of complex biological systems.
Claire E. Eyers
University of Liverpool