
The on-going integration of systems biology functionalities into all aspects of pharmacology and toxicology has resulted in a more network-based focus, which continues to enhance the understanding of therapeutic efficacy and adverse events at both the early and late stages of research and development. These advances have been coupled with the public availability of large datasets and new modeling approaches that have enhanced the ability to understand toxicological events and effects at multiple biological levels. Systems toxicology approaches are also being used in the safer design of chemicals and the identification of safer alternatives, both major parts of global green chemistry initiatives. In environmental toxicology, a major advance associated with these new efforts has been the establishment of the adverse outcome pathway concept and the modeling approaches used to identify hazards and define risk assessments for the large number of environmental chemicals, most of which have very few supporting data.

The science and practice of toxicology have been changing dramatically over the last 15–20 years, transitioning into a more systems biology and network-based approach.1–4 Several factors have been involved, including the developing genomics era, where the understanding of genetic changes has enhanced the ability to understand diseases and chemically-induced toxicities at the molecular level. The genomics era has also ushered in “omics” technologies and approaches such as transcriptomics, metabolomics, proteomics, and epigenomics, which have changed the way we view mechanisms of toxicity and the perturbation of biological systems that lead to adverse outcomes.5 These advances have been coupled with the public availability of large datasets and new modeling approaches that have enhanced the ability to understand toxicological events and effects at multiple biological levels.6 Because our scientific approaches, inquiries, and visions aimed at understanding toxicological events and outcomes have broadened tremendously, the need for new and better ways to assess toxicity and risk is reinforced. The large number of uncharacterized chemicals already present in the environment, and the new chemicals that continue to enter it, have required hazard and risk assessments to be made with very few data. These factors have had a major influence on the need to accelerate new approaches, move away from an overdependence on in vivo animal testing, and make better use of computational, molecular, and in vitro tools.6,7 The identification of the majority of toxic events in in vivo animal toxicology studies relies on high-dose exposure of the animals and default linear extrapolation procedures,8 with the incorporation of newer technologies absent from the vast majority of animal studies. This has been considered a shortcoming in risk assessment; weaknesses in this process include uncertainty about the shape of the dose–response relationship at relevant levels of human exposure, whether biological and/or toxicological thresholds do in fact exist and for which toxicological endpoints, and potential population variability in response.5

Accordingly, research in toxicology has moved into a new systems-oriented phase called systems toxicology, which involves the study of complex molecular response networks initiated by exposure (both intentional and unintentional) to chemical substances. At the heart of systems toxicology approaches are the development and use of quantitative mechanistic models that create a predictive toxicology aspect relevant to all toxicology fields, including drug research and development and environmental research. The overall approach involves the integration of classical toxicology with the quantitative analysis of large networks of chemically-induced molecular and functional changes, which occur across multiple levels of biological organization.5 Examples of key influential events in this transition since the year 2000 include the release of human genome sequencing data, including specific signal transduction domains; the development and issuance of the report Toxicity Testing in the Twenty-first Century by the National Research Council (NRC),9 which has influenced all sectors of the toxicology field; and the development and publication of the adverse outcome pathway (AOP) approach,6,10,11 which has highlighted the realities that exist as the science moves away from an overdependence on in vivo testing and makes greater use of computational, molecular, and focused in vitro tools. Additional drivers of change include the European Union (EU) report from the Scientific Committee on Health and Environmental Risks, the EU’s Registration, Evaluation, Authorisation and Restriction of Chemical Substances (REACH) program, and the International Programme on Chemical Safety (IPCS).7,12 The paradigm shift can also be seen in the drug research and development sector, but rather than focusing on drugs during late stages of development or on marketed drugs, the systems-related efforts are positioned at the front end of research, on both safer chemical design and extensive target research. While the drug industry is required by regulatory agencies and international guidelines to conduct animal toxicology studies, the major effort underway is to determine chemical liabilities early in the drug discovery pipeline, both to reduce the time and cost of failures later in the process and to avoid costly failures once a drug reaches the market.5 Currently, there is an International Consortium for Innovation and Quality in Pharmaceutical Development (IQ), in which several pharmaceutical and biotechnology companies have created a Nonclinical to Clinical Translational Database (WG1) to allow analysis of the reliability and potential limitations of nonclinical data in predicting clinical outcomes, including the evaluation of conventional biomarkers of toxicity.13 Current screening approaches applied to the front end of drug research are described below.

The science and practice of toxicology over the past several decades have consistently used classic toxicological approaches, such as in vivo and in vitro toxicology studies, combined with predictive toxicological methodologies. The desired endpoint of the in vivo animal research efforts has been the determination of a toxic dose at which a chemical could be shown to induce pathologic effects after a specified duration of treatment or exposure. Where appropriate, these studies have included estimates of the lowest observed adverse effect level, the no observed adverse effect level, and the maximally tolerated dose (MTD).5,14 These adverse effect level estimates are traditionally used in drug research and development to predict the first dose in humans and to estimate margins of safety based on delivered dose and/or internal exposure from pharmacokinetic/pharmacodynamic (PK/PD) modeling, with extrapolations into clinical trial subjects. By regulatory requirement, all potential drugs undergoing research and development will undergo both in vitro and in vivo studies and, if the compound successfully reaches the clinical trial stage, will generate data from human exposure with which to judge the adequacy of nonclinical data in predicting clinical outcomes. Uncertainties in these estimates include the definition of adverse, which is specific for each organ system in each study and typically determined by the study pathologist; the accuracy of cross-species extrapolations (particularly rodent-to-human); and the true definition of risk–benefit for each individual drug. However, the generation of classical toxicology data does not assure the accurate prediction of potential human toxicity. Sundqvist and colleagues15 have reported on a human dose prediction process, supplemented by case studies, that integrates uncertainties into simplified plots for quantification. Drug safety is recognized as one of the primary causes of attrition during the clinical phases of development; however, in numerous instances the actual determination of serious adverse effects occurs only after the drug reaches the market. In the United States, ∼2 million patients are affected by drug-mediated adverse effects per year, of which ∼5% are fatal.16 This places drug toxicity among the top five causes of death in the United States, and the costs to the health care system worldwide are estimated at US$40–50 billion per year.16 In drug development there are always risk–benefit considerations, which weigh any potential toxicity against the benefit expected to be gained by a patient taking the drug. An example of the uncertainty of these estimates can be seen in the methods used for carcinogenicity testing and evaluation for drug approval. The design of these studies relies on high-dose exposure of animals and default linear extrapolation procedures, while little consideration is given to many of the new advances in the toxicological sciences.17 Carcinogenicity studies are typically 2-year studies in rodents conducted with three dosage groups (low, mid, and high dose) and one or two concurrent control groups. Dose levels are established from previous studies, such as 13-week toxicity studies, in which an MTD has been estimated.
Each group in the carcinogenicity study has 60–70 animals of each sex, and the assessment of potential carcinogenicity concern is based on an analysis of each tumor in each tissue or organ system individually by sex; certain tumors are combined via standardized procedures for statistical analysis. The analysis uses the historical database from the laboratory where the studies are conducted to determine whether each tumor is considered common or rare, using a background incidence of 1% as the standard: common tumors are those with a background incidence of 1% or more, and rare tumors are those with a background incidence below 1%. In the statistical analysis, p-values for rare and common tumors are evaluated for pair-wise significance at 0.05 (for rare) and 0.01 (for common), as sketched below. The rare vs. common classification rests on an arbitrary tumor threshold, and adjustments to the classification of individual tumors, which can occur from laboratory to laboratory and across analyses of different control groups, can have consequences for the overall tumor evaluation outcome.8 Applying a “weight of evidence” approach in the evaluation procedures, particularly during regulatory review, attempts to alleviate some of the uncertainties; however, after more than 50 years of on-going experience, these studies still fail to bring the 21st century mindset to carcinogenicity testing. The classic toxicological process for drug development assumes that a chemical interacts with higher affinity with a single macromolecule (the toxicological target), and therefore a single biological pathway may be perturbed at the initial target modulation. This would be followed by downstream activation of secondary and possibly tertiary pathways that result in the tissue or organ effect indicated by key biomarkers.2 In this concept, the magnitude of toxicological effect is related to the concentration of altered molecular targets (at the site of interest), which in turn is related to the concentration of the active form of the chemical (parent compound or metabolite) at the site where the molecular targets are located. Also included in this concept is the unique susceptibility of the organism exposed to the compound.
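As a minimal sketch of the rare-versus-common decision rule described above, the classification and pair-wise test logic can be written as follows. The tumor counts are hypothetical, and the one-sided Fisher exact test is an illustrative choice; the text specifies only the 1% background threshold and the 0.05/0.01 significance levels.

```python
from scipy.stats import fisher_exact

def classify_tumor(background_incidence: float) -> str:
    """Tumors with >= 1% historical background incidence are 'common'; below that, 'rare'."""
    return "common" if background_incidence >= 0.01 else "rare"

def pairwise_flag(treated_with_tumor, treated_n, control_with_tumor, control_n,
                  background_incidence):
    """One-sided test of treated vs. concurrent control incidence, judged against the
    significance level used for that tumor class (0.05 for rare, 0.01 for common)."""
    tumor_class = classify_tumor(background_incidence)
    alpha = 0.05 if tumor_class == "rare" else 0.01
    table = [[treated_with_tumor, treated_n - treated_with_tumor],
             [control_with_tumor, control_n - control_with_tumor]]
    _, p = fisher_exact(table, alternative="greater")
    return tumor_class, p, p < alpha

# Hypothetical example: 8/60 treated vs. 1/60 control animals bear a tumor
# whose historical background incidence is 0.5% (i.e. a rare tumor).
print(pairwise_flag(8, 60, 1, 60, 0.005))
```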

Predictive toxicology efforts in drug research and development involve the use of multiple sources of legacy data, including data generated by chemical and pharmaceutical companies and data submitted to regulatory agencies. These efforts have led to the “data warehouse” model, which includes data generated through high-throughput and targeted screening and through in vitro and in vivo toxicology studies on thousands of compounds and structural analogues. In a majority of cases these data also include findings from clinical trials in which an experimental drug was tested in humans.

The information is applied in a “backward” fashion to predict potential findings where data do not yet exist or where decisions are being made on new potential drug candidates. Bowes and colleagues18 have described a pharmacological profiling effort by four large pharmaceutical companies: AstraZeneca, GlaxoSmithKline, Novartis, and Pfizer. The companies suggest that ∼75% of adverse drug reactions can be predicted by studying the pharmacological profiles of candidate drugs. The pharmacological screening identifies primary effects related to the intended action of the candidate drug, whereas the identification of secondary effects due to interactions with targets other than the primary (intended) target can be related to off-target adverse events. The groups have identified 44 screening targets, including 24 G-protein coupled receptors, eight ion channels, six intracellular enzymes, three neurotransmitter transporters, two nuclear receptors, and one kinase. These types of screening data are used in the data warehouse model, typically configured in a proprietary fashion within each company. Other collaborative efforts have been developed, and data from these sources would also be incorporated.
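A minimal sketch of how such panel screening data might be organized and flagged is shown below. The panel targets, the candidate's IC50 values, and the 10 µM potency cutoff are hypothetical illustrations, not values from the profiling effort of ref. 18.

```python
# Hypothetical secondary-pharmacology panel: flag off-target activities for a candidate.
PANEL = {"hERG": "ion channel", "5-HT2B": "GPCR", "D2": "GPCR",
         "PDE3A": "intracellular enzyme", "SERT": "neurotransmitter transporter"}

def off_target_alerts(ic50_uM: dict, primary_target: str, cutoff_uM: float = 10.0):
    """Return panel targets (other than the intended one) hit below the potency cutoff."""
    return {target: ic50 for target, ic50 in ic50_uM.items()
            if target != primary_target and target in PANEL and ic50 <= cutoff_uM}

candidate = {"D2": 0.02, "hERG": 3.5, "5-HT2B": 85.0}  # measured IC50s in uM (illustrative)
print(off_target_alerts(candidate, primary_target="D2"))  # {'hERG': 3.5}
```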

Blomme and Will19 have reviewed the current and past efforts by the pharmaceutical industry to optimize safety into molecules at the earliest stage of drug research. They conclude that new and emerging technologies in the past two decades have had limited impact on nonclinical attrition rates associated with safety issues. In addition, they point out that front-loading batteries of toxicology assays to “kill early, kill often” has been challenged due to high false-positive rates and an overall low positive predictive value. The primary issues cited are the lack of information on efficacious exposure (PK/PD) and the fact that the assays are more likely to represent hazard identification than risk assessment. Therefore, it is suggested that these data be used as alerts rather than as discontinuance criteria. In a more systems toxicology approach, a large effort is now being directed towards understanding the extent of pharmacological modulation of both precedented and unprecedented targets in relation to potential safety liabilities, and towards developing technologies to determine achievable therapeutic windows. Blomme and Will19 discuss efforts at AbbVie and Pfizer where target safety assessments are explored. The assessments include the biology of the target; tissue expression maps, messenger RNA, and proteins; human genetic data; phenotypes from genetically engineered animal models; historical data from on-going and past clinical trials directed at similar targets and associated pathways; extensive data mining via biomedical databases; and in silico simulation of the various consequences of target modulation. The majority of systems research in drug safety uses specific toxicities as the starting point. Ivanov and colleagues20 discuss the use of systems methods to identify drug targets related to the induction of ventricular tachyarrhythmia (VT). While the in vitro hERG potassium channel assay is used universally as a predictor of VT, this is only one mechanism of action, and other targets must also be identified and explored. These researchers used the following approach: (1) creation of VT-positive and VT-negative compound libraries; (2) in silico prediction of extensive drug–target interaction profiles of chemical libraries, identifying potential VT-related targets; (3) gene ontology and pathway enrichment on these potential VT targets to elucidate potential biological processes; (4) creation of a cardiomyocyte regulatory network based on general and heart-specific signaling and regulatory pathways; and (5) simulation of the changes in the regulatory network caused by inhibition at each node in order to define potential VT-related targets. These are the types of studies that lead to more refined in vitro and in silico assessments of potential drug adverse effects at the early stage of drug research.
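Step (5) can be caricatured with a toy network. This is a sketch under the assumption that simulated inhibition of a node can be crudely approximated by removing it and checking which regulators can still reach a phenotype-relevant node; the network, node names, and scoring are illustrative stand-ins, not the cardiomyocyte network of ref. 20.

```python
import networkx as nx

# Toy directed regulatory network (edges point from regulator to effector).
G = nx.DiGraph([("TargetA", "Kinase1"), ("Kinase1", "TF1"),
                ("TF1", "CalciumHandling"), ("TargetB", "CalciumHandling")])

def inhibition_impact(G, phenotype_node="CalciumHandling"):
    """For each node, simulate inhibition by removing it and count how many
    remaining nodes can still reach the phenotype-relevant node."""
    impact = {}
    for node in G.nodes:
        if node == phenotype_node:
            continue
        H = G.copy()
        H.remove_node(node)
        reachable = sum(nx.has_path(H, n, phenotype_node)
                        for n in H.nodes if n != phenotype_node)
        impact[node] = reachable  # lower = inhibition cuts off more of the network
    return impact

print(inhibition_impact(G))
```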

Verbist and colleagues21 have outlined another type of systems toxicology proposal at Janssen involving QSTAR (quantitative structure–transcription–assay relationships), which integrates high-throughput gene expression profiling data; chemical information, particularly detailed analogue analysis; and bioassay data. Using several compounds from a single chemical scaffold targeting PDE10A, a target of pharmacological interest at Janssen, changes in tubulin gene expression were identified in a subset of compounds. A screening process was therefore developed involving multiple cell lines, gene expression profiling, in vitro micronucleus assays, and high-content imaging to distinguish microtubule aggregates from other phenotypes. Besides the chemical series of interest, known positive and negative compounds were included in the process. This study presents a valuable proof-of-concept of how several technologies can be linked in a drug research systems toxicology approach to potentially improve risk assessment in early drug discovery.

Voutchkova and colleagues22 have outlined an extensive framework for safer chemical design using multiple data and modeling resources. These types of data generation and modeling approaches are the basis of a green chemistry model for specific series of chemicals used, or proposed for use, as reagents, solvents, or chemical intermediates in chemical synthesis. The simplified scheme involves a model-building process in which chemical structures of interest are evaluated for chemical motifs (structural alerts) known to be associated with human health or environmental hazards, chemicals are clustered into hazard categories, and specific high- or higher-throughput and targeted assays are identified for each hazard category. Analogue series directly applicable to the chemistry under evaluation are prepared or obtained and screened in the relevant assays. With homologous or structurally similar series, local chemical-toxicity models can be developed, validated, and incorporated into the initial computational screening process. This general method can be applied to any specific hazard, any series of chemicals, and any assay methodology. Examples of hazard categories include carcinogenicity; reproductive and developmental toxicity; mutagenicity; neurotoxicity; endocrine disruption; cardiovascular toxicity; dermatotoxicity; digestive system toxicity; hematotoxicity; hepatotoxicity; immunotoxicity; muscular toxicity; nephrotoxicity; ocular toxicity; ototoxicity; respiratory toxicity; persistence in the environment; bioaccumulation in the environment; toxicity to aquatic organisms; water contamination; and air pollution. Structural motif alerts and expert predictions can be achieved using OpenTox,23 an open-access system used to predict potential hazards from chemical structures and known chemical motifs associated with human health and environmental endpoints, and Derek from Lhasa,24 a rule-based expert system that de-convolutes a chemical structure into sub-structural fragments and addresses potential toxicity consistent with the above hazard categories. The software is also used to create specific local expert predictions from screening data. Meteor (Lhasa) predicts potential metabolites, and the metabolite structures can be used in Derek predictions. This type of inquiry is highly useful for establishing basic information for rank-ordering compounds, as in early candidate selection, and in the process of safer chemical synthesis. Multiple screening approaches using high-throughput technology and multiple assays have been applied to the evaluation of chemical toxicity. These include the United States Environmental Protection Agency (EPA) ToxCast program,25 in which over 2000 chemicals have been evaluated in over 700 high-throughput assays. This is a component of the Tox21 testing program, a collaboration among the EPA, the National Institutes of Health (NIH), including the National Center for Advancing Translational Sciences and the National Toxicology Program at the National Institute of Environmental Health Sciences, and the United States Food and Drug Administration. The Tox21 program involves high-throughput screening of more than 10 000 environmental chemicals and approved drugs using more than 100 assays. All data are publicly available, as discussed later. Wink and colleagues26 discuss a quantitative high-content imaging in vitro process to elucidate chemical interactions with cellular adaptive stress response pathways, to gain better insight into chemical toxicities at a phenotypic cellular level.
The key to their reported technology is a panel of reporter cell lines used to monitor multiple key nodes of the adaptive stress response pathways. Examples include cellular redox homeostasis, the unfolded protein response, endoplasmic reticulum damage, inflammatory signaling, and the DNA damage response. These assays hold the potential to be incorporated into multiple large-scale screens to evaluate health-related, chemically-induced biological phenomena in drug research as well as in hazard identification.
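The structural alert step described earlier in this section can be illustrated with a minimal substructure-matching sketch using the open-source RDKit toolkit. The SMARTS patterns and hazard labels below are simplified examples, not the curated rule sets used by OpenTox or Derek.

```python
from rdkit import Chem

# Illustrative structural alerts (SMARTS pattern -> hazard label); examples only.
ALERTS = {
    "aromatic nitro (mutagenicity alert)": "[c][N+](=O)[O-]",
    "aromatic primary amine (mutagenicity alert)": "c[NH2]",
    "Michael acceptor (reactivity alert)": "C=CC(=O)",
}

def screen_for_alerts(smiles: str):
    """Return the names of all alert motifs found in the query structure."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"could not parse SMILES: {smiles}")
    return [name for name, smarts in ALERTS.items()
            if mol.HasSubstructMatch(Chem.MolFromSmarts(smarts))]

# 2-nitroaniline carries both an aromatic nitro group and an aromatic primary amine.
print(screen_for_alerts("O=[N+]([O-])c1ccccc1N"))
```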

Biomarkers are typically used to define the onset, continuation, and either positive or negative characteristics of the induced biological effects of the drug (chemical) under research. Biomarkers have been classified as biomarkers of exposure, susceptibility, and outcome. The definition of a biomarker as used in drug discovery and development is a characteristic that is objectively measured and evaluated as an indicator of normal biological processes, pathogenic processes, or pharmacologic response(s) to a therapeutic intervention.27 In pharmacological studies, where a relevant therapeutic target is identified and pursued, biomarkers are developed that correlate with the proof of concept for the drug candidate. Biomarkers are developed to show (1) that the desired modulation of the target occurs as anticipated for the chemical therapeutic; (2) that the chemically-induced target modulation produces a desired biological effect; (3) that the induced biological effect alters the disease under study; and (4) that there may be increased susceptibility to the therapeutic candidate in certain individuals, such as those with pharmacogenetic predispositions. In toxicology studies, biomarkers are objectively measured and evaluated as indicators of (1) normal biological processes; (2) pathogenic processes; (3) pharmacologic response(s) to a therapeutic intervention, which in some cases could mean excessive or nonspecific pharmacologic activity; and (4) exposure–response relationships. Pharmacogenetic markers are also studied from a toxicological standpoint, particularly in relation to drug metabolism. In environmental research and risk assessment, biomarkers are frequently referred to as indicators of human or environmental hazards. The discovery and implementation of new biomarkers for toxicity caused by chemical exposure, whether from a therapeutic intervention or in some cases through unintentional exposure, continues to be pursued through animal models used to predict potential human effects, through human studies (clinical or epidemiological) or biobanked human tissue samples, or through a combination of these approaches.27 In addition, several omics technologies, such as transcriptomics, metabolomics, and proteomics, have added an important aspect to biomarker research.12 More recently, epigenomics, the study of changes in gene activity not attributable to DNA sequence alterations, has been shown to have increasing importance in disease causality research.12 The data produced by these technologies, along with large datasets of high-throughput screening data, have essentially changed the process of defining biomarkers. The process of discovering or inferring biomarkers through computational means involves the identification or prediction of the molecular target(s) of the chemical, which in many cases are secondary or undesirable targets, and the association of these targets with perturbed biological pathways. The integration of these approaches with the quantitative analysis of chemically-induced molecular and functional changes has brought to fruition the goals originally outlined in the 2007 NRC report.9
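The target-to-pathway association step just described is commonly implemented as an over-representation test. A minimal sketch using the hypergeometric distribution is given below; the target lists, pathway size, and protein universe are hypothetical.

```python
from scipy.stats import hypergeom

def pathway_enrichment(hit_targets, pathway_members, universe_size):
    """Over-representation p-value: probability of observing at least this many
    of the chemical's (predicted) targets in the pathway by chance alone."""
    hits_in_pathway = len(set(hit_targets) & set(pathway_members))
    # hypergeom.sf(k-1, M, n, N): M = universe, n = pathway size, N = targets drawn
    return hypergeom.sf(hits_in_pathway - 1, universe_size,
                        len(pathway_members), len(hit_targets))

# Hypothetical example: 3 of a chemical's 10 predicted targets fall in a
# 40-member stress-response pathway drawn from a 2000-protein universe.
targets = ["T1", "T2", "T3"] + [f"X{i}" for i in range(7)]
pathway = ["T1", "T2", "T3"] + [f"P{i}" for i in range(37)]
print(f"enrichment p = {pathway_enrichment(targets, pathway, 2000):.2e}")
```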

The concepts of environmental chemical toxicity are different from the concepts of drug toxicity: with environmental exposures there are only risk considerations, with virtually no benefit associated with chemical exposure. Currently there are more than 80 000 chemicals on the market and/or reaching the environment, and ∼2000 new chemicals are introduced each year. Unlike drugs, the majority of these compounds have limited or inadequate toxicological information with which to make rational evaluations of risk.2 Where information exists, or can be associated from the structures of analogous compounds, a risk assessor may arrive at various hazard reference values, such as a derived no-effect level, to estimate an acceptable level of protection of human or wildlife health and the environment.5 Depending on the context and urgency of the risk assessment, which could include the classification of a chemical in terms of hazard severity and risk management assessment, several assumptions must be made, which can increase the level of uncertainty.

The concept of the AOP was a necessary enhancement to the Toxicity Testing in the Twenty-first Century report, made to more adequately support ecological risk assessment.6 After the first publication in 2010, and subsequent publications by scientists at the EPA,10,11 the AOP concept and developing case studies have become a primary force in the progression of the computational systems toxicology approach in environmental risk assessment. Unlike work in drug discovery and development, which always has a confidential business information component and whose results are therefore not fully publicly available, AOPs are developed in a fully open-access mode and are supported by publicly available databases updated by EPA scientists. The AOP concept organizes existing knowledge to link the direct molecular initiating event, which in theory is the interaction with, or modulation of, a biomolecule or molecular target by a xenobiotic, to an adverse outcome, spanning multiple levels of biological organization, as in the following general examples from Ankley and colleagues.6 These events are outlined below.

  • (1) Macro-molecular interactions, such as receptor–ligand interactions including agonism and antagonism, DNA binding, and protein oxidation.

  • (2) Cellular responses, such as gene activation, protein production, alterations in signaling, and protein depletion.

  • (3) Organ responses, such as disrupted homeostasis, alterations in tissue development and/or function, and altered physiology.

  • (4) Organism responses, such as mortality, impaired reproductive and developmental function, and development of diseases, such as cancer.

  • (5) Population responses, such as alterations in the structure of a population and potential extinction of species within regional and global environments.

In defining the key aspects of an AOP, macro-molecular interactions (1) are considered the initiating event, called “anchor 1”, and the organism and population responses (4 and 5) are considered adverse outcomes at the organism or population level, collectively called “anchor 2”. The means of connecting the initiating events to outcomes can take various forms, depending on the chemical itself and the amount of information available, including in vivo, in vitro, and computational sources. These various linkages also help to define key assays and technologies to enhance information collection and usage for each individual AOP. Ankley and colleagues6 provide five case studies that illustrate these points. Events occurring in the information flow between the molecular initiating event and the adverse outcome are called intermediate events.28 When an intermediate event represents a biological event that is necessary for an adverse outcome to occur and is quantitatively measurable, it is considered a key event. In a systems toxicology approach, key event relationships between adjacent molecular initiating events, key events, and adverse outcomes help define alternative approaches to assess environmental hazards. In a signaling pathway, these could be upstream or downstream events that help define more suitable assays or test systems and provide a faster quantitative evaluation of potential adverse outcomes. One of the challenges in the AOP process is to define or estimate the exposure to the xenobiotic in the relevant species under consideration for risk assessment.
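A minimal sketch of how the anchor-1/key-event/anchor-2 chain might be represented in code is shown below; the estrogen-receptor example events are illustrative placeholders, not a curated AOP.

```python
from dataclasses import dataclass, field

@dataclass
class AOP:
    """An AOP as an ordered chain: molecular initiating event (anchor 1),
    measurable key events, and the adverse outcome (anchor 2)."""
    molecular_initiating_event: str
    key_events: list = field(default_factory=list)
    adverse_outcome: str = ""

    def chain(self):
        return [self.molecular_initiating_event, *self.key_events, self.adverse_outcome]

aop = AOP(
    molecular_initiating_event="estrogen receptor agonism (receptor-ligand interaction)",
    key_events=["altered gene activation", "altered tissue development"],
    adverse_outcome="impaired reproduction leading to population decline",
)
print(" -> ".join(aop.chain()))
```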

As discussed earlier, measurement of exposure and toxicokinetics, systemically and, importantly, at the critical site of action (anchor 1), is an essential piece of information. Unlike for pharmaceutical compounds, it is highly unlikely that there would ever be controlled human toxicokinetic data for industrial and environmental chemicals.29 Extrapolating toxicokinetic models from in vitro data has been used with pharmaceutical compounds, and this is termed in vitro to in vivo extrapolation methodology. These models have been used with some success for environmental and industrial chemicals using high-throughput toxicokinetics (HTTK) models. Several examples of HTTK methodology have been published, including lecture series by scientists from the United States EPA, the National Center for Computational Toxicology, and the Hamner Institutes for Health Sciences.29,30 The primary methodology is termed “reverse dosimetry”, which uses concentrations that produce bioactivity in in vitro assays to estimate the doses (mg kg−1 per day) sufficient to produce equivalent steady-state plasma concentrations (Css, in µM). These approaches assume 100% bioavailability and a linear relationship between Css and dose. Another approach, probabilistic reverse dosimetry approaches for estimating exposure distributions (PROcEED),30 uses biomarkers of exposure, such as blood and urine levels from biomonitoring studies, to model the most likely exposure concentrations (or intake doses or dose levels) experienced by the study participants that would result in the concentrations measured. These modeling procedures are considered a work in progress (as of early 2016); however, they represent a critical piece of the puzzle in chemical-associated environmental risk assessment. Certain drivers of the in vivo toxicokinetic process may be inaccurately estimated by in vitro assays, such as extra-hepatic metabolism (particularly when using in vitro hepatic cell lines), biliary secretion and enterohepatic recirculation, absorption and bioavailability, and active transport in several tissues, including the kidney and liver. In addition, a process to include metabolites in the high-throughput screening process for all chemicals will be a necessary part of a functional HTTK process.29
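Under the linearity and 100% bioavailability assumptions stated above, the reverse dosimetry arithmetic reduces to a simple ratio. The sketch below illustrates this; the bioactive concentration and the model-predicted Css per unit dose are hypothetical values that would normally come from an in vitro assay and an HTTK model, respectively.

```python
def oral_equivalent_dose(bioactive_conc_uM: float, css_per_unit_dose_uM: float) -> float:
    """Reverse dosimetry under the stated assumptions (100% bioavailability,
    Css linear in dose): the daily dose (mg/kg per day) predicted to produce a
    steady-state plasma concentration equal to the in vitro bioactive level.

    css_per_unit_dose_uM: Css (uM) produced by 1 mg/kg per day, from an HTTK model.
    """
    return bioactive_conc_uM / css_per_unit_dose_uM

# Hypothetical chemical: in vitro AC50 of 2.0 uM; HTTK model predicts
# Css = 0.25 uM per 1 mg/kg per day of continuous exposure.
print(oral_equivalent_dose(2.0, 0.25), "mg/kg per day")  # 8.0
```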

The exposome is currently defined as the totality of all human environmental exposures (exogenous and endogenous) from conception to death.31 The National Institute of Environmental Health Sciences32 has developed a broad definition of environmental exposures that includes chemical exposures, diet, physical activity, stress, pre-existing disease, and the use of substances that could lead to addictive consequences. Measuring all exposure events over time is certainly difficult, particularly considering the dynamic aspects of exposures leading to adverse outcomes; however, much effort is being devoted to establishing biomarkers related to the exposome on both a population and an individual basis. These biomarkers are being evaluated for refining exposure assessments in risk assessment; providing correlations leading to exposure–disease associations, particularly in data from epidemiological studies; potentially identifying susceptible individuals or groups; allowing the use of human data rather than extrapolations from animal data; and potentially identifying interventions to reduce certain exposures and/or treat the adverse outcome.32 One of the major efforts to define and understand environmental exposures is the series of published biomonitoring studies from the National Health and Nutrition Examination Survey, conducted by the Centers for Disease Control and Prevention National Center for Health Statistics. The Fourth National Report on Human Exposure to Environmental Chemicals, with updated tables,33 provides national (USA) biomonitoring data (serum and urinary levels) on 265 chemicals from subsets of the population. The website contains details of data sources and data analysis, interpretation of report data, and chemical and toxicological information. Bell and Edwards34 have described a frequent itemset mining workflow to identify relationships among chemicals, health biomarkers, and disease. Currently, the most complete information source for toxicology information and exposure identification, including the exposome, is the Toxin and Toxin-Target Database31 (T3DB; www.t3db.ca). The details of T3DB are discussed in Chapter 2.
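The core of a frequent itemset mining workflow of the kind Bell and Edwards describe is counting items that co-occur across many records. A minimal pairwise sketch is given below; the records, item names, and support threshold are hypothetical, and real workflows use full Apriori/FP-growth implementations rather than this pair-counting shortcut.

```python
from itertools import combinations
from collections import Counter

# Each record lists items observed together for one study participant
# (detected chemicals, elevated biomarkers, diagnoses). Illustrative data only.
records = [
    {"chemical:PFOA", "biomarker:ALT_high"},
    {"chemical:PFOA", "biomarker:ALT_high", "outcome:liver_disease"},
    {"chemical:BPA", "biomarker:ALT_high"},
    {"chemical:PFOA", "outcome:liver_disease"},
]

def frequent_pairs(records, min_support=0.5):
    """Item pairs co-occurring in at least min_support of all records."""
    counts = Counter(pair for record in records
                     for pair in combinations(sorted(record), 2))
    n = len(records)
    return {pair: count / n for pair, count in counts.items() if count / n >= min_support}

print(frequent_pairs(records))  # the PFOA/ALT_high and PFOA/liver_disease pairs reach 0.5
```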

Systems pharmacology is defined as a translational science that aims to examine all the biological activities in the body related to internal exposure to a drug or drug candidate and the resultant drug responses and pharmacological activities.35 Systems pharmacology uses both experimental approaches and computational analyses to examine and understand drug action across multiple levels, including molecular, cellular, tissue, and whole organism,36 with consideration of the presence of several interacting pathways.37 The field has grown and developed rapidly because of the emergence of omics technologies and network analysis capabilities, and the increased number of computer scientists, engineers, and mathematicians involved in addressing and solving complex biological problems.35 In an NIH white paper by the Quantitative Systems Pharmacology (QSP) workshop group in 2011, QSP was defined as providing an integrated approach to determining and understanding mechanisms of action of drugs and drug candidates in preclinical models (in vitro and in vivo) and in patients eventually receiving the drugs.38 The stated goals were to create a knowledge base to facilitate changing complex cellular networks in pre-determined ways with mono and/or combination therapies, to maximize therapeutic benefit by altering the pathophysiology of the disease being treated, and to minimize toxicity.38 Given that mammalian signaling and regulatory pathways are complex, drug–target interactions can potentially lead to adverse effects through the propagation of signal flow to distal effectors (off-targets) in multiple cells and tissues.39 However, using complex pharmacological and toxicological network analyses, both positive and negative effects can be predicted. Zhao and Iyengar39 have identified key questions that highlight the importance of a systems pharmacology approach in drug research as a starting point: (1) what are the characteristics of specific diseases in which drugs modulating a single target may not provide therapeutic efficacy; (2) how do adverse events arise from intra- and intercellular networking; (3) how does the genomic status of an individual relate to potential drug efficacy, particularly when poly-pharmacy (combination therapy) is anticipated; (4) how do combinations of targets and/or signaling nodes in complex diseases predict efficacious outcomes with drug combinations; and (5) can detailed usage of the interactome and the genetic status of an individual predict therapeutic efficacy or toxicity? Practically, systems pharmacology allows the application of model-based thinking during target selection and target validation before a lead compound is selected for development.40 QSP models can incorporate details of single and multiple drug plasma concentrations; systems biology models; pertinent regulatory networks and motifs of upstream and downstream loops, including feedback and feedforward processes; and individual genomic and epigenetic characteristics important for individualized patient therapies.41 Visser and colleagues42 describe the use of QSP models and the creation of a flexible tool kit at Merck, which has enhanced key drug discovery and development decisions. The tool kit includes PK/PD models, disease models, comparator models, model-based meta-analysis approaches, clinical trial design simulations, criteria for quantitative decision-making, and overall performance metrics.
As an example, these approaches have been used to quantify anticancer drug synergy in resistant cells and to predict effective drug combinations. Models have also been effective in predicting and understanding off-target activities that could require early risk–benefit consideration, including endocrine disruption, peroxisome proliferator-activated receptor agonism, 5-HT2B serotonin receptor agonism, and ligand-gated ion channel agonism.43 An example of a tool for the simulation and evaluation of QSP models is MatVPC, which incorporates visual predictive checks as a diagnostic to evaluate the structural and stochastic parts of a QSP model.44 Biliouris and colleagues44 illustrate its use with three models: (1) a three-compartment pharmacokinetic model with oral and intravenous bolus dosing; (2) a two-compartment pharmacokinetic model with multidose intravenous infusion; and (3) a pharmacodynamic model describing the time-course of body weight. Zhang and colleagues41 describe a Sobol sensitivity analysis that determines how much of the variability in QSP model output relates to each input parameter, including the interactions of multiple parameters. This is a highly important aspect of QSP model building, refinement, and use, as it identifies the important and influential parameters that drive model output and, therefore, the inherent uncertainty of model predictions.
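A minimal sketch of a Sobol sensitivity analysis on a toy one-compartment oral PK model is shown below, using the open-source SALib package. The model, parameter names, and ranges are hypothetical; this is far simpler than a full QSP model but illustrates how first-order (S1) and total (ST) indices apportion output variance to each input.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Hypothetical parameter space: absorption rate, clearance, volume of distribution.
problem = {
    "num_vars": 3,
    "names": ["ka", "CL", "Vd"],
    "bounds": [[0.1, 2.0], [1.0, 20.0], [10.0, 100.0]],
}

def cmax(params, dose=100.0, t=np.linspace(0.01, 24, 200)):
    """Peak concentration of a one-compartment oral PK model (Bateman equation)."""
    ka, CL, Vd = params
    ke = CL / Vd
    conc = dose * ka / (Vd * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))
    return conc.max()

X = saltelli.sample(problem, 1024)        # N*(2D+2) parameter sets
Y = np.array([cmax(x) for x in X])
Si = sobol.analyze(problem, Y)            # dict with 'S1', 'ST', 'S2' indices
print(dict(zip(problem["names"], np.round(Si["S1"], 3))))
```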

Secondary pharmacology has been described as off-target pharmacology, where a drug interacts with targets other than the intended target, and as multi-target drug research, where drugs can interact effectively with multiple targets, increasing therapeutic efficacy in certain diseases.36,45–49 These effects can produce both beneficial and adverse outcomes, and in some cases these drug qualities underlie several adverse effects seen with drugs in development and on the market. Liu and colleagues49 proposed a drug surveillance network for adverse drug reaction prediction through the integration of chemical compound signatures; biological targets, including proteins, transporters, and enzymes, along with pathways; and phenotypic properties. Wang and colleagues45 report on a protein pharmacology interaction network database, PhIN, in which users can generate interacting target networks within and across human biological pathways by defining shared chemical compounds or scaffolds using a defined activity cutoff. The database also defines interactions between human–virus and virus–virus pathways. The database contains ∼1 350 000 compounds and ∼9400 targets, with more than 12 400 000 activity measurements (as of March 2015). This type of database provides information and evidence-based predictions of chemical structures that interact with multiple targets, which is useful in multi-target drug design and side effect prediction.
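The shared-compound, activity-cutoff idea behind such target networks can be sketched in a few lines. The activity data (target to compound IC50 mappings), target names, and 10 µM cutoff below are hypothetical, not drawn from PhIN itself.

```python
import networkx as nx
from itertools import combinations

# Hypothetical activity data: target -> {compound: IC50 in uM}.
activities = {
    "PDE10A": {"cmpd1": 0.05, "cmpd2": 1.2},
    "TUBB":   {"cmpd2": 4.0, "cmpd3": 0.8},
    "hERG":   {"cmpd1": 30.0, "cmpd3": 2.5},
}

def target_network(activities, cutoff_uM=10.0):
    """Connect two targets when they share at least one compound active below the cutoff."""
    actives = {t: {c for c, ic50 in cmpds.items() if ic50 <= cutoff_uM}
               for t, cmpds in activities.items()}
    G = nx.Graph()
    for t1, t2 in combinations(actives, 2):
        shared = actives[t1] & actives[t2]
        if shared:
            G.add_edge(t1, t2, compounds=sorted(shared))
    return G

G = target_network(activities)
print(list(G.edges(data=True)))  # PDE10A-TUBB share cmpd2; TUBB-hERG share cmpd3
```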

A predictive pharmaco-safety network has been proposed by Cami and colleagues,50 in which known drug–safety relationship networks are combined with adverse event information and detailed biological network information on several drugs as a means to predict likely but prospectively unknown adverse events. With this approach, more directed surveillance programs can be instituted for drugs under development and those marketed. A multiple-evidence fusion method for both approved and novel molecules was developed by Cao and colleagues.51 In this approach the authors assumed that drug behavior at different biological levels would provide predictive information on adverse effects, and that semantic relationships between adverse reactions would aid in predicting new or unknown adverse reactions for certain drugs. They also found that drug–adverse-effect networks allow the inference of unknown associations. These evaluations used similarity measures over drug and adverse event pairs. The authors concluded that these methods are especially beneficial in drug discovery during target selection, drug repositioning, and multi-target inquiry and development. In addition, the methods provide better focus for large-scale clinical trials and more focused post-marketing drug surveillance.50–52
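A minimal guilt-by-association sketch in the spirit of these similarity-based approaches is shown below: a query drug is scored for an adverse event as the similarity-weighted fraction of reference drugs known to cause that event. The similarity values and hepatotoxicity labels are hypothetical, and real methods fuse several similarity types and networks rather than this single weighted average.

```python
import numpy as np

def adr_score(sim_to_known: np.ndarray, causes_adr: np.ndarray) -> float:
    """sim_to_known: similarity of the query drug to each reference drug (0-1).
    causes_adr: 1 if the reference drug is linked to the adverse event, else 0.
    Returns the similarity-weighted fraction of neighbors with the event."""
    return float(np.dot(sim_to_known, causes_adr) / sim_to_known.sum())

sims = np.array([0.9, 0.7, 0.2, 0.1])   # e.g. combined structural/target/pathway similarity
labels = np.array([1, 1, 0, 0])         # known hepatotoxicity flags (illustrative)
print(f"predicted hepatotoxicity score: {adr_score(sims, labels):.2f}")  # 0.84
```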

Systems biology approaches as applied to the fields of toxicology and pharmacology have increased our ability both to visualize and to understand complex chemical–biological interactions at the molecular, organ, susceptible individual, and species levels. Applying quantitative mechanistic models within a network-based analysis has not only improved our knowledge base on chemically-induced pharmacological and toxicological effects, but has also allowed new approaches to emerge that rely less on in vivo animal testing. The ever-growing abundance of databases and tools has caused the practitioners of toxicology, in particular, to step out of the proverbial animal toxicity box and approach solutions to problems in new ways. This has improved the understanding of adverse events, hazards, and risk assessment in all related fields: research and development of therapeutics; environmental, workplace, and household chemical exposures; and the design of safer chemicals in green chemistry endeavours.

1. N. Plant, An Introduction to Systems Toxicology, Toxicol. Res., 2015, 4, 9–22.
2. T. Hartung, E. van Vliet, J. Jaworska, L. Bonilla and N. Skinner, Food for Thought…Systems Toxicology, ALTEX, 2012, 29, 119.
3. J. Bai and D. Abernethy, Systems Pharmacology to Predict Drug Toxicity: Integration across Levels of Biological Organization, Annu. Rev. Pharmacol. Toxicol., 2013, 53, 451–473.
4. K. Kongsbak, N. Hadrup, K. Audouze and A. Vinggaard, Applicability of Computational Systems Biology in Toxicology, Basic Clin. Pharmacol. Toxicol., 2014, 115, 45–49.
5. S. Sturla, A. Boobis, R. FitzGerald, J. Hoeng, R. Kavlock, K. Schirmer, M. Whelan, M. Wilks and M. Peitsch, Systems Toxicology: From Basic Research to Risk Assessment, Chem. Res. Toxicol., 2014, 27, 314–329.
6. G. Ankley, R. Bennett, R. Erickson, D. Hoff, M. Hornung, R. Johnson, D. Mount, J. Nichols, C. Russom, P. Schmieder, J. Serano, J. Tietge and D. Villeneuve, Adverse Outcome Pathways: A Conceptual Framework to Support Ecotoxicology Research and Risk Assessment, Environ. Toxicol. Chem., 2010, 29(3), 730–741.
7. C. Willett, J. Rae, K. Goyak, B. Landesmann, G. Minsavage and C. Westmoreland, Pathway-Based Toxicity: History, Current Approaches and Liver Fibrosis and Steatosis as Prototypes, ALTEX, 2014, 31, 407–421.
8. D. Johnson, Estimating Human Cancer Risk from Rodent Carcinogenicity Studies: The Changing Paradigm for Pharmaceuticals, J. Drug Metab. Toxicol., 2012, 3, e114.
9. National Research Council, Toxicity Testing in the 21st Century: A Vision and a Strategy, The National Academies Press, Washington, DC, 2007.
10. D. Villeneuve, D. Crump, N. Garcia-Reyero, M. Hecker, T. Hutchinson, C. LaLone, B. Landesmann, T. Lettieri, S. Munn, M. Nepelska, M. Ottinger, L. Vergauwen and M. Whelan, Adverse Outcome Pathway (AOP) Development I: Strategies and Principles, Toxicol. Sci., 2014, 142(2), 312–320.
11. D. Villeneuve, D. Crump, N. Garcia-Reyero, M. Hecker, T. Hutchinson, C. LaLone, B. Landesmann, T. Lettieri, S. Munn, M. Nepelska, M. Ottinger, L. Vergauwen and M. Whelan, Adverse Outcome Pathway (AOP) Development II: Best Practices, Toxicol. Sci., 2014, 142(2), 321–330.
12. G. Langley, C. Austin, A. Balapure, L. Birnbaum, J. Bucher, J. Fentem, S. Fitzpatrick, J. Fowle, R. Kavlock, H. Kitano, B. Lidbury, A. Muotri, S.-Q. Peng, D. Sakharov, T. Seidle, T. Trez, A. Tonevitsky, A. van de Stolpe, M. Whelan and C. Willett, Lessons from Toxicology: Developing a 21st Century Paradigm for Medical Research, Environ. Health Perspect., 2015, 123(11), A268–A272.
13. T. Monticello, Drug Development and Nonclinical to Clinical Translational Databases: Past and Current Efforts, Toxicol. Pathol., 2015, 43, 57–61.
14. C. Keenan, S. Elmore, S. Francke-Carroll, R. Kemp, S. Peddada, J. Pletcher, M. Rinke, S. Schmidt, I. Taylor and D. Wolf, Best Practices for the Use of Historical Control Data of Proliferative Rodent Lesions, Toxicol. Pathol., 2009, 37, 679–693.
15. M. Sundqvist, A. Lundahl, M. Någård, U. Bredberg and P. Gennemark, Quantifying and Communicating Uncertainty in Preclinical to Clinical Human Dose-Prediction, CPT: Pharmacometrics Syst. Pharmacol., 2015, 4, 243–254.
16. R. Garcia-Serna, D. Vidal, N. Remez and J. Mestres, Large-Scale Predictive Drug Safety: From Structural Alerts to Biological Mechanisms, Chem. Res. Toxicol., 2015, 28, 1875–1887.
17. A. Jacobs, Prediction of 2-Year Carcinogenicity Study Results for Pharmaceutical Products. How Are We Doing?, Toxicol. Sci., 2005, 88, 18–23.
18. J. Bowes, A. Brown, J. Hamon, W. Jarolimek, A. Sridhar, G. Waldon and S. Whitebread, Reducing Safety-Related Drug Attrition: The Use of In Vitro Pharmacological Profiling, Nat. Rev. Drug Discovery, 2012, 11, 909–922.
19. E. Blomme and Y. Will, Toxicology Strategies for Drug Discovery: Present and Future, Chem. Res. Toxicol., 2016, 29(4), 473–504.
20. S. Ivanov, A. Lagunin, P. Pogodin, D. Filimonov and V. Poroikov, Identification of Drug Targets Related to the Induction of Ventricular Tachyarrhythmia through a Systems Chemical Biology Approach, Toxicol. Sci., 2015, 145(2), 321–336.
21. B. Verbist, G. Verheyen, L. Vervoort, M. Crabbe, D. Beerens, C. Bosmans, S. Jaensch, S. Osselaer, W. Talloen, I. Van den Wyngaert, G. Van Hecke, D. Wuyts, QSTAR Consortium, F. Van Goethem and H. Göhlmann, Integrating High-Dimensional Transcriptomics and Image Analysis Tools into Early Safety Screening: Proof of Concept for a New Early Drug Development Strategy, Chem. Res. Toxicol., 2015, 28, 1914–1925.
22. A. Voutchkova, T. Osimitz and P. Anastas, Toward a Comprehensive Molecular Design Framework for Reduced Hazard, Chem. Rev., 2010, 110, 5845–5882.
23. OpenTox, http://www.opentox.org.
24. Lhasa Derek and Meteor Suites, http://www.lhasalimited.org.
25. United States Environmental Protection Agency, ToxCast Program, https://www.epa.gov/chemical-research/toxicity-forecasting.
26. S. Wink, S. Hiemstra, S. Huppelschoten, E. Danen, M. Niemeijer, G. Hendriks, H. Vrieling, B. Herpers and B. van de Water, Quantitative High Content Imaging of Cellular Adaptive Stress Response Pathways in Toxicity for Chemical Safety Assessment, Chem. Res. Toxicol., 2014, 27, 338–355.
27. H. Larson, E. Chan, S. Sudarsanam and D. Johnson, Biomarkers, in Computational Toxicology, Methods in Molecular Biology, ed. B. Reisfeld and A. Mayeno, Humana Press, Springer Science, NY, 2013, ch. 11, pp. 253–273.
28. K. Groh, R. Carvalho, J. Chipman, N. Denslow, M. Halder, C. Murphy, D. Roelofs, A. Rolaki, K. Schirmer and K. Watanabe, Development and Application of the Adverse Outcome Pathway Framework for Understanding and Predicting Chronic Toxicity: I. Challenges and Research Needs in Ecotoxicology, Chemosphere, 2015, 120, 764–777.
29. J. Wambaugh, B. Wetmore, R. Pearce, C. Strope, R. Goldsmith, J. Sluka, A. Sedykh, A. Tropsha, S. Bosgra, I. Shah, R. Judson, R. Thomas and R. Setzer, Toxicokinetic Triage for Environmental Chemicals, Toxicol. Sci., 2015, 147(1), 55–67.
30. C. Grulke, K. Holm, M.-R. Goldsmith and Y.-M. Tan, PROcEED: Probabilistic Reverse Dosimetry Approaches for Estimating Exposure Distributions, Bioinformation, 2013, 9(13), 707–709.
31. D. Wishart, D. Arndt, A. Pon, T. Sajed, A. Guo, Y. Djoumbou, C. Knox, M. Wilson, Y. Liang, J. Grant, Y. Liu, S. Goldansaz and S. Rappaport, T3DB: The Toxic Exposome Database, Nucleic Acids Res., 2015, 43, D928–D934.
32. T. Adler, K. Sawyer, M. Shelton-Davenport and N. Grossblatt, The Exposome: A Powerful Approach for Evaluating Environmental Exposures and their Influences on Human Disease, Emerging Science for Environmental Health Decisions Newsletter, National Academies, 2010, Issue 2, ISSN 2376-1679.
33. Centers for Disease Control and Prevention, Fourth National Report on Human Exposure to Environmental Chemicals, Updated Tables, https://www.cdc.gov/exposurereport.
34. S. Bell and S. Edwards, Identification and Prioritization of Relationships between Environmental Stressors and Adverse Human Health Impacts, Environ. Health Perspect., 2015, 123(11), 1193–1199.
35. R. Turner, B. Park and M. Pirmohamed, Parsing Interindividual Drug Variability: An Emerging Role for Systems Pharmacology, Wiley Interdiscip. Rev.: Syst. Biol. Med., 2015, 7(4), 221–241.
36. S. Berger and R. Iyengar, Network Analysis in Systems Pharmacology, Bioinformatics, 2009, 25(19), 2466–2472.
37. S. Berger and R. Iyengar, Role of Systems Pharmacology in Understanding Adverse Drug Reactions, Wiley Interdiscip. Rev.: Syst. Biol. Med., 2011, 3(2), 129–135.
38. Quantitative and Systems Pharmacology in the Post-Genomic Era: New Approaches to Discovering Drugs and Understanding Therapeutic Mechanisms, An NIH White Paper by the QSP Workshop Group, 2011, www.nigms.nih.gov/training/documents/systemspharmawpsorger2011.pdf.
39. S. Zhao and R. Iyengar, Systems Pharmacology: Network Analysis to Identify Multiscale Mechanisms of Drug Action, Annu. Rev. Pharmacol. Toxicol., 2012, 52, 505–521.
40. P. Vicini and P. van der Graaf, Systems Pharmacology for Drug Discovery and Development: Paradigm Shift or Flash in the Pan?, Clin. Pharmacol. Ther., 2013, 93(5), 379–381.
41. X.-Y. Zhang, M. Trame, L. Lesko and S. Schmidt, Sobol Sensitivity Analysis: A Tool to Guide the Development and Evaluation of Systems Pharmacology Models, CPT: Pharmacometrics Syst. Pharmacol., 2015, 4, 69–79.
42. S. Visser, D. de Alwis, T. Kerbusch, J. Stone and S. Allerheiligen, Implementation of Quantitative and Systems Pharmacology in Big Pharma, CPT: Pharmacometrics Syst. Pharmacol., 2014, 3, e142, 1–10.
43. T. Papoian, H.-J. Chiu, I. Elayan, G. Jagadeesh, I. Khan, A. Laniyonu, C. Li, M. Saulnier, N. Simpson and B. Yang, Secondary Pharmacology Data to Assess Potential Off-Target Activity of New Drugs: A Regulatory Perspective, Nat. Rev. Drug Discovery, 2015, 14(4), 294.
44. K. Biliouris, M. Lavielle and M. Trame, MatVPC: A User-Friendly MATLAB-Based Tool for the Simulation and Evaluation of Systems Pharmacology Models, CPT: Pharmacometrics Syst. Pharmacol., 2015, 4(9), 547–557.
45. Z. Wang, J. Li, R. Dang, L. Liang and J. Lin, PhIN: A Protein Pharmacology Interaction Network Database, CPT: Pharmacometrics Syst. Pharmacol., 2015, 4, 160–166.
46. L. Espinoza-Fonseca, The Benefits of the Multi-Target Approach in Drug Design and Discovery, Bioorg. Med. Chem., 2006, 14, 896–897.
47. E. Leung, Z.-W. Cao, Z.-H. Jiang, H. Zhou and L. Liu, Network-Based Drug Discovery by Integrating Systems Biology and Computational Technologies, Briefings Bioinf., 2013, 14, 491–505.
48. J. Dudley, E. Schadt, M. Sirota, A. Butte and E. Ashley, Drug Discovery in a Multidimensional World: Systems, Patterns, and Networks, J. Cardiovasc. Transl. Res., 2010, 3, 438–447.
49. T. Liu, Y. Lin, X. Wen, R. Jorissen and M. Gilson, BindingDB: A Web-Accessible Database of Experimentally Determined Protein–Ligand Binding Affinities, Nucleic Acids Res., 2007, 35(suppl. 1), D198–D201.
50. A. Cami, A. Arnold, S. Manzi and B. Reis, Predicting Adverse Events Using Pharmacological Network Models, Sci. Transl. Med., 2011, 3(114), 1–10.
51. D.-S. Cao, N. Xiao, Y.-J. Li, W.-B. Zeng, Y.-Z. Liang, A.-P. Lu, Q.-S. Xu and A. Chen, Integrating Multiple Evidence Sources to Predict Adverse Drug Reactions Based on a Systems Pharmacology Model, CPT: Pharmacometrics Syst. Pharmacol., 2015, 4, 498–506.
52. T. Lorberbaum, M. Nasir, M. Keiser, S. Vilar, G. Hripcsak and N. Tatonetti, Systems Pharmacology Augments Drug Safety Surveillance, Clin. Pharmacol. Ther., 2015, 97(2), 151–158.