CHAPTER 7: Toxicogenomics and Toxicoinformatics: Supporting Systems Biology in the Big Data Era
Published:04 Dec 2019
Special Collection: 2019 ebook collection. Series: Issues in Toxicology
T. M. Souza, J. C. S. Kleinjans, and D. G. J. Jennen, in Big Data in Predictive Toxicology, ed. D. Neagu and A. Richarz, The Royal Society of Chemistry, 2019, pp. 214-241.
Within Toxicology, Toxicogenomics stands out as a unique research field aimed at investigating the molecular alterations induced by chemical exposure. It comprises a wide range of technologies developed to measure and quantify the ‘omes (transcriptome, (epi)genome, proteome and metabolome), offering a human-based approach in contrast to traditional animal-based toxicity testing. With the growing acceptance of, and continuous improvements in, high-throughput technologies, we have observed a rapid increase in the generation of ‘omics outputs. As a result, Toxicogenomics has entered a new, challenging era marked by the four characteristic Vs of Big Data: volume, velocity, variety and veracity. This chapter addresses these challenges by focusing on computational methods and Toxicoinformatics in the scope of Big ‘omics Data. First, we provide an overview of current technologies and of the steps involved in the storage, pre-processing and integration of high-throughput datasets, describing databases, standard pipelines and routinely used tools. We then show how data mining, pattern recognition and mechanistic/pathway analyses help to elucidate mechanisms of adverse effects and to build knowledge in Systems Toxicology. Finally, we present recent progress in tackling current computational and biological limitations. Throughout the chapter, we also provide relevant examples of successful applications of Toxicoinformatics in predicting toxicity in the Big Data era.
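To make the pre-processing step mentioned above concrete, the following is a minimal sketch of one routine operation applied to high-throughput expression data before integration: log-transformation followed by per-sample median centering to reduce sample-level technical bias. The gene names, intensity values, and function names here are purely illustrative, not taken from the chapter or from any specific pipeline.

```python
import math

# Hypothetical toy expression matrix: rows = genes, columns = samples.
raw_counts = {
    "GENE_A": [120.0, 250.0, 90.0],
    "GENE_B": [15.0, 30.0, 10.0],
    "GENE_C": [480.0, 1000.0, 400.0],
}

def log2_transform(matrix, pseudocount=1.0):
    """Log2-transform raw intensities; the pseudocount avoids log(0)."""
    return {gene: [math.log2(v + pseudocount) for v in values]
            for gene, values in matrix.items()}

def median_center(matrix):
    """Subtract each sample's median so samples are comparable."""
    n_samples = len(next(iter(matrix.values())))
    centered = {gene: [0.0] * n_samples for gene in matrix}
    for j in range(n_samples):
        column = sorted(values[j] for values in matrix.values())
        median = column[len(column) // 2]  # exact for odd-sized columns
        for gene, values in matrix.items():
            centered[gene][j] = values[j] - median
    return centered

normalized = median_center(log2_transform(raw_counts))
```

After this step, each sample (column) has a median of zero on the log scale, which is a common prerequisite before cross-sample comparison or integration with other ‘omics layers; real pipelines typically use more sophisticated methods (e.g. quantile normalization) from established toolkits.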