Toxicology data are generated on large scales by toxicogenomic studies and high-throughput screening (HTS) programmes, and on smaller scales by traditional methods. Both big and small data are valuable for elucidating the toxicological mechanisms and pathways perturbed by chemical stressors. In addition, years of investigation have produced a wealth of knowledge, reported in the literature, that is also used to interpret new data, though such knowledge is rarely captured in traditional databases. In the big data era, computer automation is needed to analyse and interpret datasets, which in turn requires aggregating data and knowledge from all available sources. This chapter reviews ongoing efforts to aggregate toxicological knowledge into a knowledge base built on the Adverse Outcome Pathway (AOP) framework, and provides examples of data integration and inferential analysis for use in (predictive) toxicology.
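As a minimal illustration of the kind of structure the AOP framework provides, an individual pathway can be encoded as a directed graph linking a molecular initiating event (MIE) through intermediate key events (KEs) to an adverse outcome (AO). The sketch below is not from the chapter or any curated AOP database; the event names and traversal are hypothetical, intended only to show how such a knowledge base can support simple inference.

```python
# Hypothetical sketch: an AOP as a directed graph from a molecular
# initiating event (MIE) through key events (KEs) to an adverse
# outcome (AO). Event names are illustrative, not curated AOP entries.
aop = {
    "MIE: receptor binding": ["KE: gene expression change"],
    "KE: gene expression change": ["KE: altered cell function"],
    "KE: altered cell function": ["AO: organ toxicity"],
    "AO: organ toxicity": [],
}

def downstream(event, graph):
    """Return all events reachable from `event` via a depth-first walk."""
    seen, stack = set(), [event]
    while stack:
        node = stack.pop()
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

# Inferential use: if a chemical perturbs the MIE, which downstream
# events and outcomes could follow along the pathway?
effects = downstream("MIE: receptor binding", aop)
print(sorted(effects))
```

A real knowledge base (e.g. one built from AOP-Wiki-style entries) would attach evidence, species, and quantitative relationships to each edge, but the traversal logic for reasoning from stressor to outcome follows the same pattern.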
