FOREWORD
Artificial Intelligence (AI) has rapidly entered the scientific arena, particularly over the last year. Not a day goes by without discussion of the role this new technology will play in our scientific and regulatory activities. The debate is intense and, in some respects, vibrant; many consider AI a potential danger to humankind, fearing a decline in human cognitive skills in a world ruled by machines.
WHAT IS AI?
Artificial intelligence (AI) is defined by Wikipedia as “the intelligence of machines or software, as opposed to the intelligence of living beings, primarily of humans. It is a field of study in computer science that develops and studies intelligent machines. Such machines may be called AIs.”
or
“the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages” (Oxford Dictionary)
AI is commonly divided into two categories:
Weak AI: also known as narrow AI or artificial narrow intelligence (ANI), this is AI trained and focused on performing specific tasks. Weak AI drives most of the AI that surrounds us today. “Narrow” might be a more suitable descriptor for this type of AI, as it is anything but weak: it enables some very robust applications, such as Apple’s Siri, Amazon’s Alexa, IBM watsonx™, and self-driving vehicles.
Strong AI: made up of artificial general intelligence (AGI) and artificial super intelligence (ASI). AGI, or general AI, is a theoretical form of AI in which a machine would have intelligence equal to that of humans; it would be self-aware, with a consciousness able to solve problems, learn, and plan for the future. ASI, also known as superintelligence, would surpass the intelligence and ability of the human brain. While strong AI is still entirely theoretical, with no practical examples in use today, that does not mean AI researchers are not exploring its development. In the meantime, the best examples of ASI may come from science fiction, such as HAL, the superhuman and rogue computer assistant in 2001: A Space Odyssey.
EXPERIMENTAL TOXICOLOGY AND RISK ASSESSMENT
The rapid progress of AI affects diverse scientific disciplines, including toxicology, and has the potential to transform chemical safety evaluation. Toxicology has evolved from an empirical science focused on observing apical outcomes of chemical exposure into a data-rich field ripe for AI integration. At the beginning of its history, the hazardous properties of a given chemical were evaluated through experimental studies (which frankly became more and more complex and complete) aimed at assessing specific endpoints (acute toxicity, target organ toxicity after repeated exposure, reproductive toxicity, genotoxicity and carcinogenicity, etc.) and at determining safe exposure levels (LD50, NOEL, NOAEL, etc.). In a second step, around the mid-1990s, while such experimental protocols were progressively becoming more accurate and scientific, risk assessment criteria were first laid down. The evaluation was shifting from hazard to risk and exposure, the latter two being considered the pillars of the REACH regulation (2006). In parallel, a variety of “alternative methods” were vigorously developed by the scientific community worldwide with the aim of replacing the “old” animal-based approach. Among these, we saw the extensive establishment of “in silico prediction models” (QSARs), which allow information on the potential hazard of a given chemical to be obtained from its chemical structure. In summary, the field has moved from a biological approach (the use of animals) to an in chemico/in silico (computer-based) approach. Lastly, policy (at least in the EU) encouraging new evaluation methods such as NAMs (New Approach Methodologies) and GRA (General Risk Assessment) will drive a more integrated assessment of chemical safety. Hence, the volume, variety and velocity of toxicological data from legacy studies (experimental studies, literature reviews and reliability checks, high-throughput assays, sensor technologies and omics approaches) create opportunities but also complexities that AI may help address.
In particular, machine learning is well suited to handling and integrating large, heterogeneous datasets, both structured and unstructured, a key challenge in modern toxicology. AI methods such as deep neural networks, large language models, and natural language processing have successfully predicted toxicity endpoints, analyzed high-throughput data, extracted facts from literature, and generated synthetic data. Beyond automating data capture, analysis, and prediction, AI techniques show promise for accelerating quantitative risk assessment by providing probabilistic outputs that capture uncertainties. AI may also enable explanation methods to unravel mechanisms and increase trust in modeled predictions.
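The probabilistic outputs mentioned above can be illustrated with a minimal sketch of a QSAR-style predictor. Everything here is hypothetical: the descriptor vectors (loosely imagined as logP, molecular weight, and polar surface area), the toxicity labels, and the tiny training set are invented for illustration, and a simple k-nearest-neighbour vote stands in for the far more sophisticated models used in practice.

```python
# Hypothetical QSAR-style sketch: predict P(toxic) from molecular
# descriptors using a k-nearest-neighbour vote. All compounds,
# descriptor values, and labels below are invented for illustration.
import math

# (descriptor vector, label) pairs; label 1 = toxic, 0 = non-toxic
TRAIN = [
    ([2.1, 180.0, 40.0], 1),
    ([0.5, 120.0, 90.0], 0),
    ([3.0, 250.0, 30.0], 1),
    ([0.2, 100.0, 110.0], 0),
]

def predict_toxic_probability(query, k=3):
    """Return P(toxic) as the fraction of toxic labels among the k
    nearest training compounds -- a probability, not a hard verdict,
    so downstream risk assessment can weigh the uncertainty."""
    dists = sorted((math.dist(query, x), label) for x, label in TRAIN)
    votes = [label for _, label in dists[:k]]
    return sum(votes) / k

p = predict_toxic_probability([2.5, 200.0, 35.0])
print(f"P(toxic) = {p:.2f}")
```

The point of the sketch is the return type: a fraction between 0 and 1 rather than a binary toxic/non-toxic call, which is the kind of probabilistic output that lets a risk assessor see how confident the model is.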
However, some thoughts should be shared here. Can AI replace human intelligence given issues such as model interpretability, data biases, data reliability, and transparency? Are these currently limiting the regulatory endorsement of AI? This is, in the end, the question!
Multidisciplinary collaboration is needed to ensure the development of interpretable, robust, and human-centered AI systems. Rather than just automating human tasks at scale, transformative AI can catalyze innovation in how evidence is gathered, data are generated, hypotheses are formed and tested, and tasks are performed, ushering in new paradigms in chemical safety assessment. In a word, validation of AI systems is absolutely needed, as is a system of rules shared at the international level.
DISCUSSION
In theory, and if used judiciously, AI has immense potential to advance toxicology into a more predictive, mechanism-based, and evidence-integrated scientific discipline that better safeguards human and environmental wellbeing across diverse populations. But there are some open questions and doubts regarding its applicability. Let us summarize them:
- Having strong, validated and well-regulated AI systems targeted to the research goal/task
- Having the possibility to govern/manage these systems
- Having the possibility to understand which data are taken and used to produce certain results (the safety/hazard of a chemical)
- Will AI support elucidation of the mechanism of action of a given chemical under a given exposure (exposure scenarios are so complex)?
- Will AI assure data confidentiality when data are the property of private companies that pay for them?
This last point is, and will be, crucial. Experimental studies to produce robust scientific hazard data will continue to be carried out for years. The EU Commission has said that a sudden replacement of the experimental approach with alternative methods (at least for some toxicological endpoints) is not expected soon (see reference 1). This means that companies will continue to pay testing labs to obtain their own studies. The results of such studies are not public but covered by confidentiality for a number of years (12 years under the REACH regulation, for example). No company will be pleased or willing to make them public via an AI tool that runs on the web, or to accept the risk that some data may become accessible to others online. This is not only related to their economic value (possible cost sharing) but rather to the scientific content of such data.
I would also like to comment on the word “judiciously”. How many tools, in other technical fields and other areas of human activity, are not used judiciously? So many! Maybe countless! So it is not a matter of assigning a moral value, but rather of obtaining AI systems that are designed for a specific purpose and easily monitored by humans.
CONCLUSION
What can we expect? Difficult to say! Science and technology are developing so fast that any forecast of the future is hard to make. The USA and China, primarily, and then the EU, have announced billions dedicated to developing AI systems in the near future; massive development is therefore expected.
What I can imagine is that AI systems will help us carry out a part (only a part) of our scientific job: for example, performing robust data searches, making comparisons, and selecting data based on reliability criteria. I am not convinced, at least over a 20-30 year period, that AI systems will replace the brains of toxicologists or risk assessors. Human intervention, characterized by flexibility, creative imagination, intuition, consciousness and discernment, will still be needed and firmly present.
REFERENCES AND NOTES
1. Conto A., Chemistry Today, vol. 42(1), pp. 54-56, 2024.
