Artificial intelligence is increasingly being adopted in the life sciences as scientists seek support and alternatives to the time-consuming methods of traditional research. Generative AI (GenAI) tools are now routinely integrated into R&D workflows to accelerate hypothesis generation, enhance data analysis and facilitate decision making.
While GenAI holds significant potential to enhance life sciences R&D, many are equally concerned about how the adoption of such tools might affect data privacy, regulatory compliance and more.
To learn more about this issue, Technology Networks asked experts across industry and academia one simple question: “As generative AI becomes more deeply embedded in R&D, what safeguards or practices will be most critical to ensure trust, reproducibility and acceptance of AI-driven discoveries?”
Jo Varshney, PhD. CEO and founder, VeriSIM Life.
“As generative AI becomes a deeper part of research and development, the priority must be building trust, reproducibility and acceptance from the start. Transparency is essential. Every AI-generated insight should be traceable, with clear documentation of data sources, modeling assumptions and decision logic so that others can understand and verify it.”
“Equally important is rigorous validation. Predictions must be tested against experimental and clinical results, and verified across independent datasets to confirm that they hold up under real conditions. Establishing standardized frameworks and reporting practices ensures that findings are reproducible, both within and outside the organization.”
“Finally, collaboration is key. The best outcomes occur when AI scientists, pharmacologists and regulatory experts work closely together to integrate technology with scientific rigor and ensure patient safety. Only by embedding these safeguards can AI discoveries become trusted, reproducible and widely accepted in the life sciences.”
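Varshney's emphasis on traceability can be made concrete with a provenance record attached to every AI-generated insight. The sketch below is a minimal illustration of that idea, not VeriSIM Life's actual system; the model name, data sources and record fields are all hypothetical.

```python
# Minimal provenance record for an AI-generated insight (illustrative only).
import hashlib
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    """Everything a reviewer needs to trace and verify one prediction."""
    model_name: str          # e.g., "toxicity-gen-v2" (hypothetical)
    model_version: str       # pinned release tag or git commit hash
    data_sources: list[str]  # dataset identifiers or DOIs used in training
    assumptions: list[str]   # explicit modeling assumptions
    prediction: dict         # the insight itself
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def fingerprint(self) -> str:
        """Content hash so the record can be cited and checked for tampering."""
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

record = ProvenanceRecord(
    model_name="toxicity-gen-v2",
    model_version="a1b2c3d",
    data_sources=["ChEMBL v33", "internal-assay-2024Q1"],
    assumptions=["hepatocyte assay results transfer to in vivo dosing"],
    prediction={"compound": "CMPD-001", "predicted_ld50_mg_kg": 312.0},
)
print(record.fingerprint())
```

Storing such a record alongside each result gives later reviewers the data sources, assumptions and decision logic Varshney calls for, in a form that can be independently re-checked.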
Adrien Rennesson. Co-founder & CEO, Syntopia.
“As generative AI becomes more deeply integrated into R&D, transparency and openness will be essential to build trust and ensure reproducibility. Sharing not only results but also the underlying data, methods and assumptions will allow research teams to compare outcomes, validate models and challenge findings constructively. This collective scrutiny is key to turning AI-driven discoveries into accepted scientific advances.”
“At Syntopia, we believe that generating high-quality, well-characterized datasets and promoting transparent, comparable methodologies across platforms are critical steps. Such practices will accelerate the adoption of AI in drug discovery and help unlock its full potential.”
Anna-Maria Makri-Pistikou. COO, managing director & co-founder, Nanoworx.
“To ensure trust, reproducibility and acceptance of AI-driven discoveries in R&D, critical safeguards include:
1. Rigorous validation of AI outputs: Validation is a cornerstone for building trust in AI-driven outcomes. AI models, including generative ones, can propose novel solutions, but these outputs must be empirically tested to confirm their efficacy, safety and performance.
2. Transparent data management: AI-driven R&D must be supported by meticulous data management practices, including detailed documentation of datasets, model parameters and decision-making processes.
3. Strict adherence to regulatory standards: AI-driven discoveries must align with established regulatory and industry standards to gain acceptance, especially in biotech and pharmaceuticals. This includes compliance with guidelines from regulatory bodies such as the European Medicines Agency or the United States Food and Drug Administration.
4. Human-in-the-loop oversight: While AI can accelerate discovery, human expertise remains essential to interpret results, assess biological relevance and make context-aware decisions. A pragmatic approach to engaging with generative AI in R&D should include human supervision.
5. Bias mitigation: AI systems can inadvertently introduce biases or produce unreliable predictions if trained on incomplete or skewed datasets. To counter this, R&D teams must use diverse and high-quality datasets [one way to audit for such skew is sketched after this list].
6. Open collaboration and peer review: Acceptance of AI-driven discoveries grows when findings are shared and scrutinized by the broader scientific community, just as they are in traditional research, experimentation and patent processes.
7. Protecting the confidentiality of data used to train AI: Data in the biotech and pharmaceutical industries is often confidential, proprietary or sensitive (e.g., patient-specific). However, this same data often holds extremely valuable trends and is prime input for training AI models. We must therefore strike a careful balance between making the best use of available data for training AI models and maintaining sufficient protections for the confidentiality of that underlying data.”
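The bias-mitigation point above lends itself to a concrete check: routinely comparing model performance across subgroups. The following is a minimal sketch, not Nanoworx's methodology; the predictions, site labels and 10% gap threshold are all hypothetical.

```python
# Illustrative subgroup audit: flag performance gaps that may indicate bias.
from collections import defaultdict

def subgroup_accuracy(y_true, y_pred, groups):
    """Accuracy per subgroup (e.g., cell line, patient cohort, assay site)."""
    hits, totals = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] += 1
        hits[group] += int(truth == pred)
    return {g: hits[g] / totals[g] for g in totals}

# Hypothetical labels and predictions from a screening model.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 0, 0, 1]
groups = ["site_A", "site_A", "site_A", "site_A",
          "site_B", "site_B", "site_B", "site_B"]

scores = subgroup_accuracy(y_true, y_pred, groups)
worst, best = min(scores.values()), max(scores.values())
if best - worst > 0.10:  # threshold is a policy choice, not a standard
    print(f"WARNING: subgroup gap of {best - worst:.0%} -- review training data")
print(scores)
```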
Faraz A. Choudhury. CEO & co-founder, Immuto Scientific.
“Transparency and validation are key. Models must be trained on high-quality, well-annotated data and paired with clear documentation of assumptions and decision pathways. Human-in-the-loop review, rigorous benchmarking against experimental data and open reproducibility standards will be essential to build confidence in AI-generated insights.”
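Benchmarking against experimental data, as Choudhury describes, ultimately means comparing model predictions with measured values using metrics agreed on in advance. A minimal sketch of such a comparison, with hypothetical pIC50 numbers standing in for real assay results:

```python
# Illustrative benchmark: compare model predictions with wet-lab measurements.
import math

predicted = [7.1, 6.4, 8.0, 5.9, 7.6]  # hypothetical predicted pIC50 values
measured = [6.8, 6.5, 7.4, 6.1, 7.9]   # hypothetical assay results

n = len(predicted)
rmse = math.sqrt(sum((p - m) ** 2 for p, m in zip(predicted, measured)) / n)

mp, mm = sum(predicted) / n, sum(measured) / n
cov = sum((p - mp) * (m - mm) for p, m in zip(predicted, measured))
pearson = cov / math.sqrt(
    sum((p - mp) ** 2 for p in predicted) *
    sum((m - mm) ** 2 for m in measured))

# Acceptance thresholds should be fixed before the experiment, not after.
print(f"RMSE: {rmse:.2f}  Pearson r: {pearson:.2f}")
```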
Peter Walters. Fellow of advanced therapies, CRB.
“I think the key with AI, given where we currently are with the technology, is that it is good at rapidly getting close to the target. It will still require knowledgeable professionals to take that AI output, perform final adjustments and confirm and quality-check it before it is final. In R&D, I see applications where AI helps those key personnel do their jobs faster and with more focus, but the final product still rests squarely in their hands.”
Mathias Uhlén, PhD. Professor of microbiology at the Royal Institute of Technology (KTH), Sweden.
“It is essential to develop new legal frameworks to handle sensitive medical data within the new era of AI-based analysis.”
Sunitha Venkat. Vice-president of data services and insights, Conexus Solutions.
“Trust in AI-driven discoveries hinges on transparency, reproducibility and continuous validation. Organizations must document the entire AI lifecycle – data sources, preprocessing steps, model architectures, training parameters and assumptions – to ensure results can be independently verified. Embedding AI governance frameworks and establishing an AI Governance Council are essential to define and enforce standards for model development, version control, explainability and ethical use.”
“Cross-functional oversight is equally critical. Collaboration among scientists, data scientists, clinicians and regulatory experts ensures that AI-driven findings are scientifically sound, interpretable and compliant with evolving regulatory expectations.”
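Documenting the AI lifecycle that Venkat describes is often operationalized by pinning every artifact (data, preprocessing code, model weights and configuration) to a content hash, so a result can be re-derived and audited later. A minimal sketch of that idea follows; the file names are hypothetical and the approach is an illustration, not Conexus Solutions' process.

```python
# Illustrative lifecycle manifest: pin data, code and model to content hashes.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large artifacts are handled safely."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical artifacts from one training run.
artifacts = ["training_set.csv", "preprocess.py", "model_weights.bin",
             "train_config.yaml"]

manifest = {name: sha256_of(Path(name)) for name in artifacts
            if Path(name).exists()}
Path("run_manifest.json").write_text(json.dumps(manifest, indent=2))
# Re-hashing the same files later and diffing against the manifest detects
# any silent change in data, preprocessing code or model weights.
```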
