Chat-IRB? How Application-Specific Language Models Can Enhance Research Ethics Review

Author(s): 

Sebastian Porsdam Mann, Jiehao Joel Seah, Stephen Latham, Julian Savulescu, Mateo Aboy, Brian D. Earp

ISPS ID: 
isps25-48
Full citation: 
Porsdam Mann S, Seah JJ, Latham S, Savulescu J, Aboy M, Earp BD. Chat-IRB? How application-specific language models can enhance research ethics review. J Med Ethics. 2025 Aug 19:jme-2025-110845. doi: 10.1136/jme-2025-110845. Epub ahead of print. PMID: 40764013.
Abstract: 
Institutional review boards (IRBs) play a crucial role in ensuring the ethical conduct of human subjects research, but face challenges including inconsistency, delays, and inefficiencies. We propose the development and implementation of application-specific large language models (LLMs) to facilitate IRB review processes. These IRB-specific LLMs would be fine-tuned on IRB-specific literature and institutional datasets, and equipped with retrieval capabilities to access up-to-date, context-relevant information. We outline potential applications, including pre-review screening, preliminary analysis, consistency checking, and decision support. While addressing concerns about accuracy, context sensitivity, and human oversight, we acknowledge remaining challenges such as over-reliance on artificial intelligence and the need for transparency. By enhancing the efficiency and quality of ethical review while maintaining human judgement in critical decisions, IRB-specific LLMs offer a promising tool to improve research oversight. We call for pilot studies to evaluate the feasibility and impact of this approach.
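The abstract's proposal of retrieval-equipped, IRB-specific models for pre-review screening can be illustrated with a toy sketch. The Python snippet below is not drawn from the paper: the guidance snippets, checklist items, and the generate_review() stub (standing in for an institution's fine-tuned model) are hypothetical placeholders, shown only to suggest how retrieved guidance might accompany flagged gaps for human reviewers.

    # Illustrative sketch only: a toy retrieval-augmented pre-review screen.
    # All guidance text, checklist items and the generate_review() stub are
    # hypothetical placeholders, not part of the published proposal.
    from dataclasses import dataclass

    # Stand-in guidance snippets; a real system would index institutional
    # policies and regulatory texts instead of this in-memory dictionary.
    GUIDANCE = {
        "informed consent": "Protocols must describe how voluntary informed consent is obtained and documented.",
        "risk assessment": "Protocols must identify foreseeable risks and explain how they are minimised.",
        "vulnerable populations": "Additional safeguards are required when enrolling vulnerable participants.",
    }

    # Checklist of elements the pre-review screen looks for in an application.
    REQUIRED_ELEMENTS = ["informed consent", "risk assessment", "data protection"]

    @dataclass
    class ScreeningResult:
        missing_elements: list
        retrieved_guidance: dict

    def retrieve(query: str) -> dict:
        """Toy keyword-overlap retriever standing in for a vector store."""
        terms = set(query.lower().split())
        return {topic: text for topic, text in GUIDANCE.items()
                if terms & set(topic.split())}

    def pre_review_screen(application_text: str) -> ScreeningResult:
        """Flag checklist items the application never mentions and attach
        guidance an IRB-specific model could cite in a preliminary analysis."""
        text = application_text.lower()
        missing = [item for item in REQUIRED_ELEMENTS if item not in text]
        guidance = {}
        for item in missing:
            guidance.update(retrieve(item))
        return ScreeningResult(missing_elements=missing, retrieved_guidance=guidance)

    def generate_review(result: ScreeningResult) -> str:
        """Stub for the fine-tuned model; here it is just a template."""
        if not result.missing_elements:
            return "Pre-review screen: no required elements obviously missing."
        lines = ["Pre-review screen flagged the following for human reviewers:"]
        for item in result.missing_elements:
            lines.append(f"- No discussion of {item} found.")
        for topic, text in result.retrieved_guidance.items():
            lines.append(f"  Relevant guidance ({topic}): {text}")
        return "\n".join(lines)

    if __name__ == "__main__":
        sample = "We will recruit 40 adults and obtain informed consent before interviews."
        print(generate_review(pre_review_screen(sample)))

In keeping with the paper's emphasis on human oversight, the sketch only flags and annotates gaps; any final judgement would remain with the board.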
Supplemental information: 
Publication date: 
2025
Publication type: 
Journal article
Publication name: 
Journal of Medical Ethics
Discipline: 
Area of study: