The use of generative artificial intelligence in research
The European Commission’s R&I Policy Brief on The Use of Generative Artificial Intelligence in Research examines how generative AI (GenAI) tools, especially chatbots such as ChatGPT, are transforming scientific practice, research productivity, and academic publishing. Since the launch of OpenAI’s ChatGPT in November 2022, mentions of GenAI chatbots in the scientific literature have increased thirteenfold, signalling an accelerating integration of AI into research workflows. The brief finds that GenAI use is most concentrated in the applied sciences and ICT, but is rapidly spreading to health, economics, and the social sciences.
GenAI’s benefits for research are substantial, from supporting literature reviews, data processing, and manuscript drafting to improving accessibility for non-native English speakers. Yet this growing reliance raises new ethical, methodological, and integrity concerns. Researchers warn that pervasive AI-assisted writing may strain quality assurance systems, blur authorship boundaries, and erode trust in scholarly outputs. Despite these risks, discussion of ethics and governance remains underdeveloped: only 8% of GenAI-related research explicitly addresses ethical or integrity issues.
Publishers and academic organisations are now introducing disclosure policies that emphasise transparency, authorship criteria, and the responsible use of AI. However, implementation remains fragmented and inconsistent across disciplines. The report calls for a unified framework aligned with the EU AI Act and the Living Guidelines on the Responsible Use of GenAI in Research. It recommends common definitions, harmonised ethical standards, and continuous monitoring of GenAI’s impact to safeguard scientific integrity while fostering innovation.