Academic articles written with ChatGPT have found their way into the scientific literature, which is supposed to be subject to a rigorous review process. The journals Nature and Science have reacted by clarifying their editorial rules. The issue, however, points to a deeper problem: the failures of the peer review process.
ChatGPT, the large language model that everyone is talking about, poses immense challenges across several sectors. While schools and newsrooms wonder how to respond to this powerful text-generating tool, the world of scientific research is also being hit hard. In mid-January, Nature reported that abstracts written by ChatGPT read scientifically enough to fool human researchers, who correctly identified them only 63% of the time. Moreover, it appears that articles co-written by artificial intelligence have already made their way into the prestigious journal, whose research papers, it should be remembered, must be peer-reviewed before publication.
Editorial rules clarified: ChatGPT is not an author
Nature’s editorial board is, of course, taking the problem seriously; it recently clarified its ethical rules for the use of ChatGPT and related tools. First, such tools cannot be credited as co-authors of an article. Second, researchers who use them must document that use, for example in the methods or acknowledgments section.
The rival journal Science has also updated its editorial policy, specifying that text generated by ChatGPT (or any other AI tool) cannot be used in its papers. Its editor-in-chief added a further point in an editorial: an AI program cannot be an author. “Mistakes happen when editors and reviewers don’t listen to their inner skeptic or when we don’t focus on the details. At a time when trust in science is eroding, it is important that scientists recommit to paying close and meticulous attention to detail,” he writes.
Failures of the peer review process
The issues raised by ChatGPT’s arrival in the scientific world actually shine a spotlight on another problem: the publication frenzy in academia, a phenomenon driven in part by researchers’ need to publish at a sustained pace in order to be recognized. In a recent article on the subject, the online outlet Slate.com analyzes how such generative AIs reveal “the insidious disease at the heart of our scientific process”. Journalist Charles Seife explains that peer review, which demands considerable time and effort, is rarely compensated by the journals, and he adds that the number of researchers willing to undertake rigorous evaluation shrinks as the volume of publications and submissions grows. As a result, the quality of peer review has reportedly declined. “When professors are unable to tell the difference between a student and an AI […] this shows that the discrimination process is seriously flawed”, the journalist concludes.