Taylor & Francis strives for the highest standards of editorial integrity and transparency. Editors’ and peer reviewers’ use of manuscripts in Generative AI systems may pose risks to confidentiality, to proprietary rights and data, including personally identifiable information, and to the rightsholder’s intellectual property. Therefore, editors and peer reviewers must not upload unpublished manuscripts, including any associated files, images or information, into Generative AI tools.
Editors should check with their Taylor & Francis contact prior to using any Generative AI tools, unless they have already been informed that the tool and its proposed use are authorised. Journal Editors should refer to our Editor Resource page for more information on our code of conduct.
Peer reviewers are chosen as experts in their fields and should not use Generative AI to analyse or summarise submitted articles, or portions thereof, when creating their reviews. Accordingly, peer reviewers must not upload unpublished manuscripts or project proposals, including any associated files, images or information, into Generative AI tools.
These policies have been triggered by the rise of generative AI and AI-assisted technologies, which are expected to be used increasingly by content creators. They aim to provide greater transparency and guidance to authors, reviewers, editors, readers and contributors. Elsevier will monitor these developments and will adjust or refine the policies when appropriate.
Where authors use generative AI and AI-assisted technologies in the writing process, these technologies should be used only to improve the readability and language of the work. The technology should be applied with human oversight and control, and authors should carefully review and edit the result, because AI can generate authoritative-sounding output that may be incorrect, incomplete or biased. The authors remain ultimately responsible and accountable for the content of the work.
Authors should disclose the use of AI and AI-assisted technologies in their manuscript, and a statement will appear in the published work. Declaring the use of these technologies supports transparency and trust among authors, readers, reviewers, editors and contributors, and facilitates compliance with the terms of use of the relevant tool or technology.
Authors should not list AI and AI-assisted technologies as an author or co-author, nor cite AI as an author. Authorship implies responsibilities and tasks that can be attributed to and performed only by humans. Each (co-)author is accountable for ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved, and authorship requires the ability to approve the final version of the work and agree to its submission. Authors are also responsible for ensuring that the work is original, that the stated authors qualify for authorship, and that the work does not infringe third-party rights, and they should familiarize themselves with our Ethics in Publishing policy before they submit.
We do not permit the use of Generative AI or AI-assisted tools to create or alter images in submitted manuscripts. This may include enhancing, obscuring, moving, removing, or introducing a specific feature within an image or figure. Adjustments of brightness, contrast, or color balance are acceptable as long as they do not obscure or eliminate any information present in the original. Image forensics tools or specialized software may be applied to submitted manuscripts to identify suspected image irregularities.
The only exception is where the use of AI or AI-assisted tools is part of the research design or research methods (such as AI-assisted imaging approaches to generate or interpret the underlying research data, for example in the field of biomedical imaging). In such cases, the use must be described in a reproducible manner in the methods section, including an explanation of how the AI or AI-assisted tools were used in the image creation or alteration process, along with the name of the model or tool, version and extension numbers, and manufacturer. Authors should adhere to the AI software’s specific usage policies and ensure correct content attribution. Where applicable, authors may be asked to provide pre-AI-adjusted versions of images and/or the composite raw images used to create the final submitted versions, for editorial assessment.
The use of generative AI or AI-assisted tools in the production of artwork such as for graphical abstracts is not permitted. The use of generative AI in the production of cover art may in some cases be allowed, if the author obtains prior permission from the journal editor and publisher, can demonstrate that all necessary rights have been cleared for the use of the relevant material, and ensures that there is correct content attribution.
When a researcher is invited to review another researcher’s paper, the manuscript must be treated as a confidential document. Reviewers should not upload a submitted manuscript or any part of it into a generative AI tool as this may violate the authors’ confidentiality and proprietary rights and, where the paper contains personally identifiable information, may breach data privacy rights.
This confidentiality requirement extends to the peer review report, as it may contain confidential information about the manuscript and/or the authors. For this reason, reviewers should not upload their peer review report into an AI tool, even if it is just for the purpose of improving language and readability.
Peer review is at the heart of the scientific ecosystem, and Elsevier abides by the highest standards of integrity in this process. Reviewing a scientific manuscript implies responsibilities that can only be attributed to humans. Generative AI or AI-assisted technologies should not be used by reviewers to assist in the scientific review of a paper: the critical thinking and original assessment needed for peer review are outside the scope of this technology, and there is a risk that the technology will generate incorrect, incomplete or biased conclusions about the manuscript. The reviewer is responsible and accountable for the content of the review report.
Elsevier’s AI author policy states that authors are allowed to use generative AI and AI-assisted technologies in the writing process before submission, but only to improve the language and readability of their paper and with the appropriate disclosure, as per our instructions in Elsevier’s Guide for Authors. Reviewers can find such disclosure at the bottom of the paper in a separate section before the list of references.
Please note that Elsevier owns identity-protected AI-assisted technologies that conform to the RELX Responsible AI Principles, such as those used during the screening process to conduct completeness and plagiarism checks and to identify suitable reviewers. These in-house or licensed technologies respect author confidentiality. Our programs are subject to rigorous evaluation of bias and comply with data privacy and data security requirements.
A submitted manuscript must be treated as a confidential document. Editors should not upload a submitted manuscript or any part of it into a generative AI tool as this may violate the authors’ confidentiality and proprietary rights and, where the paper contains personally identifiable information, may breach data privacy rights.
This confidentiality requirement extends to all communication about the manuscript including any notification or decision letters as they may contain confidential information about the manuscript and/or the authors. For this reason, editors should not upload their letters into an AI tool, even if it is just for the purpose of improving language and readability.
Peer review is at the heart of the scientific ecosystem, and Elsevier abides by the highest standards of integrity in this process. Managing the editorial evaluation of a scientific manuscript implies responsibilities that can only be attributed to humans. Generative AI or AI-assisted technologies should not be used by editors to assist in the evaluation or decision-making process for a manuscript: the critical thinking and original assessment needed for this work are outside the scope of this technology, and there is a risk that the technology will generate incorrect, incomplete or biased conclusions about the manuscript. The editor is responsible and accountable for the editorial process, the final decision, and the communication thereof to the authors.
Elsevier’s AI author policy states that authors are allowed to use generative AI and AI-assisted technologies in the writing process before submission, but only to improve the language and readability of their paper and with the appropriate disclosure, as per our instructions in Elsevier’s Guide for Authors. Editors can find such disclosure at the bottom of the paper in a separate section before the list of references. If an editor suspects that an author or a reviewer has violated our AI policies, they should inform the publisher.
The ethics of generative artificial intelligence (AI) use in scientific manuscript content creation has become a serious matter of concern in the scientific publishing community. Generative AI is now capable of elaborating research questions; refining programming code; generating text in scientific language; and generating images, graphics, or figures. However, this technology should be used with caution. In this editorial, we outline the current state of editorial policies on generative AI or chatbot use in authorship, peer review, and editorial processing of scientific and scholarly manuscripts. Additionally, we provide JMIR Publications’ editorial policies on these issues. We further detail JMIR Publications’ approach to the applications of AI in the editorial process for manuscripts under review in a JMIR Publications journal.
Technology tools are useful for making the scientific writing process more timely and effective. Many advances have been made in the tools available to help conduct more sophisticated statistical analyses, manage references, and check grammar. Among these advances, large language models (LLMs) are neural networks trained on large corpora of textual information that can be fine-tuned to respond to natural language queries in a conversational fashion. In late 2022, OpenAI released ChatGPT, an artificial intelligence (AI) chatbot [1] that uses an LLM and that, within a matter of months, became enormously popular and a focal point for regulatory debate. Since then, countless LLMs have been developed and launched for research, commercial, and other applications.
The ethics of generative AI use in scientific manuscript content creation has become a serious matter of concern in the scientific publishing community [2,3]. More generally, there are already broader calls for the regulation of AI, and of LLMs in particular, in general public use [4,5]. This is because generative AI is now capable of elaborating research questions; refining programming code; generating text in scientific language; and generating images, graphics, or figures. However, this technology should be used with caution. For instance, LLMs may produce errors and misleading information, especially when dealing with technical topics on which they may have had limited data to train. In the technical report released by OpenAI, it is acknowledged that Generative Pre-trained Transformer 4 (GPT-4) can produce biased and unreliable content [6]. Such biased output can result from inherent biases in the data on which the models were trained. A recent study published in the Journal of Medical Internet Research showed that ChatGPT was able to generate a highly convincing, fraudulent scientific article in approximately 1 hour [7]. The authors used tools to detect AI-generated text (AI Detector and AI Text Classifier), and the results were inconclusive, indicating that these tools were unable to determine that the manuscript was generated by ChatGPT. The authors were, however, able to detect mistakes in the generated article, specifically in the references, as ChatGPT generated fictitious citations. These findings reinforce the importance of well-established regulations around the use of ChatGPT in the scientific field.
For authors of academic manuscripts, key issues of concern include the need to fact-check AI-generated content of any form (including but not limited to textual information or graphics); to assign accountability for AI-generated information; and to disclose transparently the use of generative AI in producing any scholarly or scientific work, especially when it affects the meaning and content of the information submitted for potential publication [8]. For peer reviewers, additional issues pertain to the typical processing of manuscripts, wherein humans have traditionally generated peer review reports and issued editorial decisions on revising, rejecting, or accepting manuscripts; currently, generative AI can be prompted to facilitate these processes when given specific inputs. For editors, receiving AI-generated material in manuscripts (from authors) or in peer review reports (from peer reviewers) also warrants additional considerations.
In this editorial, we outline the current state of editorial policies on generative AI or chatbot use in authorship, peer review, and editorial processing of scientific and scholarly manuscripts. Additionally, we provide JMIR Publications’ editorial policies on these issues, with the goal of ensuring the integrity of the science published and of the publishing process. We further detail JMIR Publications’ approach to the applications of AI in the editorial process for manuscripts under review in a JMIR Publications journal.
In scientific publishing, there is historical precedent that the transparency of authorship is essential to the integrity of scientific publication [9]. Regarding AI, there is already general consensus that AI cannot be a listed coauthor on a manuscript because of the inability of the AI to be accountable for the content written [2,10-13]. An AI tool’s lack of accountability and its inability to consent to being published as a coauthor are consistent with not listing it as one [14]. According to Committee on Publication Ethics (COPE) guidance, “AI tools cannot meet the requirements for authorship as they cannot take responsibility for the submitted work. As non-legal entities, they cannot assert the presence or absence of conflicts of interest nor manage copyright and license agreements” [2]. The World Association of Medical Editors (WAME) states in its Recommendations on Chatbots and Generative Artificial Intelligence in Relation to Scholarly Publication that “Chatbots cannot be authors” [11]. One examination of ChatGPT (the free version, which at the time was based on GPT-3.5) against the Contributor Roles Taxonomy (CRediT) authorship criteria [15] noted that the chatbot meets only 3 of 14 criteria for authorship [16]. Unfortunately, before such publisher policies and recommendations became the norm, some manuscripts and preprints had already been published that identified ChatGPT as a coauthor [13].
At JMIR Publications, early guidance in our knowledge base of editorial policies explained that authors must include an appropriate description of any use of generative AI in the conduct or reporting of scientific work; if this information is not part of the study design (eg, described in the Methods section of a manuscript), then the use of generative AI in writing or creating text, figures, or other content for scientific publication must be acknowledged [17-19]. We welcome authors to submit relevant work to the flagship journal of JMIR Publications, the Journal of Medical Internet Research, which now has a section on generative language models (including ChatGPT), where it may be appropriate to submit work that uses such technology as a core component (Table 1). If an author did not use AI to generate any portion of a submitted manuscript, the author should also provide a pertinent attestation in the cover letter on submission.
Be accountable for the content of any AI-generated material submitted in the manuscript. For example, AI-generated statements should have accompanying citations where appropriate and be fact-checked for accuracy, and generated references should be checked to ensure that they have not been hallucinated.
If generative AI was part of the study design, include appropriate methodological detail in the Methods section of the manuscript, describing how generative AI was used in the conduct of the scientific work in sufficient detail for a peer-reviewed publication.
If generative AI was used to generate manuscript content, then state clearly in the Acknowledgments section how and where generative AI was used. This may include but is not limited to writing or creating text, figures, or other content for scientific publication. Disclose which generative AI tool was used by attesting to its use, such as stating, “I conducted this review with the assistance of [ProductName, Version, from CompanyName, Year].”
If no generative AI was used, state in the cover letter of the submission the following: “The author(s) attest that there was no use of generative artificial intelligence (AI) technology in the generation of text, figures, or other informational content of this manuscript.”
Authors use generative AI at their own risk. Authors are advised to review the terms of use of any generative AI tool to understand how the content of their prompts may be reused by the generative AI and by the company that created it.
Such acknowledgments must be fully transparent, precise, and complete throughout the submission, editorial, and production processes and, if the manuscript is accepted for publication, will be disclosed in the published work [19]. In addition, we strongly recommend that authors supply their transcripts, including complete prompts and responses, in supplementary files (whether or not these are published), as exemplified in Eysenbach [20], because they serve as additional information for the peer reviewers or editor to consider in their evaluation of the manuscript.
Authors must also be cautious when using generative AI because of its predisposition to hallucinate information and references [20-22]. Because generative AI cannot be accountable for the outputs and possible hallucinations that it generates in response to a prompt, authors are accountable for fact- and reference-checking any references suggested by a generative AI tool. Authors must also be cautious of the potential for unintentional plagiarism (because the AI may not be able to properly source or cite literature) [23] or overt AI plagiarism (authors passing off or taking credit for statements that were generated by AI). Neither form of plagiarism is acceptable, and both would be examined carefully in accordance with COPE guidance [24]. Authors may wish to adhere to the WAME recommendation that they “specify what they have done to mitigate the risk of plagiarism, provide a balanced view, and ensure the accuracy of all their references” [11]. Furthermore, instances of suspected or potential scientific misconduct or violations of publication ethics principles, regardless of the involvement or use of generative AI, would be investigated in accordance with JMIR Publications policies, which adhere to COPE guidance.
For peer reviewers, JMIR Publications adheres to expectations similar to those for authors: specifically, peer reviewers are accountable for the content of AI-generated comments submitted in a peer review. Consequently, peer reviewers are strongly advised to ensure that the quality and content of the peer review meet the recommended standards described elsewhere in JMIR Publications policies [25]. However, peer reviewers must remain cautious about the risks of such use, including but not limited to the perpetuation of bias and nonneutral language in AI use (eg, gender, racial, political, or other biases based on individual characteristics) [26,27] and information leakage or breaches of confidentiality [27,28] (Table 2). The latter point on the confidentiality of manuscript information warrants extended clarification: when authors agree to open peer review of their JMIR Publications manuscript (ie, on JMIR Preprints [29]), information leakage is of lesser concern because the authors have already consented to an open peer review process and their manuscript is publicly viewable. JMIR Publications encourages open peer review [30]. However, in some instances, authors wish to maintain a traditional, closed peer review process; in such cases, peer reviewers risk information leakage by engaging generative AI to assist them in generating the peer review report.