
Guidelines for the Use of AI Tools in Writing and Research

AIMS monitors ongoing developments in this area closely, and may review, adjust, or refine these policies as needed.

AIMS recognizes the transformative potential of AI-powered writing assistants and tools such as ChatGPT. These technologies can enhance writing and research by helping authors generate fresh ideas, overcome writer's block, and streamline editing tasks. While these tools improve efficiency, it is crucial to understand their limitations and to use them in ways that uphold the principles of academic and scientific integrity. As a publisher, AIMS values human creativity and authorship. Large Language Models (LLMs) cannot be credited as authors or take responsibility for the text they generate; human oversight, intervention, and accountability are therefore vital to ensure the accuracy and integrity of our published content. We acknowledge that many academics and scholars already use assistive and generative tools to improve their productivity and support their academic writing. We have developed these guidelines to support authors submitting materials (including, but not limited to, journal articles and books) to AIMS.

The distinction between Assistive-AI tools and Generative-AI tools

Assistive-AI tools

Assistive-AI tools provide suggestions, corrections, and enhancements to content you have authored. For example, tools like Google's Gmail and Microsoft's Outlook and Word have long flagged spelling and grammatical errors. More recently, these assistive tools have introduced features that proactively suggest the next word or phrase, or recommend clearer, more concise phrasing. Content that you have created independently but refined or improved with the help of these Assistive-AI tools is considered "AI-assisted." We currently recommend the assistive-AI programs Grammarly, Curie, and LanguageTool, and we encourage authors to use Assistive-AI tools to improve the quality of the language in their content.

Generative-AI tools

This term refers to tools such as ChatGPT or DALL-E that produce content, whether text, images, or translations. If an AI tool was the primary creator of the content, the content is considered "AI-generated," even if you have made significant changes to it afterward.

Disclosure requirements

AI-assisted writing will become more common as AI tools are increasingly embedded within tools such as Microsoft Word and Google Docs. You are not required to disclose the use of assistive AI tools in your submission, but all content, including AI-assisted content, must undergo rigorous human review before submission. This is to ensure the content aligns with our standards for quality and authenticity.

You are required to inform us of any AI-generated content appearing in your work (including text, images, or translations) when you submit any form of content to AIMS, including, but not limited to, journal articles and book proposals. This will allow the editorial team to make an informed publishing decision regarding your submission.

Things to consider before using Generative-AI tools

If you use AI to generate content or images for your submission, follow these guidelines before submitting your work to AIMS.

1. Disclosure: As outlined above, you must reveal any AI-generated content in your submission. For details, please refer to "Disclosure instructions for the use of AI tools" at the end of this document.

2. Carefully verify the accuracy, validity, and appropriateness of AI-generated content and AI-produced citations: Large Language Models (LLMs) can sometimes "hallucinate", producing incorrect or misleading information, especially when used outside the domain of their training data or when dealing with complex or ambiguous topics. While their outputs may appear linguistically sound, they might not be scientifically accurate or correct, and LLMs may produce nonexistent citations. Remember that some LLMs may only have been trained on data up to a specific year, potentially resulting in incorrect or incomplete knowledge of a topic.

3. Carefully check sources and citations: Provide a comprehensive list of the sources used for content and citations, including those produced by AI. Meticulously cross-check citations for accuracy to ensure proper referencing.

4. Appropriately cite AI-generated content: Where you include content generated by AI, cite it following the relevant referencing convention.

5. Avoid plagiarism and copyright infringement: LLMs can inadvertently reproduce significant chunks of text from existing sources without due citation, infringing on others' intellectual property. As the work's author, you are responsible for confirming that your submission contains no plagiarized content.

6. Be aware of bias: LLMs are trained on text that contains biases, and the design choices of their human developers introduce further bias. As a result, AI-generated text may reproduce biases such as racism or sexism, or may overlook the perspectives of historically marginalized populations. Relying on LLMs to generate text or images can inadvertently propagate these biases, so you should carefully review all AI-generated content to ensure it is inclusive, impartial, and appeals to a broad readership.

7. Acknowledge limitations: In your submission, if you have included AI-generated content, you should appropriately acknowledge the constraints of LLMs, including the potential for bias, inaccuracies, and knowledge gaps.

8. Take responsibility: AI tools like ChatGPT cannot be recognized as a co-author in your submission. As the author, you (and any co-authors) are entirely responsible for the work you submit.

9. Stay updated: Follow the latest developments in the debates around AI-generated content to ensure you understand the possible ramifications and ethical challenges of using AI-generated content in your submission.

Prohibited use

• Do not use generative AI to create or modify core research data artificially.

• Never share sensitive personal or proprietary information with an AI platform such as ChatGPT, as doing so may expose confidential information or intellectual property. Any information you share with AI tools like ChatGPT is collected for business purposes.

• Editors and reviewers must uphold the confidentiality of the peer review process. Editors must not share information about submitted manuscripts or peer review reports in generative AI tools such as ChatGPT. Reviewers must not use AI tools, including but not limited to ChatGPT, to generate review reports.

These guidelines aim to ensure the responsible and ethical use of AI tools in writing and research, preserving the integrity and quality of academic and scientific publications.


Further information

• World Association of Medical Editors (WAME) recommendations on chatbots, ChatGPT and scholarly manuscripts.

• Committee on Publication Ethics (COPE)’s position statement on Authorship and AI tools.

• STM Whitepaper on Generative AI in Scholarly Communication.

Disclosure instructions for the use of Generative-AI tools

We follow COPE's guidelines and policies regarding the use of Artificial Intelligence (AI) tools: COPE Policy on AI tools

The use of artificial intelligence (AI) tools such as ChatGPT or Large Language Models in research publications is expanding rapidly. COPE joins organizations, such as WAME and the JAMA Network among others, to state that AI tools cannot be listed as an author of a paper. - COPE

AI tools cannot meet the requirements for authorship as they cannot take responsibility for the submitted work. As non-legal entities, they cannot assert the presence or absence of conflicts of interest nor manage copyright and license agreements. - COPE

Please disclose the use of any generative-AI tools in the writing of a manuscript, the production of images or graphical elements, or the collection and analysis of data. In the "Use of AI tools declaration", we ask that you disclose which tool was used and describe how it was used. Authors are fully responsible for the content of their manuscript, including any portion produced by an AI tool, and are thus liable for any breach of publication ethics.

If there is nothing to disclose, there is no need to add a declaration (remember, there is no need to disclose the use of Assistive-AI tools). If there is generative-AI use to disclose, the following is a guide to an acceptable disclosure.

Use of Generative-AI tools declaration

The author(s) declare(s) they have used Artificial Intelligence (AI) tools in the creation of this article.

AI tools used:

How were the AI tools used?

Where in the article is the information located?
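For illustration only, a completed declaration might read as follows (the tool named and the use described are hypothetical examples, not recommendations):

The author(s) declare(s) they have used Artificial Intelligence (AI) tools in the creation of this article. AI tools used: ChatGPT (GPT-4). How were the AI tools used? The tool was used to draft an initial summary of prior work, which the authors then verified against the original sources, rewrote, and cited. Where in the article is the information located? Section 2 (Literature Review).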