AI Guidelines

Guidelines on the Use of Generative Artificial Intelligence (AI)

Generative Artificial Intelligence (AI) tools, including large language models (LLMs) and multimodal systems, are increasingly used by the university community. These tools offer various possibilities for research assistance, including idea generation, literature reviews, coding, language support, and streamlining of the research process.

This journal encourages the ethical use of Generative AI tools in research but recognizes the risks associated with their use. These risks include potential inaccuracies, improper attribution, plagiarism (and false accusations of it), and issues of confidentiality and intellectual property. Given the rapid evolution of AI technologies, this content provides guidance on the responsible use of such tools.

Authors

Authors are fully accountable for the originality, accuracy, and integrity of their work. If authors choose to use Generative AI tools, they must do so in accordance with this journal's ethical publishing standards. Authors should carefully review any AI-generated content to ensure its accuracy and relevance to their research.

The journal supports responsible use of AI tools for:

  • Generating and exploring research ideas
  • Improving language, especially for non-native speakers
  • Enhancing literature searches
  • Assisting with coding and data classification

However, Generative AI must not replace core research responsibilities: it should not substitute for rigorous data analysis, be used to generate text without careful review, or create synthetic data without appropriate validation. Any use of these tools must be transparent, and authors must disclose such usage. Generative AI cannot be credited as an author, as it lacks the ability to take responsibility for the work or comply with ethical publishing standards.

Disclosure Requirements

Whether or not they use AI tools, authors must provide a clear AI Declaration statement in the methods or acknowledgments section. If AI tools were used, the statement must indicate the tool, its version, and the purpose of its use. This ensures transparency in how AI tools have supported the research process. For example:

AI Declaration [Option 1]

The author(s) of this paper contributed to the concept, writing, and editing and took full responsibility for the paper's content, accuracy, and integrity. In addition, the author(s) declare(s) using ChatGPT-4 (and/or Consensus, and/or Scite, etc.) as a tool to search for literature (and/or for coding, and/or for brainstorming ideas, etc.), and/or for readability and language. Therefore, all errors, biases, and omissions are the author(s)', not the AI tool's.

OpenAI. (2024). ChatGPT (Version X) [Large language model]. https://chat.openai.com/

AI Declaration [Option 2]

The author(s) of this paper contributed to the concept, writing, and editing and took full responsibility for the paper's content, accuracy, and integrity. In addition, the author(s) declare(s) not using any AI tools. All errors, biases, and omissions are the author(s)'.

Generating Images with AI

AI tools must not be used to generate images. Manipulation of images or figures, including altering key features, is likewise prohibited. In the exceptional cases where AI-assisted generation or adjustment of images and figures is permitted as part of the research itself, it must be disclosed and explained.

Editors and Peer Reviewers

Editors and peer reviewers play a crucial role in upholding the quality and integrity of the research process. While they may use AI tools to assist in the initial assessment of submissions, such as conducting preliminary evaluations or gathering information, they must ensure that confidentiality is strictly maintained. Manuscripts and related materials must not be uploaded into AI systems that could compromise proprietary rights or privacy.

If an editor or peer reviewer chooses to use AI tools in this capacity, they must disclose this usage to the authors in the feedback. This disclosure ensures transparency in the review process and allows all parties to understand how AI tools were applied in evaluating the submission.

Peer reviewers are responsible for the accuracy and integrity of their reviews, and the use of AI tools should not replace critical human judgment. The final review must be carefully assessed by the reviewer and editor to ensure it meets the journal’s standards.

Conclusion

The use of Generative AI in research requires human oversight and transparency. As research ethics guidelines evolve, the journal will update its policies to ensure they remain aligned with best practices in the field. Authors, editors, and reviewers must adhere to these standards to maintain the integrity of the research process.