Use of AI
Introduction
Artificial Intelligence (AI) represents a powerful tool with the potential to transform scientific research and publication. However, its use must be ethical, transparent, and responsible to safeguard the integrity and credibility of scientific knowledge. This guide establishes good practices for authors, reviewers, and other actors involved in the editorial process of Ciencias Sociales y Educación, in line with the guidelines of the Committee on Publication Ethics (COPE) and relevant international agreements and treaties on scientific writing and the use of AI.
Fundamental Principles
- Human Responsibility: AI is a supporting tool; the final responsibility for content and decision-making always rests with human beings.
- Transparency: The use of AI must be explicitly disclosed at all stages of the process.
- Academic Integrity: AI must not be used to plagiarize, fabricate data, or manipulate results.
- Impartiality and Equity: AI must be used in a way that does not introduce biases or discrimination.
- Confidentiality and Data Protection: The protection of sensitive and personal information must be guaranteed when using AI.
I. Guide for Authors
Authors are primarily responsible for the originality, veracity, and integrity of their manuscripts. The use of AI tools should support these principles, not replace them.
1. Use of AI in Content Writing and Generation:
- Mandatory Disclosure: If AI (e.g., large language models like ChatGPT, Bard, or other text generators) has been used for writing, rewriting, translation, or any other content generation in the manuscript, it must be clearly disclosed in the "AI Usage Statement" section at the end of the manuscript, before the references. This disclosure must specify:
- The AI tool or model used.
- The version of the tool (if applicable).
- For what specific purpose it was used (e.g., for writing improvement, for literature synthesis, for generating preliminary ideas, etc.).
- A statement indicating that the author or authors are solely responsible for the final content and have thoroughly reviewed and edited any AI-generated text.
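To illustrate, a disclosure meeting the requirements above might read as follows (the bracketed tool, version, and purpose are hypothetical placeholders, not endorsements of any particular product):

   "AI Usage Statement: During the preparation of this manuscript, the authors used [tool name], version [version], for [specific purpose, e.g., improving the readability and language of the text]. After using this tool, the authors reviewed and edited the content as needed and take full responsibility for the final content of the publication."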
- No Authorship for AI: AI tools cannot be listed as authors. Authorship implies responsibility and accountability for the work, which cannot apply to software.
- Verification and Review: All AI-generated content must be carefully reviewed, verified, and edited by the authors to ensure its accuracy, coherence, originality, and suitability for the manuscript's disciplinary context. AI tools can generate incorrect or biased information ("hallucinations").
- Plagiarism and Originality: AI must not be used to generate text that is presented as original when it, in fact, reproduces ideas or expressions of other authors without proper citation. Authors remain responsible for avoiding plagiarism, even if the text was generated with AI assistance. The journal's editorial team will use plagiarism detection software for all manuscripts, regardless of AI use.
- Data Manipulation: AI must not be used to fabricate or manipulate data, analyses, or results that do not exist or are not derived from actual research.
- Citations and References: AI can generate incorrect or nonexistent references. Authors must manually verify all citations and references generated or suggested by AI tools.
2. Use of AI in Information Collection and Analysis:
- Transparency: If AI tools are used for literature searches, document analysis, identification of patterns in research data, or similar tasks, this must be detailed in the "AI Usage Statement" section, including the specific tool and the methodology of its application.
- Biases: Authors must be aware of potential biases inherent in the data used to train AI models and how these can influence the results obtained. This issue should be critically addressed in the manuscript's discussion.
II. Guide for Reviewers (Peer Reviewers)
Reviewers play a crucial role in ensuring the quality and integrity of manuscripts. Their role regarding the use of AI in manuscripts and in their own evaluation process is fundamental.
1. Evaluation of Manuscripts Using AI:
- Detection of Undisclosed Use: If a reviewer suspects that a manuscript has used AI in an undisclosed manner (e.g., by identifying unusual writing patterns, "hallucinations," or inconsistencies in information), they must immediately inform the editor.
- Content Verification: Reviewers should pay special attention to the accuracy and coherence of AI-generated content declared by the authors. It is essential to verify that the information is correct and that there are no distortions or fabrications.
- Originality and Plagiarism: Reviewers must remain vigilant for possible cases of plagiarism, even in manuscripts that declare AI use. The use of plagiarism detection tools remains a good practice.
- Clarity and Transparency: It must be assessed whether the authors' disclosure of AI use is clear, complete, and appropriate.
2. Use of AI in the Evaluation Process:
- Confidentiality: Reviewers must not input confidential manuscripts into AI tools (e.g., publicly accessible language models like ChatGPT) that may store or use the text for their training. This would violate the confidentiality of the peer review process.
- Support Tools (under strict supervision): Reviewers may use AI tools for auxiliary tasks, such as grammar checking or translation, provided that the confidentiality of the manuscript is ensured. If the tool requires uploading the manuscript, it must be a secure platform approved by the journal, with guarantees of no storage or use of content for training. In case of doubt, it should not be used.
- No Substitution of Professional Judgment: AI must not be used to generate evaluation summaries, revision suggestions, or acceptance/rejection decisions. The critical and professional judgment of the reviewer is irreplaceable.
- Disclosure: If a reviewer has used AI tools to support their review (with the journal's approval where handling of the manuscript text was involved), they should disclose this to the editor.
III. Guide for Editors and Editorial Team
Editors are the custodians of the journal's integrity and the editorial process. Their role is crucial in the implementation and application of these good practices.
1. Management of the Submission and Review Process:
- Training and Awareness: Educate authors, reviewers, and the editorial team on the ethical use of AI and associated risks.
- AI Use Detection: Consider implementing AI-generated text detection tools as part of the initial review process, while recognizing their limitations and the need for human evaluation.
- Handling Misconduct: Develop a protocol for addressing cases of inappropriate AI use, including lack of disclosure, AI-assisted plagiarism, or data fabrication. COPE guidelines for investigating potential misconduct will be followed.
- Data Confidentiality: Ensure that any AI tools used internally in the editorial process (e.g., for manuscript management) comply with privacy and confidentiality standards.
2. Transparency and Traceability:
- Record of Declarations: Maintain a record of AI usage declarations by authors and, if applicable, by reviewers.
- Clarifications in Publications: If inappropriate or undisclosed AI use is subsequently identified in a published article, the editorial team will consider publishing a correction, an expression of concern, or a retraction, depending on the severity of the case and COPE guidelines.
- Rejection of Submitted Text: The journal's editorial team may reject a submitted article if its authors have failed to declare the use of AI or, even where use has been declared, if the extent of that use is excessive. Where the AI detection software used in the editorial process indicates that more than 30% of a manuscript is AI-generated, the editorial team reserves the right to reject the submission.
IV. Regulatory Framework and References
This guide is based on the following references and principles:
- Committee on Publication Ethics (COPE): COPE guidelines are fundamental for ethics in scientific publication. Specifically, reference is made to their positions on the use of AI in publication.
- Singapore Statement on Research Integrity (2010): Emphasizes honesty, accountability, professionalism, and good stewardship in research. These principles are fully applicable to the use of AI.
- Montréal Declaration for a Responsible Development of Artificial Intelligence (2017): Although broader in scope, its principles of well-being, privacy, autonomy, justice, knowledge, responsibility, and prudence are applicable to the development and use of AI in research and publication.
- UNESCO Recommendation on the Ethics of Artificial Intelligence (2021): Provides a global framework for the ethical development and use of AI, with principles such as fairness, transparency, explainability, and accountability.
- Guidelines from Publishers and International Organizations: We adhere to the editorial guidelines of COPE and of publishers and organizations such as Elsevier, Wiley, WAME, and JAMA, which informed the preparation of this guide.
Conclusion
The advancement of Artificial Intelligence presents both opportunities and challenges for the scientific publication ecosystem. By adhering to these good practices, Ciencias Sociales y Educación seeks to ensure that the use of AI promotes quality, transparency, and integrity in research in the social sciences and education, always maintaining the primacy of human responsibility and judgment. Collaboration among authors, reviewers, and editors is essential to navigate this new era of scientific publication ethically and effectively.
Credits:
This guide was prepared with the help of Google Gemini, version 2.5 Flash.