Responsible AI Use and Similarity Screening Policy
This policy establishes the ethical and operational standards governing the use of Artificial Intelligence (AI) and similarity screening tools in all processes of the SUKISOK Journal of the Arts and Sciences. It safeguards research integrity, transparency, and accountability across authorship, peer review, and editorial decision-making.
This policy applies to all authors, editors, reviewers, production staff, and any other personnel involved in manuscript preparation, evaluation, and publication.
Guiding Principles
Transparency
All AI-assisted activities related to manuscript preparation, review, or editorial processes must be truthfully disclosed through the Journal Agreement Form.
Accountability
Authors, reviewers, and editors remain fully responsible for the accuracy, originality, analyses, and conclusions of their work, whether or not AI assistance was used.
Integrity
AI tools must not be used to fabricate, falsify, manipulate, or misrepresent data, text, citations, images, or research findings.
Confidentiality
Confidential manuscript content must not be uploaded to AI systems that do not guarantee secure, private, and non-retentive processing.
Acceptable Uses of AI
When properly disclosed through the Journal Agreement Form, AI tools may be used for:
• Grammar, spelling, and language refinement
• Formatting support and reference management
• Summarization of non-confidential materials
• Technical assistance such as metadata extraction, similarity screening, or statistical verification
• Generating preliminary outlines or conceptual ideas that are critically reviewed and substantially revised by the author
Prohibited Uses of AI
The following uses of AI are strictly prohibited:
• Fabrication or manipulation of data, citations, figures, or textual content
• Generation of substantial manuscript sections without meaningful human intellectual contribution
• Drafting peer review reports or editorial evaluations using AI
• Uploading confidential or unpublished manuscript content to public or unsecured AI platforms
• Using AI to distort findings, evade plagiarism detection, or misrepresent originality
• Allowing AI systems to independently make editorial or publication decisions
AI Use in Manuscript Submissions
Declaration via Journal Agreement Form
All authors must complete the Journal Agreement Form, which includes a mandatory declaration of AI use. The declaration must specify:
• AI tools used, if any
• The purpose and scope of AI assistance
• Confirmation that the authors assume full responsibility for the manuscript
Failure to provide an accurate and complete declaration may result in administrative delay, return of the manuscript, or rejection.
Editorial Evaluation
Editors will review AI-use declarations during the initial screening process. Manuscripts may be returned for clarification, returned for required revision, or rejected if AI use violates this policy or is inadequately disclosed.
AI Use in Peer Review
Confidentiality Requirement
Reviewers must not upload any part of a manuscript or associated materials into AI systems that store, reuse, or train on user inputs.
Permissible Reviewer Use
Reviewers may use AI tools only to improve the clarity or language of their own comments, provided that no confidential manuscript content is entered.
Reviewer Responsibility
Reviewers retain full responsibility for the objectivity, scholarly rigor, accuracy, and ethical soundness of their evaluations.
AI Use in Editorial Processes
Editors may use AI tools strictly for supportive and administrative purposes, including:
• Similarity and plagiarism screening
• Metadata verification
• Reference accuracy checks
• Workflow and manuscript tracking
All editorial judgments and publication decisions must be made exclusively by human editors.
Similarity Screening Using Turnitin
• All manuscripts submitted to SUKISOK will undergo similarity screening using Turnitin.
• A similarity index of fifteen percent or lower is generally considered acceptable.
• Manuscripts exceeding this threshold or containing problematic overlaps will be reviewed by the Editorial Board.
• Based on similarity findings, the Editorial Board may accept, return, request revision, or reject a manuscript.
• Authors may be required to provide explanations, corrections, or rewritten sections prior to peer review.
• Failure to satisfactorily address similarity concerns will result in rejection.
Prohibition of Generative AI as Co-authors
Generative AI systems may not be listed as authors or co-authors.
• Authorship is restricted to human contributors who meet established academic authorship criteria.
• AI tools may assist researchers but cannot assume authorship, accountability, or intellectual ownership.
• Manuscripts listing generative AI systems as authors or co-authors will be automatically returned or rejected.
Data Protection Requirements
All AI tools used in editorial or publication processes must comply with ISU data privacy regulations and applicable national data protection laws. Confidential, sensitive, or identifiable information must not be processed using unsecured or non-compliant systems.
Compliance and Sanctions
Violations of this policy may result in one or more of the following actions:
• Rejection of a submitted manuscript
• Retraction of a published article
• Removal from the reviewer or editorial pool
• Notification to the relevant author’s or reviewer’s institution
Policy Review
This policy will be reviewed every two years, or as necessary, by the Editorial Board to ensure continued alignment with ethical standards, technological developments, and best practices in scholarly publishing.