Generative AI Policy

GENERATIVE AI AND AI-ASSISTED TOOLS POLICY

The International Conference on Rural Development and Entrepreneurship (ICORE) Proceeding

INTRODUCTION

The rapid advancement of generative artificial intelligence (AI) and AI-assisted technologies has brought both opportunities and challenges to the academic publishing ecosystem. These technologies are increasingly used to support research activities, including literature synthesis, language improvement, data analysis, and content organization. While ICORE Proceeding recognizes the potential of such tools to enhance scholarly efficiency, clarity, and productivity, it also acknowledges the ethical, legal, and academic integrity concerns associated with their use.

To safeguard transparency, originality, and trust in the publication process, ICORE Proceeding establishes this policy as guidance for all contributors, including authors, reviewers, and editors. The policy aims to balance innovation with responsibility, ensuring that AI technologies serve as supportive tools rather than replacements for human scholarly contributions. This policy will be periodically reviewed and updated in alignment with emerging international best practices.


FOR AUTHORS

Use of Generative AI in Manuscript Preparation

ICORE Proceeding permits authors to use generative AI and AI-assisted technologies (“AI Tools”) in the preparation of manuscripts under the condition that these tools are applied responsibly and transparently. AI Tools may assist in improving readability, checking grammar, organizing content, summarizing literature, or generating research ideas. However, they must never be relied upon as substitutes for critical reasoning, intellectual judgment, or scholarly originality.

Ultimately, authors bear full responsibility and accountability for the quality, accuracy, and integrity of their manuscripts. Specifically, authors are expected to:

  • Verify the accuracy and impartiality of all AI-generated outputs, as AI may produce fabricated references, factual errors, or biased conclusions.
  • Critically revise and adapt AI-generated text to ensure that the manuscript reflects the authors’ own intellectual contribution and authentic academic perspective.
  • Ensure transparency by declaring any use of AI Tools in a dedicated statement upon submission.
  • Respect data privacy and intellectual property rights by reviewing the terms and conditions of AI platforms before uploading sensitive materials, unpublished manuscripts, or confidential datasets.

Responsible Use

Authors must not use AI Tools in ways that compromise the confidentiality of unpublished work, personal data, or proprietary information. In particular:

  • Personal or sensitive data must not be uploaded into AI systems without ensuring secure processing and compliance with data protection standards.
  • AI Tools must not be used to generate images that reproduce copyrighted material, identifiable persons, brands, or voices.
  • Authors must ensure that AI platforms are not granted ownership rights over uploaded content or generated outputs that could restrict publication.

Disclosure Requirements

To maintain transparency, authors must declare the use of AI Tools in manuscript preparation. The disclosure should include the name of the tool, its version (if applicable), the specific purpose of its use, and the extent of human oversight. For instance, if AI was used to improve language clarity or to assist in the literature review, this must be explicitly stated (e.g., "During the preparation of this manuscript, the authors used [tool name, version] to improve language and readability; all output was reviewed and edited by the authors, who take full responsibility for the content").

Minor editorial uses of AI Tools, such as grammar checks, spelling corrections, or formatting adjustments, do not require disclosure. However, the use of AI in the research process itself (e.g., AI-based data analysis or modeling) must be described in detail in the methodology section to ensure reproducibility.

Authorship

AI Tools cannot be listed as authors or co-authors. Authorship requires intellectual responsibility, accountability, and the ability to approve the final version of the work, all of which are human attributes. Authors must ensure that:

  • Only individuals who have made substantial scholarly contributions are listed as authors.
  • The work is original, properly cited, and does not infringe upon third-party rights.
  • Each author accepts responsibility for the accuracy and integrity of the work in its entirety.

Use of AI in Figures, Images, and Artwork

The use of generative AI to create or modify images, figures, or artwork is not permitted in ICORE Proceeding, except where explicitly justified as part of the research design (e.g., AI-assisted imaging or analysis in data-driven research).

In such cases, authors must:

  • Provide a clear explanation in the methodology section of how AI Tools were used, including the name, model, and version of the software.
  • Retain and, if requested, submit original raw or pre-AI images for editorial verification.
  • Ensure adherence to copyright and licensing requirements.

Generative AI cannot be used to produce graphical abstracts, cover art, or illustrations unless prior approval is granted by the editorial board and appropriate rights clearance is documented.


FOR REVIEWERS

Reviewers play a central role in maintaining the credibility and quality of the academic record. All manuscripts under review must be treated as strictly confidential. Reviewers must not upload submitted manuscripts, or portions thereof, into AI Tools for any purpose, including but not limited to summarization, evaluation, or language editing.

Peer review requires critical analysis, subject-matter expertise, and scholarly judgment; these responsibilities cannot be delegated to AI systems. The following principles apply:

  • Reviewers remain fully responsible for the accuracy, fairness, and originality of their review reports.
  • Confidentiality extends to both the manuscript and the review report itself. Review reports must not be processed through AI Tools, as this could result in unauthorized storage, misuse, or disclosure of confidential content.
  • If reviewers suspect misuse of AI Tools by authors (e.g., undisclosed AI-generated sections), they should report their concerns to the editors.

ICORE embraces new AI-driven technologies that support reviewers and editors in the editorial process, and we continue to develop and adopt in-house or licensed technologies that respect authors', reviewers', and editors' confidentiality and data privacy rights.


FOR EDITORS

Editors hold responsibility for ensuring the integrity of the editorial and publication process. Similar to reviewers, editors must not upload manuscripts, editorial correspondence, or decision letters into AI Tools, as this would compromise confidentiality and potentially violate authors’ rights.

The editorial decision-making process requires human evaluation, independent judgment, and accountability. Therefore, generative AI must not be used to assess manuscripts or guide editorial decisions.

However, ICORE Proceeding may employ in-house or licensed AI-driven tools for administrative or technical purposes, such as plagiarism detection, completeness checks, or reviewer selection. These tools are evaluated to ensure compliance with ethical standards, data privacy regulations, and fairness principles.

If editors suspect that an author or reviewer has violated this policy, they must notify the editorial board for appropriate investigation and action.


CLOSING STATEMENT

The International Conference on Rural Development and Entrepreneurship (ICORE) Proceeding embraces innovation while safeguarding ethical and academic integrity. Generative AI and AI-assisted technologies should be regarded as supportive tools that can complement but never replace the originality, accountability, and intellectual contribution of human scholars.

Through this policy, ICORE Proceeding commits to fostering transparency, protecting confidentiality, and maintaining the highest standards of scholarly publishing.