Artificial Intelligence Policy
Policy for the use of artificial intelligence (AI) and AI-powered tools

1. Authorship and responsibility
• Only a human can be the author of an article. This follows from current copyright law (AI is not a legal person) and from the international recommendations of the Committee on Publication Ethics (COPE), the Institute of Electrical and Electronics Engineers (IEEE), the Association for Computing Machinery (ACM), and the policies of leading publishers. AI cannot sign a license agreement, provide informed consent, declare a conflict of interest, or bear ethical and legal responsibility.
• Authors are fully responsible for the accuracy of text, data, images, and references, regardless of the tools used.

2. Categories of AI use
• Routine assistance includes spell/grammar checking; automatic formatting; bibliography selection without generating annotations; basic machine translation of a draft. Disclosure is not required for these use cases.
• Substantial generative or analytical assistance includes text generation or substantial paraphrasing; image/table/code creation; literature review or digest creation; full manuscript translation; data analysis using AI. Disclosure is required for these use cases.
• Inappropriate uses include presenting AI-generated content as one’s own without verification; using AI to breach confidentiality (e.g., uploading manuscripts or reviews to open public AI services); using AI to circumvent the review process (e.g., generating responses to reviewer comments without proper understanding); delegating critical analysis to AI without human verification; using AI to create multiple variations of the same paper for submission to different journals. Such uses are prohibited.

3. How to declare significant use of AI
Authors should clearly indicate in the “Methods” or “Acknowledgements” section which AI tool (e.g., ChatGPT, Claude) was used and for what purpose.
Example:
“Part of the text of the article was edited using the Claude model (Anthropic, accessed December 20, 2025). All fragments were checked and corrected by the authors.”

4. Reliability and verification
Authors are required to check AI-generated fragments for factual errors, bias, fabricated references, or incorrect interpretations.

5. Confidentiality during review
Reviewers and editors must not submit manuscripts, or any part of them, to open public AI services. If technical assistance is required, only solutions that guarantee confidentiality (local or corporate deployments) may be used, or the use must be agreed in advance with the editor-in-chief.

6. Violations and consequences
Failure to comply with this policy may result in rejection of the manuscript, publication of a correction, or formal retraction of the article, in accordance with the journal's ethical standards and COPE guidelines.

7. Policy review
This policy is reviewed at least annually to reflect updates to international guidelines (from COPE, IEEE, ACM, etc.) and technological developments.