Wiley Issues New Guidelines to Clarify Responsible Use of AI in Research

Global academic publisher Wiley has introduced a comprehensive set of guidelines outlining how artificial intelligence (AI) should be used responsibly across the research and publishing process. The new framework, developed in consultation with over 40 authors, editors, and internal experts, aims to promote transparency, integrity, and reproducibility in an era of widespread AI adoption.

According to Wiley, the guidelines provide clarity for researchers, journal editors, and peer reviewers who increasingly rely on AI tools for writing, data analysis, image generation, and literature review. Recent surveys indicate that up to 84 per cent of researchers have experimented with AI in some form, while nearly three-quarters of respondents expressed a need for clearer publisher direction.

At the core of Wiley’s policy are requirements for AI disclosure. Authors are urged to state explicitly how and where AI tools have been used, whether in data collection, manuscript drafting, analysis, or visual creation. The publisher emphasises that the use of AI alone should not lead to automatic manuscript rejection; instead, editorial decisions should focus on research quality, transparency, and ethical conduct.

For peer reviewers and editors, the guidelines draw a firm line on confidentiality. Uploading unpublished manuscripts to public AI tools is strictly prohibited, protecting intellectual property and ensuring the integrity of the review process. The rules also differentiate between factual and conceptual imagery: AI-generated or AI-edited photographs are banned where factual evidence is required, but may be permitted for conceptual illustrations if properly labelled.

Another major pillar of the framework is reproducibility. By documenting AI involvement, Wiley hopes to ensure that future researchers can evaluate and replicate findings accurately, maintaining scientific rigour amid evolving technologies.

Industry observers note that Wiley’s move could set a precedent for other publishers as the academic community grapples with the benefits and risks of generative AI. The publisher positions its guidance as a “path forward” rather than a restriction, encouraging researchers to use AI ethically while preserving trust in scholarly communication.

For researchers, the implications are immediate. Those using AI in their workflow are advised to maintain detailed records of tool use, include appropriate disclosures in manuscripts, and verify that their journals’ policies align with the new standards. Reviewers and editors are likewise encouraged to reassess their use of AI tools to ensure compliance with confidentiality and ethical norms.

As AI continues to reshape the research landscape, Wiley’s guidelines mark an important milestone in defining how innovation and integrity can coexist in global scholarship.
