Nature Bans AI-Generated Images and Videos from Its Pages
The journal Nature has officially announced that it will not publish articles or studies containing images or videos created with generative AI tools.
The ban comes amid the publication’s concerns about research integrity, transparency, credibility, and intellectual property protection.
Founded in November 1869, Nature publishes peer-reviewed research from a variety of academic disciplines, primarily in science and technology. It is one of the most cited and most influential scientific journals in the world.
Nature’s recent decision on AI publishing rules follows months of intense discussion and consultation driven by the growing popularity and expanding capabilities of generative tools such as ChatGPT and Midjourney.
Nature considers the issue to fall under the journal’s ethical principles on the integrity and transparency of published work, which include the ability to attribute the sources of the images used.
As a result, for every piece of visual material used in a publication, confirmation must be provided that it was not generated or augmented using generative artificial intelligence.
Nature also reminds readers that citing and attributing sources is a basic tenet of science, and this requirement presents another obstacle to the ethical use of AI-generated works in a scientific journal.
Attributing AI-generated artwork is extremely difficult, if not impossible, because generated images are typically the product of processing millions of other images fed into the AI model.
This also raises issues of consent, especially where identity or intellectual property rights are concerned. Here Nature notes that generative AI systems are regularly trained on copyrighted works, naturally without obtaining the necessary permissions.
In addition, published works may sometimes, accidentally or deliberately, include deepfakes, which can lead to the spread of outright false information.
Meanwhile, generated text is still permitted in published articles, even after Nature’s high-profile statement that ChatGPT cannot be listed as a co-author. The journal still allows the inclusion of text created with generative tools, provided appropriate caveats are given: the use of large language model (LLM) tools must be clearly documented in the relevant sections of the article.