
Nature Bans AI-Generated Images and Videos from Its Pages

The journal Nature has officially announced that it will not publish articles or studies containing images or videos created with generative AI tools.

The ban came amid the publication’s concerns about research integrity, transparency, credibility and intellectual property protection.

Let me remind you that we wrote that Artificial Intelligence and Generative Tools May Change Politics, and also that OpenAI Is Looking for White Hat Hackers to Fight Cybercrime.

Also, information security specialists reported that Strange Enthusiasts Asked ChaosGPT to Destroy Humanity and Establish World Domination.

Founded in November 1869, Nature publishes peer-reviewed research from a variety of academic disciplines, primarily in science and technology. It is one of the most cited and most influential scientific journals in the world.

Nature’s recent decision on AI publishing rules follows months of intense discussion and consultation driven by the growing popularity and expanding capabilities of generative tools such as ChatGPT and Midjourney.

With the exception of articles related to artificial intelligence, Nature will not publish any content in which photos, videos or illustrations were created in whole or in part using generative AI, at least for the foreseeable future, representatives of the publication said.

Nature believes that this issue falls under the journal’s ethical principles regarding the integrity and transparency of published work, which also includes the ability to cite the sources of images used.

As researchers, editors and publishers, we all need to know the sources of data and images so that they can be verified for accuracy and veracity. Existing generative AI tools do not provide access to their sources to make such verification possible, Nature explained.

As a result, for each visual material used in a publication, proof must be provided that the material was not generated or augmented by generative artificial intelligence.

Nature also reminds us that citing and attributing sources is a basic tenet of science, and this requirement represents another obstacle to the ethical use of AI-generated works in a scientific journal.

Attribution of AI-generated artwork is extremely difficult, if not impossible, because the generated images are typically derived from the processing of millions of other images fed into the AI model.

This also raises issues of consent, especially those related to identity or intellectual property rights. Here, Nature notes that generative AI systems are regularly trained on copyrighted works, of course without obtaining the necessary permissions.

In addition, published works may sometimes accidentally or deliberately include deepfakes, which can lead to the spread of outright false information.

Meanwhile, text generation in published articles is still allowed, even after Nature’s earlier high-profile statement prohibiting listing ChatGPT as a co-author. The journal still permits the inclusion of text created with generative tools, provided this is done with the appropriate caveats: the use of large language model (LLM) tools must be clearly documented in the relevant sections of the article.

Sources for all data, including data generated by artificial intelligence, must be provided by the authors. However, the journal has firmly stated that no LLM tool will be recognized as the author or co-author of a research paper, and it appears this will never change.

Daniel Zimmermann

Daniel Zimmermann has been writing on security and malware subjects for many years and has been working in the security industry for over 10 years. Daniel was educated at the Saarland University in Saarbrücken, Germany and currently lives in New York.
