
OpenAI Is Looking for White Hat Hackers to Fight Cybercrime

OpenAI, creator of the popular ChatGPT and DALL·E apps, is launching a $1 million cybersecurity grant program.

The goal of the program is to enhance and measure the effectiveness of artificial intelligence (AI)-based cybersecurity and to promote a high-level discussion about the interaction between AI and cybersecurity.

OpenAI invites security professionals from all over the world to collaborate on shifting the balance of power in cybersecurity through the use of AI, and to coordinate the efforts of people working for the benefit of public safety.

Let me remind you that earlier we wrote that OpenAI Launches Its Own Bug Bounty Program, and also that Amateur Hackers Use ChatGPT to Create Malware.

Information security specialists have also reported that ChatGPT Causes New Wave of Fleeceware.

OpenAI offers a range of interesting project ideas, from developing traps for attackers to helping developers build software that is secure by default and streamlining security patch management processes.

The program has three main goals: First, it seeks to “strengthen defenders” by providing them with cutting-edge AI capabilities. Second, it aims to “measure capabilities” by developing methods to quantify the cybersecurity capabilities of AI models. Third, it wants to “raise the level of discourse” by encouraging deep discussions at the intersection of AI and cybersecurity.

This initiative challenges the traditional view of cybersecurity.

“Defenders need to be right 100% of the time; attackers only need to be right once.” OpenAI wants to emphasize the relevance of this maxim.

However, the company sees the importance of cooperation in achieving a common goal – ensuring the safety of people – and believes that AI-armed defenders can tip the scales.

George Kurtz

Earlier, CrowdStrike CEO George Kurtz called AI an “arms race” and assured that the AI threat could be countered.

He stressed that his company has long been battling AI wielded on behalf of foreign powers: government hackers, he said, are using generative AI to evade detection and break into targeted systems.

“So this is one of those areas where you need better AI. You have to have a better dataset – what we think of as human-annotated information – so that we can train our generative AI algorithms. AI is an arms race, and we think we’re in a good position,” Kurtz said.

Daniel Zimmermann

Daniel Zimmermann has been writing on security and malware subjects for many years and has been working in the security industry for over 10 years. Daniel was educated at the Saarland University in Saarbrücken, Germany and currently lives in New York.
