By security practitioners, for security practitioners

OpenAI Case Study On Disrupting Malicious Use Of AI

OpenAI has released a comprehensive statement, complete with case studies, titled “Disrupting Malicious Use of AI,” detailing its efforts to combat the growing use of its own AI tools to perpetrate activity that runs against the best interests of humanity, such as covert influence operations, child exploitation, scams, spam, and malicious cyber activity.

Generative AI tools such as the now-ubiquitous ChatGPT are force multipliers for both sides of the battle. Attackers use them to write code, produce malware for cybercrime campaigns, and break down language barriers for phishing authors; defenders use them to improve detection and enhance the tools available to security analysts and threat hunters.

In the statement, released June 6, OpenAI included multiple case studies detailing the use of AI in malicious campaigns. Its executive summary opens with:

“Our mission is to ensure that artificial general intelligence benefits all of humanity” and “[the effort] includes using AI to defend against such abuses. By using AI as a force multiplier for our expert investigative teams, in the three months since our last report we’ve been able to detect, disrupt and expose abusive activity including social engineering, cyber espionage, deceptive employment schemes, covert influence operations and scams.”

The statement also notes OpenAI’s finding that four in ten of the cases profiled originated in China.

Read the complete paper here.

