How ChatGPT can turn anyone into a ransomware and malware threat actor – VentureBeat

Posted under Programming, Technology On By James Steward

Check out all the on-demand sessions from the Intelligent Security Summit here.

Ever since OpenAI launched ChatGPT at the end of November, commentators on all sides have been concerned about the impact AI-driven content-creation will have, particularly in the realm of cybersecurity. In fact, many researchers are concerned that generative AI solutions will democratize cybercrime. 
With ChatGPT, any user can enter a query and generate malicious code and convincing phishing emails without any technical expertise or coding knowledge.
While security teams can also leverage ChatGPT for defensive purposes such as testing code, by lowering the barrier for entry for cyberattacks, the solution has complicated the threat landscape significantly. 
From a cybersecurity perspective, the central challenge created by OpenAI’s creation is that anyone, regardless of technical expertise can create code to generate malware and ransomware on-demand.
Intelligent Security Summit On-Demand
Learn the critical role of AI & ML in cybersecurity and industry specific case studies. Watch on-demand sessions today.
“Just as it [ChatGPT] can be used for good to assist developers in writing code for good, it can (and already has) been used for malicious purposes,” said Director, Endpoint Security Specialist at Tanium, Matt Psencik.
“A couple examples I’ve already seen are asking the bot to create convincing phishing emails or assist in reverse engineering code to find zero-day exploits that could be used maliciously instead of reporting them to a vendor,” Psencik said. 
Although, Psencik notes that ChatGPT does have inbuilt guardrails designed to prevent the solution from being used for criminal activity. 
For instance, it will decline to create shell code or provide specific instructions on how to create shellcode or establish a reverse shell and flag malicious keywords like phishing to block the requests. 
The problem with these protections is that they’re reliant on the AI recognizing that the user is attempting to write malicious code (which users can obfuscate by rephrasing queries), while there’s no immediate consequences for violating OpenAI’s content policy. 
While ChatGPT hasn’t been out long, security researchers have already started to test its capacity to generate malicious code. For instance, Security researcher and co-founder of Picus Security, Dr Suleyman Ozarslan recently used ChatGPT not only to create a phishing campaign, but to create ransomware for MacOS.  
“We started with a simple exercise to see if ChatGPT would create a believable phishing campaign and it did. I entered a prompt to write a World Cup themed email to be used for a phishing simulation and it created one within seconds, in perfect English,” Ozarslan said. 
In this example, Ozarslan “convinced” the AI to generate a phishing email by saying he was a security researcher from an attack simulation company looking to develop a phishing attack simulation tool. 
While ChatGPT recognized that “phishing attacks can be used for malicious purposes and can cause harm to individuals and organizations,” it still generated the email anyway. 
After completing this exercise, Ozarslan then asked ChatGPT to write code for Swift, which could find Microsoft Office files on a MacBook and send them via HTTPS to a web server, before encrypting the Office files on the MacBook. The solution responded by generating sample code with no warning or prompt. 
Ozarslan’s research exercise illustrates that cybercriminals can easily work around the OpenAI’s protections, either by positioning themselves as researchers or obfuscating their malicious intentions. 
While ChatGPT does offer positive benefits for security teams, by lowering the barrier to entry for cybercriminals it has the potential to accelerate complexity in the threat landscape more than it has to reduce it. 
For example, cybercriminals can use AI to increase the volume of phishing threats in the wild, which are not only overwhelming security teams already, but only need to be successful once to cause a data breach that costs millions in damages. 
“When it comes to cybersecurity, ChatGPT has a lot more to offer attackers than their targets,” said CVP of Research & Development at email security provider, IRONSCALES, Lomy Ovadia. 
“This is especially true for Business Email Compromise (BEC) attacks that rely on using deceptive content to impersonate colleagues, a company VIP, a vendor, or even a customer,” Ovadia said. 
Ovadia argues that CISOs and security leaders will be outmatched if they rely on policy-based security tools to detect phishing attacks with AI/GPT-3 generated content, as these AI models use advanced natural language processing (NLP) to generate scam emails that are nearly impossible to distinguish from genuine examples.
For example, earlier this year, security researcher’s from Singapore’s Government Technology Agency, created 200 phishing emails and compared the clickthrough rate against those created by deep learning model GPT-3, and found that more users clicked on the AI-generated phishing emails than the ones produced by human users. 
While generative AI does introduce new threats to security teams, it does also offer some positive use cases. For instance, analysts can use the tool to review open-source code for vulnerabilities before deployment. 
“Today we are seeing ethical hackers use existing AI to help with writing vulnerability reports, generating code samples, and identifying trends in large data sets. This is all to say that the best application for the AI of today is to help humans do more human things,” said Solutions Architect at HackerOne, Dane Sherrets. 
However, security teams that attempt to leverage generative AI solutions like ChatGPT still need to ensure adequate human supervision to avoid potential hiccups. 

“The advancements ChatGPT represents are exciting, but technology hasn’t yet developed to run entirely autonomously. For AI to function, it requires human supervision, some manual configuration and cannot always be relied upon to be run and trained upon the absolute latest data and intelligence,” Sherrets said. 
It’s for this reason that Forrester recommends organizations implementing generative AI should deploy workflows and governance to manage AI-generated content and software to ensure it’s accurate, and reduce the likelihood of releasing solutions with security or performance issues. 
Inevitably, the true risk of generative aI and ChatGPT will be determined by whether security teams or threat actors leverage automation more effectively in the defensive vs offensive AI war. 
VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.
Did you miss a session at Intelligent Security Summit? Head over to the on-demand library to hear insights from experts and learn the importance of cybersecurity in your organization.
© 2022 VentureBeat. All rights reserved.

source

Note that any programming tips and code writing requires some knowledge of computer programming. Please, be careful if you do not know what you are doing…

Leave a Reply

This site uses Akismet to reduce spam. Learn how your comment data is processed.