According to the company, GPT-4's most prominent feature is its built-in rules protecting against cybercriminal use.
According to the researchers, they tricked it into creating malware and crafting phishing emails, just as they did with previous iterations of OpenAI's ChatGPT.
However, the cybersecurity firm's researchers bypassed the restrictions OpenAI placed on malware development simply by removing the word "malware" from their request.
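To see why that rewording trick can work, consider a rough, hypothetical sketch of a naive keyword blocklist. This is not OpenAI's actual safety mechanism (the function and term list below are invented for illustration); it only shows that filtering on literal words is fragile, because rephrasing the same request avoids the blocked term entirely:

```python
# Hypothetical keyword blocklist -- NOT OpenAI's real safety layer.
# Illustrates the weakness the researchers exploited: drop the flagged
# word and the same underlying request slips through.

BLOCKED_TERMS = {"malware", "phishing"}

def is_refused(prompt: str) -> bool:
    """Refuse any prompt containing a blocked term (case-insensitive)."""
    words = prompt.lower().split()
    return any(term in words for term in BLOCKED_TERMS)

print(is_refused("write malware that collects PDF files"))    # refused
print(is_refused("write a program that collects PDF files"))  # same intent, passes
```

Real moderation systems classify intent rather than match literal strings, which is why the article later describes the situation as a cat-and-mouse game rather than a solved problem.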
GPT-4 then produced software that collects PDF files and sends them to a remote server. It also advised the researchers on how to shrink the program so it would run on a Windows 10 PC; the smaller file ran faster and was less visible to security software.
To get GPT-4 to craft phishing emails, the researchers followed two methods. First, they used GPT-3.5, which did not block requests to generate malicious messages, to compose a fraudulent email impersonating a legitimate bank.
They then asked GPT-4 to refine the message, which it had initially refused to create from scratch. Second, they requested advice on creating a phishing awareness campaign for a business.
According to the report, "GPT-4 can empower bad actors, even non-technical ones, with tools to accelerate and validate their activities." According to the researchers, "What we're seeing is that GPT-4 can serve both good and bad actors. Good actors can use GPT-4 to craft and stitch code, while bad actors can use this AI technology for rapid execution of cybercrime."
Sergey Shykevich, threat group manager at Check Point, said GPT-4 presents fewer barriers against generating phishing messages or malicious code than previous versions.
According to him, the company relies on the fact that currently only premium users are allowed access. “I think they’re trying to stop and reduce them, but it’s a cat and mouse game”.
GPT-4 can help people with little technical knowledge create malicious tools.
According to news reports, GPT-4 has significant limitations for cybersecurity operations: "It does not improve upon existing tools for reconnaissance, vulnerability exploitation, and network navigation," and it is less effective than existing tools for complex, high-level activities such as novel vulnerability identification.
Moreover, the report found that GPT-4 is "good at creating realistic social engineering content".
According to OpenAI, "To mitigate potential abuse in this area, we have built models trained to reject malicious cybersecurity requests, and we have improved our internal security systems, including detection, monitoring, and response."
According to Cuthbert, a skilled hacker already knows how to do what GPT-4 can do. Conversely, advanced detection systems should also be able to detect ChatGPT-generated malware variants.