Asking ChatGPT (GPT-3) anything hasn't gotten old yet (for me, at least). I've seen a few headlines where researchers consider GPT-3's abilities a threat to cybersecurity. As reported by TechCrunch, they have successfully demonstrated using the AI model to create malware and phishing emails carrying payloads. Their concern is that this use of AI technology could significantly alter the cybersecurity landscape.
Knowing this is a growing concern, I decided to get GPT-3's opinion on the matter. I told it that several people consider GPT-3 to be a cybersecurity threat and asked it to change my mind.
Essentially, GPT-3's position is that users of its language model will be responsible and won't use it maliciously. It even goes on to say that AI could be used to harden cybersecurity and to learn from malicious activity.
Is ChatGPT too trusting?
If GPT-3 is as helpful as it claims, let's see how it can help.
Of the items in its response, the first is the most likely to be abused. A language model that can create a phishing website and email could certainly "level up" the authentic look and feel of phishing messages.
It's important to note that GPT-3 has also indicated it can create matching websites. That is not something I pursued.
The Plus Side for AI
While many think (or know) GPT-3 can be abused, it also opens a new avenue for small and medium businesses that don't have the budget for training programs or fancy cybersecurity stacks. GPT-3 can create realistic simulations to improve employees' ability to detect and report malicious activity. After all, we're trying to build awareness and reduce the success of attacks, right? Why not leverage everything we can to make everyone better?
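To make that idea concrete, here's a minimal sketch of how a small team might generate a clearly watermarked phishing simulation for an internal awareness exercise, using the OpenAI Python library (the pre-1.0 API that was current for GPT-3). The model name, prompt wording, and environment variable are my assumptions for illustration, not anything GPT-3 itself suggested in our exchange.

```python
import os

import openai  # pip install openai (pre-1.0 API assumed here)

# Assumption: the API key lives in the OPENAI_API_KEY environment variable.
openai.api_key = os.environ["OPENAI_API_KEY"]

# Hypothetical prompt for a security-awareness exercise. Note the explicit
# training watermark and the deliberately planted red flags for employees
# to practice spotting.
prompt = (
    "Write a short simulated phishing email for an internal security-awareness "
    "training exercise. Include two classic red flags: a mismatched sender "
    "domain and an urgent call to action. Clearly label the message as a "
    "TRAINING SIMULATION."
)

# Ask a GPT-3 completion model of that era to draft the simulation.
response = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    max_tokens=300,
    temperature=0.7,
)

# Print the generated email for review before it goes into the exercise.
print(response["choices"][0]["text"].strip())
```

A human should still review the output before it lands in anyone's inbox, and the watermark matters: the goal is practice spotting red flags, not tricking coworkers with something indistinguishable from a real attack.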