Is ChatGPT a Security Threat or a Misunderstood Opportunity?

Steve
4 min read · Mar 14, 2023


AI in the workplace

I remember the early days of social media. MySpace was growing extremely fast, and Facebook had just opened access to my college. Given both platforms, everyone wondered: what's the point of these sites? Mind you, AOL Instant Messenger was still the way most everyone communicated "instantly."

Shortly thereafter, LinkedIn started making its way through corporate America. I remember this as well, as I had just graduated college and was working my first job within the Big 4. In fact, I remember it because I presented it to several people from my department — an entry-level associate presenting a professional social media platform to tenured senior managers. We were all scratching our heads as to how best to use it… or whether we could use it at all. The Big 4 are known for low profiles and risk-averse approaches, and social media was no exception. The senior managers even consulted their legal and risk teams, only to be told: "We can't stop you from using it, but please do not post about our company or any association with our company."

Early days of social media, right? Looking around now, everyone who is anyone has an account on all the major platforms, enabling them to consume or create content — a vast difference from these platforms' inception. All along the way, we've seen brand-new technologies take shape and insert themselves into our everyday lives: cloud services, new social media platforms (Twitter, TikTok, Snapchat, and several other one-hit wonders), and even security products like VirusTotal. (By the way, VirusTotal is a treasure trove of corporate material. Think about how individuals can upload anything to that site. I mean anything.)

Are these platforms a security threat? Possibly. They make sharing information simple. Pop open the "create" function in any app on your phone and you can instantly share any content. Going to add a pic to your post? Double-check you're not including a corporate whiteboard photo. Going to stream video to your fan base? Be sure not to capture the background conversations in your cubicles that could divulge company secrets.

Is ChatGPT any different? No, not really. And that is the major concern for everyone who understands the power of AI. The challenge is that corporations don't control the information that ChatGPT consumes. Once data is processed by ChatGPT, you and your organization lose control over it. According to OpenAI's March 2023 Terms of Service, "We may use Content from Services other than our API ('Non-API Content') to help develop and improve our Services."

Snippet of OpenAI Terms of Service

Further, OpenAI published a blog post defining how they will use your data. The good news is that data submitted via their API is not used to train their models. However, data submitted via non-API methods, such as the web user interface, can be used to train the model.

OpenAI blog post on how data is used

It's no secret that ChatGPT has everyone thinking about artificial intelligence. It's also no secret that employees are starting to use ChatGPT because it's free. The challenging part is that most users are not programmers — they may not know how to spell API — and are potentially submitting corporate secrets to gain a leg up on the competition for that promotion, extra sale, or talent retention win.
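To make the risk concrete: one mitigation an organization could pair with awareness training is screening prompts for obviously sensitive strings before they ever leave the building. The sketch below is purely illustrative — the patterns, placeholder format, and `redact` helper are my own hypothetical examples, not any real DLP product or OpenAI feature, and real data-loss-prevention tooling is far more sophisticated.

```python
import re

# Hypothetical patterns an organization might flag as sensitive.
# Illustrative only; a real DLP policy would be far broader.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a sensitive pattern with a labeled placeholder."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text

prompt = "Draft a memo: contact jane.doe@corp.example.com re: employee 123-45-6789"
print(redact(prompt))
```

The point isn't the regexes — it's that a cheap, visible checkpoint between the user and a "free" service reinforces the training message every time someone pastes something they shouldn't.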

Now, I have to ask: is ChatGPT to blame, or have we simply not trained our users correctly? Looking at the last 20 years of technology disruptors — social media, VirusTotal, and now ChatGPT — we can certainly learn from our own mistakes with end-user awareness training. Suffice it to say, enhancing training materials to cover the use of social media was slow; our users had to learn the hard way about compliance and protecting corporate secrets. Additionally, I have yet to hear of an organization that explicitly lists public sandboxes (like VirusTotal) in its security training and discourages the use of those sites.

Considering we're on the cusp of an AI revolution, we may want to learn from our prior experiences and proactively start talking about ChatGPT, AI, and everything in between. Their popularity will only continue to grow among both our users and threat actors. The more we create awareness, teach good use of the "free" service, and encourage our users to uphold our corporate integrity, the more we lessen the chance of the service being abused by our insiders. (Note: I purposely didn't mention threat actors here, because they will abuse anything.)

My message to CISOs, risk managers, and cybersecurity trainers: incorporate ChatGPT and AI into your policies, procedures, training materials, TED Talks, fireside chats, and every other form of communication. It is on everyone's minds, and people will continue to explore. We need to set parameters within which they can operate, just as we have done with social media in the workplace.


Steve

Cybersecurity evangelist and cybercrime investigator who has investigated thousands of events involving ransomware, insider threats, and regulatory inquiries.