I finally broke down and spent a little time playing with ChatGPT. It’s pretty cool, and I think it will have far-reaching security impacts - especially for phishing and social-engineering-based attacks. Like all tools, it is inherently neither good nor evil; it’s simply about how you use it. On that note, it’s useful to look at tools from a capabilities perspective, and then to understand how those capabilities can affect the problem you’re assessing.
Before we dig in, I want to make a few points. For starters, with nascent technologies, the sky is almost never falling. As it relates to security, that is certainly the case here: nothing below suggests that the security game has changed overnight. Secondly, I’m using ChatGPT simply because it is accessible and fun to mess around with - this is not any sort of indictment of their technology. They have done quite a bit to try to prevent these scenarios (see some of the policy alerts in the dialogues below). The point is simply to illustrate the broader implications of AI and how it may interact with humans. Lastly, we should assume that these capabilities get 10x better over some short period of time - they are core innovations and will improve. That does give everyone a time horizon within which to protect themselves.
As it relates to security, here are the immediately obvious capabilities:
Capability 1: The ability to craft compelling, contextual, error-free messages
These messages aren’t amazing. But… they’re not terrible. That’s saying something, since they were auto-generated. Whether we like it or not, one of the biggest tells on phishing and social-engineering messages is still, frankly, typos and awkward English.
(Actual message sent to C1 employee)
Awkwardly crafted messages and language are like the faint whiff of natural gas when there’s a gas leak. It’s the thing that immediately makes someone’s ears perk up and think: something is not quite right here.
Assuming this gets better, ChatGPT clearly solves this problem entirely. The ability to craft good, error-free, contextual, non-awkward messages at scale is on the horizon.
Capability 2: The ability to have AI driven conversations at scale
AI bots allow you to scale human interaction in a potentially very authentic way. I only did a little testing here, but you can feed the AI information about the recipient, and that context can be used to generate responses. What’s their role? Are they shy? Where do they live? How long have you worked with them? All of this information can be leveraged to create a more compelling auto-generated dialogue.
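To make this concrete, here is a minimal sketch of how recipient context could be folded into a prompt template before it is handed to a language model. Everything here is illustrative: the field names and the `build_prompt` helper are hypothetical, and the actual model call is omitted.

```python
# Hypothetical sketch: folding known recipient attributes into a prompt.
# Field names and this helper are illustrative, not any real API.

def build_prompt(recipient: dict, goal: str) -> str:
    """Assemble a context-rich prompt from recipient attributes."""
    tone = "gentle and low-pressure" if recipient["shy"] else "direct"
    lines = [
        f"You are writing to {recipient['name']}, a {recipient['role']}.",
        f"They are based in {recipient['location']}.",
        f"You have worked with them for {recipient['tenure_years']} years.",
        f"Tone: {tone}.",
        f"Goal of the message: {goal}",
    ]
    return "\n".join(lines)

prompt = build_prompt(
    {"name": "Alex", "role": "payroll analyst", "location": "Denver",
     "tenure_years": 3, "shy": True},
    goal="ask them to review an attached document",
)
print(prompt)
```

The point is how cheap this is: once the template exists, generating a personalized, tonally tuned message for every person in a directory is a loop, not a research project.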
The future: An asymmetrical shift in phishing and social engineering attacks
We have to assume these capabilities get 10x better over the next few years. What does that mean for security? Most obviously: at-scale social engineering and phishing will get way better and 100x cheaper. It used to be that phishing was either (a) broad-based and unsophisticated, because it was machine generated and spammy, or (b) targeted and very good, because it was research based with a human in the loop. These technical advancements collapse (a) into (b): they allow machines to execute broad-based attacks at scale that look like, and are as good as, targeted attacks. That means they can engage all of the employees at your company, simultaneously, with contextual, responsive, authentic dialogue. Since these attacks are just numbers games, someone WILL click the link or respond to the message.
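The numbers-game claim is worth a back-of-the-envelope check. If each employee independently falls for a well-crafted message with probability p, the chance that at least one of n employees does is 1 - (1 - p)^n. The per-person rate below is an illustrative assumption, not a measured figure:

```python
# Back-of-the-envelope for the "numbers game": probability that at least
# one of n independent recipients falls for a message, given a per-person
# success rate p. The 1% rate is an illustrative assumption.

def p_at_least_one(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

for n in (10, 100, 1000):
    print(n, round(p_at_least_one(0.01, n), 3))
```

Even a modest 1% per-person rate yields roughly a 63% chance of at least one success across 100 employees, and near certainty across 1,000 - which is exactly why scale changes the economics of the attack.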
What you should do about it
The great news is that the security answer is the same as before: strong cryptographic authentication, least standing privilege, and just-in-time access control. Everyone needs to ditch SMS, OTP, push verification, shared secrets, and any phishable or social-engineerable credential, and move to WebAuthn or passkey-based authenticators… yesterday. If you’re not already on this journey as a company, make it a priority for 2023. And we need to move from over-privileged, birthright-driven access to just-in-time provisioning and least standing privilege, especially for sensitive apps and resources. Identity is, and will continue to be, one of the most important threat vectors. Attackers are logging in, not breaking in. Over-privileged users are an open risk to your organization.
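To illustrate the just-in-time idea (setting aside the WebAuthn protocol details, which live at the browser/authenticator layer), here is a minimal sketch of a grant store where access is issued per-request with a TTL and checked on every use. The `JITAccess` class and its interface are hypothetical, intended only to show the shape of the pattern:

```python
# Minimal sketch of just-in-time access with no standing privilege.
# The JITAccess store and its API are hypothetical illustrations: access
# is granted per-request with a TTL and simply stops working on expiry.

import time
from dataclasses import dataclass

@dataclass
class Grant:
    user: str
    resource: str
    expires_at: float  # epoch seconds

class JITAccess:
    def __init__(self) -> None:
        self._grants: dict[tuple[str, str], Grant] = {}

    def grant(self, user: str, resource: str, ttl_seconds: float) -> Grant:
        g = Grant(user, resource, time.time() + ttl_seconds)
        self._grants[(user, resource)] = g
        return g

    def is_allowed(self, user: str, resource: str) -> bool:
        g = self._grants.get((user, resource))
        return g is not None and time.time() < g.expires_at

store = JITAccess()
store.grant("alice", "prod-db", ttl_seconds=900)  # 15-minute window
print(store.is_allowed("alice", "prod-db"))   # allowed while unexpired
print(store.is_allowed("alice", "billing"))   # never granted, so denied
```

The design point: a phished account with no standing grants yields the attacker nothing until a fresh, time-boxed grant is issued - which is exactly the window you want to shrink.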