ChatGPT has one hundred million users in less than two months, which looks like an existential cybersecurity threat to Google. There is this ubiquitous digital euphoria mixed with fears about the future workplace. Another technological “revolution” is happening, which will affect everything and everyone. Of course, the main culprit behind this viral sensation is ChatGPT, OpenIA’s generative artificial intelligence, which has the potential to finally give us the ability to “talk” humanly to the Internet without it looking like a weird linguistic experiment created by yet another lunatic a tech genius from Silicon Valley.
After the global audience casually enjoyed the capabilities of ChatGPT, somewhere among the exclamations of “it responds like a human,” some anxious thoughts somehow naturally formed the nagging question: “Is this a security threat?” The need for a meaningful answer was further reinforced by Microsoft’s announcement of the new Bing integrated with ChatGPT, as well as the sudden appearance of Bard, Google’s hasty response that made everyone realize that soon Internet searches would change forever.
With a certain sense of déjà vu and urges for edifying outbursts, the cybersecurity expert community has urgently begun trying to clarify what risks the widespread use of artificial intelligence, based on a language model fed by some 300 billion words and phrases, brings with it, systematically extracted from the Internet, including sensitive business information and personal data.
Quite logically and expectedly, for this kind of technology, ChatGPT turns out to be quite adept at social engineering, especially at generating convincing-looking phishing emails and chat messages. And while this doesn’t sound like something unfamiliar and intimidating, it should be noted that the possibility of realistic dynamic interaction exponentially multiplies the possible sources of attack, easily overcoming language barriers and the lack of knowledge of the specifics of the utterance.
Cybersecurity experts have been able to demonstrate how easy it is to generate a malicious code using ChatGPT, which at first glance seems startling, but a little closer look at the “what if” scenario makes it clear that this action in itself is a small part of the cyber-attack life cycle, and “serious” attackers are far from needing the assistance of ChatGPT to get their work done.
In addition to how we can “hack” through ChatGPT arose the question of how it and other chatbots like it could be attacked, with the primary concern coming from the so-called Prompt Injection (or Prompt Hacking attack), which is the introduction of misleading or malicious input text into the prompt of the relevant AI to reveal information which is hidden from the user or to provoke unexpected or prohibited behavior.
Indeed, the cybersecurity voices urging everyone to be very careful with ChatGPT are a bit late and unlikely to change anything.
Still, at least it can serve as a starting point for understanding the dynamic nature of cyber risks and how to deal with them. ChatGPT will not tell us anything new. The basic principles regarding cybersecurity remain the same.
You don’t know where to start from?
Get in touch today to help you secure your company’s digital presence!