Generative Data Intelligence

Navigating the security and privacy challenges of large language models

Date:

Business Security

Organizations that intend to tap the potential of LLMs must also be able to manage the risks that could otherwise erode the technology’s business value

Navigating the security and privacy challenges of large language models

Everyone’s talking about ChatGPT, Bard and generative AI as such. But after the hype inevitably comes the reality check. While business and IT leaders alike are abuzz with the disruptive potential of the technology in areas like customer service and software development, they’re also increasingly aware of some potential downsides and risks to watch out for.

In short, for organizations to tap the potential of large language models (LLMs), they must also be able to manage the hidden risks that could otherwise erode the technology’s business value.

What’s the deal with LLMs?

ChatGPT and other generative AI tools are powered by LLMs. They work by using artificial neural networks to process enormous quantities of text data. After learning the patterns between words and how they are used in context, the model is able to interact in natural language with users. In fact, one of the main reasons for ChatGPT’s standout success is its ability to tell jokes, compose poems and generally communicate in a way that is difficult to tell apart from a real human.

RELATED READING: Writing like a boss with ChatGPT: How to get better at spotting phishing scams

The LLM-powered generative AI models, as used in chatbots like ChatGPT, work like super-charged search engines, using the data they were trained on to answer questions and complete tasks with human-like language. Whether they’re publicly available models or proprietary ones used internally within an organization, LLM-based generative AI can expose companies to certain security and privacy risks.

5 of the key LLM risks

1. Oversharing sensitive data

LLM-based chatbots aren’t good at keeping secrets – or forgetting them, for that matter. That means any data you type in may be absorbed by the model and made available to others or at least used to train future LLM models. Samsung workers found this out to their cost when they shared confidential information with ChatGPT while using it for work-related tasks. The code and meeting recordings they entered into the tool could theoretically be in the public domain (or at least stored for future use, as pointed out by the United Kingdom’s National Cyber Security Centre recently). Earlier this year, we took a closer look at how organizations can avoid putting their data at risk when using LLMs.

2. Copyright challenges  

LLMs are trained on large quantities of data. But that information is often scraped from the web, without the explicit permission of the content owner. That can create potential copyright issues if you go on to use it. However, it can be difficult to find the original source of specific training data, making it challenging to mitigate these issues.

3. Insecure code

Developers are increasingly turning to ChatGPT and similar tools to help them accelerate time to market. In theory it can help by generating code snippets and even entire software programs quickly and efficiently. However, security experts warn that it can also generate vulnerabilities. This is a particular concern if the developer doesn’t have enough domain knowledge to know what bugs to look for. If buggy code subsequently slips through into production, it could have a serious reputational impact and require time and money to fix.

4. Hacking the LLM itself

Unauthorized access to and tampering with LLMs could provide hackers with a range of options to perform malicious activities, such as getting the model to divulge sensitive information via prompt injection attacks or perform other actions that are supposed to be blocked. Other attacks may involve exploitation of server-side request forgery (SSRF) vulnerabilities in LLM servers, enabling attackers to extract internal resources. Threat actors could even find a way of interacting with confidential systems and resources simply by sending malicious commands through natural language prompts.

RELATED READING: Black Hat 2023: AI gets big defender prize money

As an example, ChatGPT had to be taken offline in March following the discovery of a vulnerability that exposed the titles from the conversation histories of some users to other users. In order to raise awareness of vulnerabilities in LLM applications, the OWASP Foundation recently released a list of 10 critical security loopholes commonly observed in these applications.

5. A data breach at the AI provider

There’s always a chance that a company that develops AI models could itself be breached, allowing hackers to, for example, steal training data that could include sensitive proprietary information. The same is true for data leaks – such as when Google was inadvertently leaking private Bard chats into its search results.

What to do next

If your organization is keen to start tapping the potential of generative AI for competitive advantage, there are a few things it should be doing first to mitigate some of these risks:

  • Data encryption and anonymization: Encrypt data before sharing it with LLMs to keep it safe from prying eyes, and/or consider anonymization techniques to protect the privacy of individuals who could be identified in the datasets. Data sanitization can achieve the same end by removing sensitive details from training data before it is fed into the model.
  • Enhanced access controls: Strong passwords, multi-factor authentication (MFA) and least privilege policies will help to ensure only authorized individuals have access to the generative AI model and back-end systems.
  • Regular security audits: This can help to uncover vulnerabilities in your IT systems which may impact the LLM and generative AI models on which its built.
  • Practice incident response plans: A well rehearsed and solid IR plan will help your organization respond rapidly to contain, remediate and recover from any breach.
  • Vet LLM providers thoroughly: As for any supplier, it’s important to ensure the company providing the LLM follows industry best practices around data security and privacy. Ensure there’s clear disclosure over where user data is processed and stored, and if it’s used to train the model. How long is it kept? Is it shared with third parties? Can you opt in/out of your data being used for training?
  • Ensure developers follow strict security guidelines: If your developers are using LLMs to generate code, make sure they adhere to policy, such as security testing and peer review, to mitigate the risk of bugs creeping into production.

The good news is there’s no need to reinvent the wheel. Most of the above are tried-and-tested best practice security tips. They may need updating/tweaking for the AI world, but the underlying logic should be familiar to most security teams.

FURTHER READING: A Bard’s Tale – how fake AI bots try to install malware

spot_img

Latest Intelligence

spot_img

Chat with us

Hi there! How can I help you?