How to identify and prevent insecure output handling

Sanitization, validation and zero trust are essential ways to reduce the threat posed by large language models generating outputs that could harm downstream systems and users.

Generative AI is becoming a staple of workflows in many organizations. Users are creating content and making decisions using new tools based on the outputs of large language models, or LLMs. Yet, these new tools usher in new risks that must be identified, assessed and managed.

One concern in particular is insecure output handling. Read on to learn what insecure output handling is, what causes it and how to prevent it.

What is insecure output handling?

Insecure output handling is the failure to validate or sanitize LLM-generated outputs before they are passed to other systems or users. Without proper validation or control, these outputs can propagate false information (hallucinations), introduce security vulnerabilities or expose users to harmful content.

Insecure output handling creates new threats that range from reputational damage to software vulnerabilities, which can in turn open the door to further cybersecurity risks.

What causes insecure output?

Insecure output is an outgrowth of the way LLMs work. LLMs generate probabilistic outputs rather than deterministic ones: instead of following fixed rules, the model samples from a probability distribution over possible responses. As a result, an LLM can return a different response each time it is given the same prompt, and there is no guarantee what the response will be for any particular prompt.

Because of this probabilistic nature, LLMs produce responses that vary in accuracy and appropriateness, depending on their training data and the prompt they receive. Without the right safeguards, this variability can be exploited intentionally or cause harm unintentionally.
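
To make this concrete, here is a toy Python sketch of probabilistic generation. It is not a real LLM; the next-token distribution is made up purely for illustration.

    # Toy illustration of probabilistic generation (not a real LLM).
    # Sampling from a probability distribution means the same "prompt"
    # can produce a different completion on every run.
    import random

    # Hypothetical distribution over the next token after some fixed prompt.
    NEXT_TOKEN_PROBS = {"approved": 0.6, "denied": 0.3, "pending": 0.1}

    def sample_next_token() -> str:
        tokens, weights = zip(*NEXT_TOKEN_PROBS.items())
        return random.choices(tokens, weights=weights, k=1)[0]

    if __name__ == "__main__":
        # Five runs with the same prompt can yield different results.
        print([sample_next_token() for _ in range(5)])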

Insecure output typically arises in one of three ways:

  1. Hallucinations. Hallucinations occur when the model generates information that is factually incorrect or entirely fabricated. These hallucinations can mislead users or systems, resulting in flawed decision-making or incorrect actions. If the generated output is not properly verified, hallucinations can propagate misinformation or errors into a system.
  2. Training data bias. If the data set used to train the model includes biases, those biases can be reflected in the output, leading to discriminatory or unfair outcomes. For example, if an LLM is trained only on the writings of Virginia Woolf (a stream-of-consciousness writer), it will struggle to generate text that reads like Ernest Hemingway (a punchy writer). Therefore, if an organization relies on an LLM that isn't trained on data geared to the types of questions being asked, the results could introduce risks into downstream systems.
  3. Input manipulation. Input manipulation occurs when malicious actors craft inputs designed to provoke the LLM into producing unsafe, incorrect or harmful outputs. Known as prompt injection attacks, these manipulations exploit the model's sensitivity to certain input patterns to generate outputs that should not be trusted or used by downstream systems; the sketch after this list shows how such output can become a vulnerability when it is rendered without sanitization.
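
As a concrete illustration of the downstream risk, consider an application that embeds LLM-generated text in a web page. The Python sketch below is hypothetical (the function names and the review-summary scenario are invented for illustration), but it shows how output produced under a prompt injection attack becomes a cross-site scripting vulnerability when it is rendered without sanitization.

    # Hypothetical example: rendering LLM output into HTML.
    # If the model was manipulated into emitting markup or script,
    # embedding its raw text in a page executes it in the user's browser.
    import html

    def render_summary_unsafe(llm_output: str) -> str:
        # Vulnerable: the model's text is trusted and embedded verbatim.
        return f"<div class='summary'>{llm_output}</div>"

    def render_summary_safe(llm_output: str) -> str:
        # Safer: treat the output as untrusted data and escape it first.
        return f"<div class='summary'>{html.escape(llm_output)}</div>"

    if __name__ == "__main__":
        # Simulated output from a manipulated prompt.
        malicious = "Great product! <script>steal(document.cookie)</script>"
        print(render_summary_unsafe(malicious))  # script tag reaches the browser intact
        print(render_summary_safe(malicious))    # script tag is rendered as inert text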

How to prevent insecure output handling

Insecure output handling poses a serious risk as LLM use continues to increase. Prevention depends on a multilayered approach that includes the following two key measures:

  1. Employ a zero-trust approach to LLM output. Adopt a zero-trust model that treats every LLM output as potentially harmful until it is explicitly validated. This ensures systems and users do not place blind trust in LLM outputs.
  2. Validate and sanitize. Implement stringent validation and sanitization mechanisms to ensure the model's output aligns with known facts, acceptable formats and safety requirements. These checks should cover both known-good patterns (allowlists) and known-bad patterns (blocklists); a sketch combining both appears after this list. Validation and sanitization also include running any LLM-generated code through a rigorous testing process that includes human review.
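
The following Python sketch shows one way to combine these measures when an LLM is asked to return structured JSON. It is a minimal example built on assumptions: the two-field schema, the allowed category values and the validate_llm_output helper are hypothetical, not part of any particular library.

    # Minimal sketch: treat LLM output as untrusted input and validate it
    # against an expected schema before anything downstream uses it.
    import html
    import json

    ALLOWED_CATEGORIES = {"billing", "technical", "account"}  # known-good values

    def validate_llm_output(raw_output: str) -> dict:
        """Parse, check structure against an allowlist, then sanitize."""
        try:
            data = json.loads(raw_output)
        except json.JSONDecodeError as exc:
            raise ValueError("LLM output is not valid JSON") from exc

        # Known-good checks: only accept the fields and values we expect.
        if not isinstance(data, dict) or set(data) != {"category", "summary"}:
            raise ValueError("LLM output does not match the expected schema")
        if data["category"] not in ALLOWED_CATEGORIES:
            raise ValueError(f"Unexpected category: {data['category']!r}")

        # Known-bad handling and sanitization: cap length and escape markup
        # before the summary is shown to users or stored downstream.
        summary = str(data["summary"])[:500]
        data["summary"] = html.escape(summary)
        return data

    if __name__ == "__main__":
        # Simulated model response; in production this would come from an LLM API.
        response = '{"category": "billing", "summary": "Customer asks about a <b>refund</b>."}'
        print(validate_llm_output(response))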

Matthew Smith is a vCISO and management consultant specializing in cybersecurity risk management and AI.
