OWASP Top 10 for Large Language Model(LLM) Applications & Generative AI

GenAI relies heavily on LLM to process prompts from users.
The OWASP® Foundation has identified its top 10 potential threats present when using LLM.
Each item in the list is tagged with the OWASP Top 10 code beginning with LLM01 until LLM10.


The following briefly details outbound sensitive data threats identified by the OWASP Top 10 that are protected by IWS:


  • LLM01:2025 Prompt Injection

    Manipulating LLMs via crafted inputs can lead to unauthorized access, data breaches, and compromised decision-making


  • LLM02:2025 Sensitive Information Disclosure

    Failure to protect against disclosure of sensitive information in LLM outputs can result in legal consequences or a loss of competitive advantage.


  • LLM05:2025 Improper Output Handling

    Neglecting to validate LLM outputs may lead to downstream security exploits, including code execution that compromises systems and exposes data.

  • LLM06:2025 Excessive Agency

    Granting LLMs unchecked autonomy to take action can lead to unintended consequences, jeopardizing reliability, privacy, and trust.


  • LLM07:2025 System Prompt Leakage

    Exploiting system prompts or instructions can lead to direct leakage of sensitive data by the LLM.



Threats Targeting GenAI Deployment Using LLMs

While using third-party or open-source GenAI LLMs, every Business Entity (BE) will need to manage the attendant supply chain and third-party risks.
For example, malicious open-source models or open-source models with poor security controls may introduce vulnerabilities or backdoors, which could also result in data leaks.


Data Leakage from GenAI Deployment by Employees

BEs that allow employees to use publicly accessible GenAI tools (e.g. ChatGPT and Gemini) could be subject to potential common data leak scenarios:

  • Employees submit or upload sensitive data while using those tools.
  • BEs could also be exposed to data leakage risks through unauthorised insider actions and improper data handling when using the GenAI solutions.

Data Leaks from Prompt Injection Attacks Against GenAI Solution(s)

In a prompt injection attack, the following are commonly observed scenarios:

  • The threat actor provides a malicious input to lead the GenAI model into revealing sensitive information in the generated response.
  • Instances of jailbreak attacks, where the threat actor uses specially crafted prompts to bypass the security controls and safeguards implemented to make the GenAI solution disclose sensitive information.

Consequences of Data Leakage from GenAI Deployment

Where such data leaks involve confidential information and intellectual property, the BE could end up facing:

  • Legal consequences
  • Regulatory consequences
  • Reputational consequences

General Countermeasures Recommended for DLP

Current commonly deployed DLP measures are:

  • implementations of purpose-built firewalls for GenAI models that analyse user inputs to detect any attempts to extract data or exploit the GenAI solutions.
  • DLP controls to check for any sensitive data provided in the prompts, as well as the responses generated by their GenAI solutions.

DLP Outbound Security Gap

  • GenAI firewalls and countermeasures alone are currently not fully able to prevent outbound data breaches should malicious compromises occur.
  • IWS provides a crucial last line of defence against compromised outbound sensitive data from GenAI countermeasures failure.


Infotect's IWS GenAI DLP Capabilities

Infotect's patented innovations effectively detects outbound anomalies and provide 24 x 7 protection against:

  • Leakages of personal, medical and financial records.
  • Server configuration information.
  • Malicious or compromised content in public websites and prevent it from being spread to unknowing site visitors.

IWS protection for DLP countermeasures:


Aspect Threats Impact Countermeasures IWS Protection
People
  • Intentional or unintentional data leaks by employees to public GenAI models
  • Loss of customer data/ PII and FI secrets
  • Regulatory consequences and reputational damage
  • Conduct awareness campaigns
  • Implement data classification for data which can be entered into GenAI models
  • Prevents failed countermeasures' compromised sensitive information, found in the web servers' outbound HTTP traffic, from reaching the public through deep detection of sensitive information
  • Once compromised information is detected, IWS will send a HTTP redirect response to the browser, redirecting the visitor to a pre-configured sanitized error page
Process
  • Vulnerabilities or security weaknesses in in-house developed GenAI models
  • Risks of supply chain attack arising from the use of third party or open-source GenAI models
  • Data Leakage leading to loss of sensitive information
  • Backdoors and in-built vulnerabilities
  • Adopt security best practices while developing GenAI models
  • Conduct third-party provided or open-source GenAI model risk assessment
Technology
  • Inability to detect unusual user inputs
  • Bypass of GenAI model guardrails
  • Loss of sensitive information
  • Data Leak of PII
  • Reputational damage
  • Implement DLP tools and firewalls for GenAI models to mitigate loss of confidential data to GenAI models
  • Introduce controls while developing and using the GenAI models
  • Conduct vulnerability assessments and security testing on GenAI models