GenAI relies heavily on large language models (LLMs) to process user prompts.
The OWASP® Foundation has identified the top 10 threats present when using LLMs. Each item in the list is tagged with an OWASP Top 10 code, from LLM01 through LLM10.
The following briefly details the outbound sensitive-data threats identified in the OWASP Top 10 that IWS protects against:
- Manipulating LLMs via crafted inputs can lead to unauthorized access, data breaches, and compromised decision-making.
- Failure to protect against disclosure of sensitive information in LLM outputs can result in legal consequences or a loss of competitive advantage.
- Neglecting to validate LLM outputs may lead to downstream security exploits, including code execution that compromises systems and exposes data.
- Granting LLMs unchecked autonomy to take action can lead to unintended consequences, jeopardizing reliability, privacy, and trust.
- Exploiting system prompts or instructions can lead to direct leakage of sensitive data by the LLM.
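One of the threats above, failure to validate LLM outputs before they reach downstream systems, can be illustrated with a minimal output-validation layer. This is a hedged sketch, not part of IWS: the suspect patterns and the HTML sink are illustrative assumptions, and real filters are far more extensive.

```python
import html
import re

# Patterns that suggest the model output is trying to carry an exploit
# downstream (illustrative assumptions only; real rule sets are much larger).
SUSPECT_PATTERNS = [
    re.compile(r"<script\b", re.IGNORECASE),   # HTML/JS injection
    re.compile(r";\s*rm\s+-rf\b"),             # shell command smuggling
    re.compile(r"\beval\s*\("),                # dynamic code execution
]

def validate_llm_output(text: str) -> str:
    """Reject output matching suspect patterns, then escape it for an
    HTML sink so it is treated as data, never as code."""
    for pattern in SUSPECT_PATTERNS:
        if pattern.search(text):
            raise ValueError("LLM output failed validation")
    return html.escape(text)
```

The key design choice is that escaping happens even when no pattern matches, so benign-looking output still cannot execute in the rendering context.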
When using third-party or open-source GenAI LLMs, every Business Entity (BE) must manage the attendant supply-chain and third-party risks. For example, malicious open-source models, or open-source models with poor security controls, may introduce vulnerabilities or backdoors, which could also result in data leaks.
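A common supply-chain control for the risk above is to pin a downloaded model artifact to a known-good checksum and refuse to load anything else. A minimal sketch under stated assumptions (the file path and expected hash would come from the BE's own review process; this is not an IWS feature):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream the file in chunks so large model weights
    never need to fit in memory at once."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path: str, expected_sha256: str) -> None:
    """Refuse to load a model artifact whose hash differs from the
    hash pinned when the model was vetted."""
    actual = sha256_of(path)
    if actual != expected_sha256:
        raise RuntimeError(f"model hash mismatch: {actual}")
```

Pinning by hash rather than by version tag matters because a tag in an upstream repository can be silently repointed at a tampered artifact.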
BEs that allow employees to use publicly accessible GenAI tools (e.g. ChatGPT and Gemini) are exposed to common data-leak scenarios:
The following scenarios are commonly observed in prompt injection attacks:
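A crude input-side heuristic can flag prompts that attempt to override system instructions before they reach the model. This is an illustrative sketch only; the marker phrases are assumptions, and real defenses combine trained classifiers with policy layers rather than fixed patterns:

```python
import re

# Phrases frequently seen in instruction-override attempts
# (illustrative assumptions; trivially evaded by paraphrase).
INJECTION_MARKERS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.IGNORECASE),
    re.compile(r"reveal .*system prompt", re.IGNORECASE),
    re.compile(r"you are now", re.IGNORECASE),
]

def looks_like_injection(prompt: str) -> bool:
    """Return True if the prompt matches any known override phrase."""
    return any(marker.search(prompt) for marker in INJECTION_MARKERS)
```

Because such patterns are easy to paraphrase around, a heuristic like this serves only as a first filter, not a complete defense.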
Where such data leaks involve confidential information or intellectual property, the BE could face:
Commonly deployed DLP measures today include:
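One widely deployed DLP measure, pattern-based content inspection, scans outbound text for sensitive-data signatures before it leaves the network. A minimal sketch, assuming two illustrative rules (production DLP engines ship thousands of rules plus validation logic such as Luhn checks for card numbers):

```python
import re

# Illustrative signatures only, not a real DLP rule set.
DLP_RULES = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def scan_outbound(text: str) -> list[str]:
    """Return the names of all rules that match the outbound payload,
    so the caller can block, redact, or log the transmission."""
    return [name for name, rule in DLP_RULES.items() if rule.search(text)]
```

Returning the matched rule names, rather than a bare boolean, lets the enforcement point choose a per-rule response (block, redact, or alert).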
Infotect's patented innovations effectively detect outbound anomalies and provide 24 x 7 protection against:
The table below maps threats, impact, and countermeasures to IWS protection across the people, process, and technology aspects:
| Aspect | Threats | Impact | Countermeasures | IWS Protection |
|---|---|---|---|---|
| People | | | | |
| Process | | | | |
| Technology | | | | |