Tenable AI Exposure Metrics

The following metrics are used to assess data within Tenable AI Exposure:

Issue and Finding Severity

Issues and Findings are assigned a severity category based on the potential security risk they pose to your business.

Critical

The highest level of issue, representing a clear and active threat with severe consequences.

  • Examples: Confirmed leakage of credentials, full PII/PCI data exfiltration, malicious jailbreak, or compromised AI supply chain.

High

A serious issue that strongly indicates malicious activity or exposure of sensitive data.

  • Examples: Attempted prompt injection, partial PII leakage, unauthorized system access attempts.

Medium

An issue with a moderate risk of leading to harmful behavior or data exposure if left unchecked.

  • Examples: Suspicious prompt attempts, minor exposure of non-sensitive information, weak access control.

Low

A minor issue with little to no immediate security impact.

  • Examples: Harmless prompt misuse, low-confidence anomaly, minor policy violation.
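
To make the ordering concrete, the sketch below models these four categories as a ranked enumeration so that findings can be sorted for triage, most severe first. This is an illustration only; the class and finding names are hypothetical and are not part of the Tenable AI Exposure API.

    # A minimal sketch (not Tenable's API): model the four severity
    # categories as a ranked enum so findings sort most severe first.
    from enum import IntEnum

    class Severity(IntEnum):
        LOW = 1
        MEDIUM = 2
        HIGH = 3
        CRITICAL = 4

    # Hypothetical findings paired with their severity category.
    findings = [
        ("Low-confidence anomaly", Severity.LOW),
        ("Confirmed credential leakage", Severity.CRITICAL),
        ("Attempted prompt injection", Severity.HIGH),
    ]

    # Triage order: Critical first, then High, Medium, Low.
    for name, severity in sorted(findings, key=lambda f: f[1], reverse=True):
        print(f"{severity.name:>8}: {name}")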

User Risk

Users are categorized based on the potential risk they present to your organization.

Critical

User activity represents a direct and active threat to AI security, compliance, and business integrity.

  • Examples: Deliberate attempts to exfiltrate confidential data (PII, PCI, credentials); uploading or extracting executive communications or HR/finance/legal data; successfully bypassing safeguards to cause harmful or unauthorized outputs.

High

User behavior indicates serious attempts to bypass AI security controls or expose sensitive data.

  • Examples: Attempting prompt injection or malicious jailbreaks, sharing sensitive business information (employee data, internal strategy).

Medium

User behavior shows moderate potential for security or compliance issues, and could escalate if repeated or combined with other actions.

  • Examples: Prompts that probe for restricted outputs (but don’t succeed), sharing non-critical business information with an AI model.

Low

User activity poses minimal security risk, with little chance of leading to sensitive data exposure or harmful outcomes.

  • Examples: Entering harmless prompts, minor misuses of AI with no sensitive content.
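
Tenable AI Exposure performs this categorization itself, and the documentation does not describe the underlying calculation. As a loose illustration of how such categories might be consumed downstream, the sketch below assumes a simple heuristic in your own tooling: treat a user's risk as the most severe category among the findings attributed to them. The function and heuristic are assumptions, not product behavior.

    # Illustrative heuristic only, not documented product behavior:
    # a user's risk category equals their most severe attributed finding.
    from enum import IntEnum

    class Risk(IntEnum):
        LOW = 1
        MEDIUM = 2
        HIGH = 3
        CRITICAL = 4

    def user_risk(finding_risks):
        """Return the highest risk observed for a user (assumed heuristic)."""
        return max(finding_risks, default=Risk.LOW)

    # A user with one Medium and one High finding is categorized as High.
    print(user_risk([Risk.MEDIUM, Risk.HIGH]).name)  # HIGH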

Policy and Rule Severity

Policy and Rule severities are user-defined and can be configured in the following locations:

  • Policy Severity — Via the Edit Policy page. For more information, see Edit a Policy.

  • Rule Severity — Via the Edit Rule page. For more information, see Edit a Policy Rule.

Critical

The highest risk level, representing a clear and present security threat with significant potential impact (legal, financial, or reputational).

  • Examples: Confirmed leakage of credentials, financial data, employee PII, or malicious AI-assisted exploit execution.

High

A serious risk event where the detection strongly indicates a security violation or policy breach that could cause harmful output, sensitive data exposure, or exploitation.

  • Examples: Confirmed prompt injection, access key leakage, PII exfiltration attempts, or malicious jailbreak attempts.

Medium

A moderate risk event where the issue could potentially expose sensitive data or enable harmful behavior if not addressed.

  • Examples: Suspicious prompts attempting mild content filter evasion, attempts to query sensitive data without direct access.

Low

A minor risk event where the detected issue poses limited or no immediate security impact.

  • Examples: Benign misuse of prompts, low-confidence suspicious text, or non-sensitive metadata exposure.
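
Because severity can be set at both the policy and the rule level, it can help to picture the two as a nested structure. The sketch below is a hypothetical data model for reasoning about that relationship; the field names and severity strings are assumptions and do not reflect Tenable's internal schema or API.

    # Hypothetical model (not Tenable's schema): a policy carries its own
    # user-defined severity, and each rule beneath it carries one as well.
    from dataclasses import dataclass, field

    @dataclass
    class Rule:
        name: str
        severity: str  # "Critical", "High", "Medium", or "Low"

    @dataclass
    class Policy:
        name: str
        severity: str
        rules: list = field(default_factory=list)

    policy = Policy(
        name="Sensitive Data Exposure",
        severity="High",
        rules=[Rule("PII in prompts", "Critical"), Rule("Internal URLs", "Medium")],
    )

    for rule in policy.rules:
        print(f"{policy.name} / {rule.name}: {rule.severity}")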

Policy and Rule Sensitivity

Policy and Rule sensitivities are user-defined and can be configured in the following locations:

  • Policy Sensitivity — Via the Edit Policy page. For more information, see Edit a Policy.

  • Rule Sensitivity — Via the Edit Rule page. For more information, see Edit a Policy Rule.

High

A stricter rule setting where AI systems are tuned to detect and block even subtle or low-confidence signs of malicious or harmful content.

This sensitivity level:

  • Prioritizes maximum safety and risk prevention (minimizing false negatives).

  • Is ideal for high-security environments (e.g., financial services, healthcare, or sensitive enterprise AI deployments) where even minor risks of data leakage or harmful output are unacceptable.

  • Increases the likelihood of false positives, sometimes blocking safe interactions if they resemble risky patterns.

Balanced

A detection setting where AI security rules balance accuracy and usability, reducing both false positives (overblocking safe content) and false negatives (missing harmful content).

This sensitivity level:

  • Is ideal for environments where moderate risk tolerance exists.

  • Is suitable for everyday AI deployments where user experience and security must both be considered.

  • Helps avoid overblocking harmless user prompts, while still catching most harmful attempts (e.g., data exfiltration, malicious jailbreaks).
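
One way to picture the difference between the two levels is as a confidence threshold on a detector: High sensitivity flags lower-confidence detections (fewer false negatives, more false positives), while Balanced waits for stronger evidence before flagging. The sketch below uses invented threshold values purely for illustration; they are not product settings.

    # Conceptual sketch of the sensitivity trade-off. Threshold values are
    # invented for illustration and are not Tenable product settings.
    THRESHOLDS = {"High": 0.3, "Balanced": 0.6}  # assumed values

    def should_flag(detection_confidence, sensitivity):
        """Flag an interaction when detector confidence meets the threshold."""
        return detection_confidence >= THRESHOLDS[sensitivity]

    # A borderline detection (confidence 0.45) is flagged under High
    # sensitivity but passes under Balanced.
    print(should_flag(0.45, "High"))      # True
    print(should_flag(0.45, "Balanced"))  # False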