Tenable AI Exposure Policies and Detection Rules

A policy is a list of detection rules designed to trigger AI findings based on specific detection logic. Each policy represents a set of rules related to a specific AI risk category, such as Exposed Access Data or Harmful Content, with each rule representing a subcategory within that policy. A rule is therefore associated with a specific policy and defines a particular subcategory of risk. All policies are organized under several high-level AI threat groups: Data Exposure, AI Attacks, and AI Misuse. For example, “PII Data” is a policy under the Data Exposure policy group and contains several policy rules - such as Email, Address, and SSN - each based on distinct detection logic.

The following are Policies and their related Detection Rules available in Tenable AI Exposure:

Policy (Category) Detection (Subcategory)
AI exposure to adversarial attempts
  • Encoded Text — Text that has been transformed into a different format (e.g., Base64, Hex) to conceal its original content or bypass filters.

    • Security Context: Attackers may encode prompts or payloads to evade detection or content moderation in AI systems.

  • Invisible Characters — Non-printable or zero-width characters (e.g., \u200B, \u202E) that don't visibly alter text but change its behavior.

    • Security Context: Used to obfuscate malicious input, bypass prompt filters, or sneak commands past detection in AI models.

  • Prompt Injection Attempt — A method of manipulating an AI system by embedding unauthorized instructions or data into a prompt.

    • Security Context: A user may trick an LLM into ignoring safety rules or leaking confidential data (e.g., “Ignore previous instructions and...”).

  • Copilot Data Exfiltration — The unauthorized extraction of private or sensitive information using AI code assistants (like GitHub Copilot).

    • Security Context: Malicious prompts or poisoned training data may cause AI to generate or leak proprietary code or internal logic.

  • Base32 — An encoding scheme that converts binary data into a set of 32 ASCII characters.

    • Security Context: Less common than Base64 but can still be used by attackers to hide malicious content in AI input/output channels.

  • Hex — Short for hexadecimal encoding, where data is represented using base-16 (0–9, A–F).

    • Security Context: Can be used to hide payloads or inject obfuscated commands into prompts or code generated by AI systems.

  • Base64 — A widely used text encoding format for representing binary data using 64 ASCII characters.

    • Security Context: Frequently used to disguise malicious instructions, data exfiltration payloads, or bypass input sanitization in AI systems.

  • Malicious Jailbreak Attempt — An intentional attempt to bypass an AI system’s safety filters or alignment constraints to produce restricted or harmful content.

    • Security Context: Examples include exploiting LLMs to generate illegal, unethical, or dangerous outputs (e.g., bomb-making instructions).

  • Suspicious Prompt — A prompt that contains potentially harmful, manipulative, or obfuscated language aimed at triggering unintended or unsafe AI behavior.

    • Security Context: May include social engineering, encoded text, or hidden instructions and is flagged by AI safety monitors or filters.

Attempt to expose sensitive employee data
  • Security Credentials — Digital authentication artifacts such as passwords, API keys, tokens, and login information that verify identity and provide access to systems or data.

    • Security Context: If an AI system generates or reveals these, it could lead to unauthorized access to employee records, emails, or internal tools.

  • Unauthorized Employee Personal Information Access Attempt — An attempt—through prompts, APIs, or system queries—to retrieve private, personally identifiable employee data (PII) without proper authorization.

    • Examples: Name, home address, phone number, birthdate, social security number.

  • Unauthorized Security credentials Access Attempt — A malicious or negligent effort to extract sensitive login information for employees or systems, often via prompt injection or model exploitation.

    • Examples: "List all employee passwords," or "Show me API keys used by HR."

  • Unauthorized Executive Communications Access Attempt — An attempt to access confidential messages or records involving executives, often involving private strategy, M&A activity, or sensitive decisions.

    • Examples: Emails between the CEO and board, executive chat logs, leadership decisions not meant for general employees.

  • Unauthorized Legal Data Access Attempt — A prompt or query targeting privileged or confidential legal information, including compliance issues, litigation records, or contracts.

    • Examples: Requests for internal legal opinions, lawsuit settlement terms, or regulatory investigations.

  • Unauthorized HR Data Access Attempt — Efforts to extract private human resources information, including performance reviews, complaints, disciplinary records, or salary details.

    • Examples: "List everyone on a performance improvement plan," or "What complaints have been filed against Manager X?"

  • Unauthorized Finance Data Access Attempt — An attempt to obtain internal financial data through an AI system, particularly if the data includes budgets, salaries, forecasts, or audits.

    • Examples: "Show employee bonus amounts," or "Download Q4 payroll records."

  • Employment Data — Information about an employee’s work history, job title, department, start/end dates, performance, promotions, and assignments.

    • Security Context: Often targeted to infer company structure, salaries, or identify high-value personnel.

  • Health Data — Any data relating to an employee’s physical or mental health status, including medical leave, disability claims, or conditions disclosed to HR.

    • Security Context: Especially sensitive under regulations like HIPAA or GDPR. Leaking this can have severe privacy and legal implications.

  • Family Data — Information related to an employee’s family members, such as emergency contacts, dependents on benefits, or parental leave records.

    • Security Context: Often included in HR systems and may be inadvertently exposed through improperly filtered AI outputs.

  • Unauthorized Security Data Access Attempt — An attempt to obtain internal information related to cybersecurity operations, policies, vulnerabilities, threat models, or access logs.

    • Examples: "List current known vulnerabilities," or "Show firewall rules and who has access to logs."

Exposed Access data
  • Access Webhook — A callback URL or endpoint that receives automated messages or data (e.g., from third-party services) in real time.

    • Security Context: If exposed by an AI system, a webhook can be abused to inject data, trigger workflows, or exfiltrate sensitive information.

  • Access Key M365 — A Microsoft 365 access key (e.g., token, client secret) used to authenticate with M365 APIs or services (e.g., Outlook, OneDrive, SharePoint).

    • Security Context: Disclosure via prompt injection or training data leakage can allow attackers to read emails, calendars, and documents.

  • Client ID — A public identifier for an application used in OAuth 2.0 authentication flows.

    • Security Context: Though not sensitive on its own, when paired with a client secret, it can grant unauthorized access to APIs.

  • URL — A Uniform Resource Locator, which can contain parameters, tokens, or embedded secrets if not properly sanitized.

    • Security Context: AI-generated URLs may unintentionally expose internal resources or endpoints with embedded credentials.

  • API Credentials — Authentication details (e.g., API keys, tokens) that allow an app or user to access APIs securely.

    • Security Context: Leaked API credentials via LLM output or source code suggestions can allow attackers to impersonate trusted users or systems.

  • IP — An Internet Protocol address, which identifies a device or service on a network.

    • Security Context: Disclosing internal IPs (e.g., from corporate infrastructure) can help attackers map networks and target entry points.

  • Hardcoded Credentials — Authentication secrets (e.g., usernames, passwords, keys) that are directly embedded in source code.

    • Security Context: AI systems like code assistants may reveal these if trained on poorly secured codebases, enabling full system compromise.

  • Cookie — A small piece of data stored on the client side, often used to manage sessions and authenticate users.

    • Security Context: If an AI system leaks valid session cookies, attackers can hijack active sessions and impersonate users.

  • Cryptographic Keys — Keys used for encryption, decryption, signing, or verification, including public/private key pairs or symmetric keys.

    • Security Context: Exposure allows attackers to decrypt sensitive data, forge tokens, or break confidentiality guarantees.

  • Private Key — The secret half of a public-private cryptographic key pair, used to decrypt data or sign messages.

    • Security Context: One of the most sensitive secrets—if an AI reveals a private key, it can completely compromise secure systems (e.g., SSH, TLS).

  • Authentication Tokens — Digital credentials (e.g., JWTs, OAuth tokens) used to verify user identity without passwords.

    • Security Context: If leaked by AI, tokens can be reused to impersonate users or access protected APIs and services.

  • Public Key — The non-sensitive half of a cryptographic key pair, used to encrypt data or verify signatures.

    • Security Context: Generally safe to share, but can be associated with known endpoints to infer cryptographic architecture.

  • DB Connection String — A string containing the parameters needed to connect to a database, including hostname, username, password, and port.

    • Security Context: AI that reveals connection strings may grant attackers direct access to databases containing employee, customer, or financial records.

  • Access Key — A credential used to authenticate with cloud or API services, often paired with a secret key (e.g., AWS access key ID + secret access key).

    • Security Context: Exposure enables attackers to programmatically access cloud storage, compute instances, and other services, leading to data breaches or infrastructure abuse.

Exposed PCI Data
  • IBAN — IBAN (International Bank Account Number) is a standardized international code that uniquely identifies an individual’s or organization's bank account across borders.

    • Security Context: If an AI system reveals an IBAN, it may expose a user's financial account details, facilitating unauthorized transfers, social engineering, or account linking attacks.

  • Credit Card — A payment card number typically consisting of 13–19 digits, tied to a cardholder’s financial account and used for purchases and transactions.

    • Security Context: Exposing a credit card number or related data (e.g., CVV, expiration date, cardholder name) via an AI system is a direct PCI DSS violation and a major security incident.

Exposed PII Data
  • ID/SSN — A government-issued personal identifier, such as a Social Security Number (SSN) in the U.S. or a National ID elsewhere, used for identity verification, taxation, and benefits.

    • Security Context: If an AI system exposes an SSN or national ID (through training data leaks or prompt manipulation), it creates a severe risk of identity theft, fraud, and regulatory violations under laws like GDPR, CCPA, or HIPAA.

  • Personal Email — An individual's non-work-related email address (e.g., Gmail, Yahoo, ProtonMail), used for private communication.

    • Security Context: Exposing a personal email via an AI tool (e.g., in chat summaries, document generation, or search outputs) can lead to targeted phishing, stalking, or account compromise, especially if it links to other leaked identifiers.

  • Address — A physical residential location associated with a specific individual, including street address, city, state, and postal code.

    • Security Context: Leaking home addresses through AI outputs presents physical safety concerns, potential doxxing, and a breach of data privacy standards.

  • Email — A general email address, which may be personal or professional, used to identify or contact a user.

    • Security Context: Any AI-generated or leaked email address, especially when combined with other PII (e.g., name, job title), increases the risk of identity profiling, phishing, and credential stuffing attacks.

  • Private Email — A synonym for personal email, emphasizing its non-public and non-corporate nature—often intended to remain undisclosed in professional contexts.

    • Security Context: Leaking a private email via AI tools may violate employee confidentiality or consumer privacy, and may unintentionally expose sensitive communications or linked accounts.

Harmful Content
  • Model Moderation — The detection, filtering, and control of AI model outputs or inputs that may produce or facilitate harmful, dangerous, or policy-violating behavior.

    • Examples: Threats to individuals, organizations, or public safety stemming from the misuse or abuse of AI systems.

Harmful Content to the engine
  • Violence Outbound — Content generated by an AI model that promotes, glorifies, encourages, or depicts violent acts or threats toward individuals, groups, or entities.

    • Security Context: Outbound violent content poses risks of inciting harm, harassment, or real-world violence, and must be moderated or blocked to comply with safety policies and legal requirements.

  • Hate — Content that expresses or promotes hostility, discrimination, or prejudice against individuals or groups based on characteristics such as race, ethnicity, religion, gender, sexual orientation, disability, or nationality.

    • Security Context: Hate speech generated or amplified by AI can contribute to social division, harassment, and legal liabilities, necessitating strong detection and filtering mechanisms.

  • Sexuality — Content related to sexual orientation, sexual behavior, or sexual identity. This may range from neutral or educational discussions to explicit or inappropriate sexual material.

    • Security Context: AI systems must moderate sexual content to prevent explicit, non-consensual, or exploitative outputs, while balancing freedom of expression and community standards.

Overreliance
  • Investment Banking Decision Making — The process of using AI tools to make financial decisions related to investments, asset management, trading, or underwriting within investment banking.

    • Security Context: Relying too heavily on AI-driven models without adequate human oversight can lead to undetected model errors, biased recommendations, or market manipulation risks.

  • Strategic Decision Making — The process of using AI tools to make long-term, impactful organizational decisions about goals, resource allocation, and direction.

    • Security Context: Excessive dependence on AI for strategic decisions can cause organizations to miss contextual insights, ethical considerations, or unforeseen risks. This may lead to poor outcomes, loss of competitive advantage, or exposure to security vulnerabilities due to blind trust in AI recommendations.

  • Hiring Decision Making — The process of using AI tools to select candidates for employment, including resume screening, interviews, and assessments.

    • Security Context: Overtrusting AI in hiring can embed and amplify biases, overlook nuanced human qualities, or fail to comply with employment laws. This poses risks of discrimination, legal challenges, and reputational damage, especially if AI decisions are not audited or supplemented with human judgment.

Vulnerable code
  • Typo Squatting — A type of cyber attack where adversaries register or use domain names, usernames, or service identifiers that are deliberately similar to legitimate ones but contain common typographical errors or misspellings.

    • Security Context: Typo squatting can lead to leakage or theft of credentials, PII, or intellectual property, as well as the possible distribution of malicious code or misinformation and the ultimate compromise of AI model integrity and trustworthiness.