Connector Logs

The following is not supported in Tenable FedRAMP Moderate environments. For more information, see the Tenable FedRAMP Product Offering.

Tenable Exposure Management allows you to connect with third-party tools through connectors. To ensure visibility into sync operations, the platform includes a detailed Sync Log for each connector.

The Connector Logs in Tenable Exposure Management provide detailed insights into the processing lifecycle of your connectors.

You can use the connector logs to:

  • Track sync history and progress

  • Understand sync stages and timing

  • Identify failed syncs and troubleshoot issues

  • Filter log entries by Activity type, Data Stage, or Log Level

Access Connector Logs

To view a connector's logs:

  1. Navigate to the Connectors page.

  2. In the table, locate the connector whose logs you want to view and click Show logs.

    The connector logs appear.

Reading Connector Logs

The connector logs are presented in a user-friendly table format. Each sync is listed separately, making it easy to distinguish between sync cycles. Expand a sync listing to access more details, data lifecycle stages, and any warnings or additional sync information.

  • The table retains data for a 14-day period, offering extensive visibility into your sync history.

  • The logs present the sync start time, data and processing durations, status, and type.

Log Time Stamp

Log timestamps use your local browser time zone to ensure alignment with your environment.

Activity (Sync) Type

Tenable Exposure Management synchronizes data with vendor connectors in two distinct ways: Full Sync and Incremental Sync.

  • Full Sync: A Full Sync pulls all available data from the vendor’s system and ingests it into Exposure Management. This ensures that the platform has a complete, up-to-date view of the data supported by the connector.

  • Incremental Sync: An Incremental Sync retrieves only new or changed records since the last successful sync. This method is used with connectors that support segmented or delta-based data retrieval, improving performance and reducing API consumption.
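
To make the distinction concrete, the following sketch contrasts the two approaches. It is illustrative only: the endpoint, the bearer-token header, and the updated_since parameter are assumptions about a generic vendor API, not Tenable's or any specific vendor's implementation.

    import requests

    VENDOR_API = "https://vendor.example.com/api/v1/assets"  # hypothetical endpoint

    def full_sync(token: str) -> list[dict]:
        """Full Sync: pull every available record from the vendor's system."""
        response = requests.get(
            VENDOR_API, headers={"Authorization": f"Bearer {token}"}, timeout=60
        )
        response.raise_for_status()
        return response.json()["assets"]

    def incremental_sync(token: str, last_successful_sync: str) -> list[dict]:
        """Incremental Sync: pull only records created or changed since the
        last successful sync, reducing API consumption where supported."""
        response = requests.get(
            VENDOR_API,
            headers={"Authorization": f"Bearer {token}"},
            params={"updated_since": last_successful_sync},  # hypothetical delta filter
            timeout=60,
        )
        response.raise_for_status()
        return response.json()["assets"]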

Data Stage (Sync Status)

Each connector sync is assigned a high-level status, which appears in the connector's log.

Done

  • The sync completed successfully.

  • All lifecycle stages ran to completion without errors.

  • Data is now updated and visible in Inventory, Weaknesses, and Analytics.

Notes: Use this status to confirm that data was successfully ingested and is reflected in the platform.

In Progress

  • The sync is currently running.

  • One or more stages (e.g., Fetching, Normalization, or Processing) is still active.

  • The log updates in real time as the sync progresses.

Notes: Check the logs live to monitor which stage the sync is currently in and how long it is taking.

Failed

  • The sync did not complete successfully.

  • One or more stages failed (most commonly Connectivity Test, Fetching, or Processing).

  • Error details appear in the Sync Log entry, including the error message, timestamp, affected stage, and, where available, a root cause or suggested next step.

Notes: Open the Sync Log and review the failed stage to troubleshoot.
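
As a conceptual illustration of how these statuses might be triaged, the sketch below scans a list of sync log entries and surfaces the failed ones. The entry fields (started_at, type, status) are assumptions for the example, not a documented schema.

    # Illustrative only: these field names are assumptions, not a documented schema.
    sync_log = [
        {"started_at": "2024-05-01T02:00:00Z", "type": "Full", "status": "Done"},
        {"started_at": "2024-05-02T02:00:00Z", "type": "Incremental", "status": "Failed"},
        {"started_at": "2024-05-03T02:00:00Z", "type": "Incremental", "status": "In Progress"},
    ]

    # Failed syncs are the entries worth expanding in the UI to find the
    # failed stage and its error message.
    for entry in sync_log:
        if entry["status"] == "Failed":
            print(f"{entry['started_at']}: {entry['type']} sync failed - review its stages")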

Data Lifecycle Stages

Each connector sync goes through a structured set of backend stages. These stages are visible when expanding a sync log, allowing you to understand what happens during each part of the sync and where failures may occur.

Initiating

  • Begins the sync process.

  • Validates the sync configuration and prepares the connector for execution.

  • Triggers a connectivity test (the same logic as the manual connectivity check in the connector settings).

  • Logs basic metadata (e.g., sync type, trigger source).

Possible Failures: Invalid credentials, expired tokens, network or authorization errors.

Connectivity Test

  • Confirms the platform can reach and authenticate with the vendor’s API.

  • Verifies endpoint availability and token validity.

  • Stops the sync if the vendor system is unavailable or responds with an error (e.g., 403, 500).

This step is critical for establishing trust before any data is pulled.

Possible Failures:

  • Request issues (missing vendor information or internal errors).

  • Permission or credential issues.

  • Network or server errors on the vendor's side.
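
The sketch below shows what a connectivity test of this kind typically looks like. The /status health-check route, the bearer-token header, and the specific error handling are assumptions about a generic vendor API, not Tenable's implementation.

    import requests

    def connectivity_test(api_url: str, token: str) -> None:
        """Fail fast if the vendor API is unreachable or rejects the credentials."""
        try:
            response = requests.get(
                f"{api_url}/status",  # hypothetical health-check route
                headers={"Authorization": f"Bearer {token}"},
                timeout=30,
            )
        except requests.ConnectionError as exc:
            raise RuntimeError(f"Network error reaching the vendor: {exc}") from exc
        if response.status_code == 403:
            raise RuntimeError("Permission or credential issue (HTTP 403)")
        if response.status_code >= 500:
            raise RuntimeError(f"Vendor server error (HTTP {response.status_code})")
        response.raise_for_status()  # any other non-2xx response also stops the sync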

Fetching

  • Calls the vendor’s API to pull asset, vulnerability, or configuration data.

  • The volume and duration of this step vary based on API rate limits, data size, and fetch type (Full vs. Incremental).

  • Data is retrieved as raw JSON from the vendor system.

This is often the longest stage, especially for high-volume connectors (e.g., cloud or endpoint sources).

Possible Failures:

  • Invalid status codes, response headers, or formats.

  • Vendor server or network issues.
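
A rough sketch of a paginated fetch with rate-limit handling follows. The /assets route, the page parameter, and the Retry-After behavior are assumptions about a generic vendor API; real connectors use whatever pagination the vendor supports.

    import time
    import requests

    def fetch_all(api_url: str, token: str) -> list[dict]:
        """Pull raw JSON records page by page, backing off when rate limited."""
        headers = {"Authorization": f"Bearer {token}"}
        records: list[dict] = []
        page = 1
        while True:
            response = requests.get(
                f"{api_url}/assets", headers=headers, params={"page": page}, timeout=60
            )
            if response.status_code == 429:  # rate limited: wait, then retry the same page
                time.sleep(int(response.headers.get("Retry-After", "10")))
                continue
            response.raise_for_status()
            batch = response.json().get("assets", [])
            if not batch:  # an empty page marks the end of the data set
                return records
            records.extend(batch)
            page += 1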

Normalizing

  • Transforms raw vendor data into Tenable’s internal standard format.

  • Applies translation logic to fields (e.g., IP, hostname, asset metadata, vulnerability schema).

  • Discards malformed or unusable records.

  • Tags records for traceability to the original source and sync cycle.

The purpose of this stage is to ensure consistency across all data sources for unified presentation in Inventory, Exposure View, and Analytics.

Possible Failures: Internal issues in syncing data.
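
The sketch below illustrates the general idea of normalization: mapping vendor-specific field names onto one internal shape, discarding records that cannot be identified, and tagging each record with its source and sync cycle. All field names on both sides are illustrative, not Tenable's actual schema.

    def normalize(raw: dict, source: str, sync_id: str):
        """Translate one raw vendor record into a common internal shape,
        or return None for malformed records that should be discarded."""
        hostname = raw.get("host_name") or raw.get("hostname")
        ip = raw.get("ip_address")
        if not hostname and not ip:
            return None  # unusable record: no way to identify the asset
        return {
            "hostname": hostname,
            "ip": ip,
            "os": raw.get("operating_system"),
            "source": source,    # traceability back to the original connector
            "sync_id": sync_id,  # traceability back to the sync cycle
        }

    raw_records = [
        {"host_name": "web-01", "ip_address": "10.0.0.5", "operating_system": "Linux"},
        {"comment": "no identifying fields"},  # discarded during normalization
    ]
    normalized = [n for r in raw_records if (n := normalize(r, "vendor-x", "sync-42"))]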

Processing

Enriches the normalized data with:

  • Deduplication logic

  • Asset merging (if multiple sources report the same entity)

  • Tag applications (e.g., host groups, source tags)

  • Risk metadata attachment (e.g., Exposure Score calculation)

Connects findings to asset records.

This is the final staging layer before data becomes visible in the platform.

Possible Failures: Internal processing issues.
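
As a conceptual illustration of deduplication, asset merging, and connecting findings to assets, consider the sketch below. Merging on hostname alone and the field names used are simplifying assumptions for the example, not Tenable's actual matching logic.

    from collections import defaultdict

    def merge_assets(assets: list[dict]) -> dict[str, dict]:
        """Collapse records that describe the same entity, keeping every source."""
        merged: dict[str, dict] = {}
        for asset in assets:
            key = asset["hostname"]  # simplistic merge key for illustration
            if key not in merged:
                merged[key] = {"hostname": key, "sources": {asset["source"]}}
            else:
                merged[key]["sources"].add(asset["source"])  # deduplicate, keep provenance
        return merged

    def attach_findings(merged: dict[str, dict], findings: list[dict]) -> None:
        """Connect each finding to the asset record it belongs to."""
        by_host = defaultdict(list)
        for finding in findings:
            by_host[finding["hostname"]].append(finding["title"])
        for key, asset in merged.items():
            asset["findings"] = by_host.get(key, [])

    assets = [
        {"hostname": "web-01", "source": "vendor-x"},
        {"hostname": "web-01", "source": "vendor-y"},  # same entity, second source
    ]
    findings = [{"hostname": "web-01", "title": "Outdated OpenSSL"}]
    merged = merge_assets(assets)
    attach_findings(merged, findings)
    # merged["web-01"] now lists both sources and its connected finding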