If you want to deploy agents across a large-scale environment, your deployment strategy must ensure that all agents are continuously active and stay connected to Tenable.io or Nessus Manager.
When deploying many agents, consider using a software deployment tool to push agents through the network.
For Nessus Agents before version 7.4.2, deploy large numbers of agents in batches over a 24-hour period. This prevents the agents from attempting a full plugin set update at the same time. After you install an agent and the agent gets its first plugin update, it schedules its next update attempt for 24 hours later. As a result, if you deploy 10,000 agents all at once, all of those agents attempt a full plugin set download at the same time each day, resulting in excessive bandwidth utilization. Refer to Plugin Updates for more information on plugin update timeframes.
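For pre-7.4.2 agents, the batching described above can be sketched as a simple scheduler. This is an illustrative sketch, not part of any Tenable tooling; the host list, batch count, and window length are assumptions you would adapt to your environment:

```python
import datetime

def batch_schedule(hosts, batches, window_hours=24):
    """Split a host list into evenly spaced deployment batches.

    Spreading installs across the window keeps the agents'
    24-hour plugin-update timers from all firing together.
    """
    interval = datetime.timedelta(hours=window_hours / batches)
    size = -(-len(hosts) // batches)  # ceiling division
    start = datetime.datetime.now()
    schedule = []
    for i in range(batches):
        chunk = hosts[i * size:(i + 1) * size]
        if chunk:
            schedule.append((start + i * interval, chunk))
    return schedule
```

Each entry pairs a start time with the hosts to install in that batch, so consecutive batches begin a fixed interval apart rather than all at once.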
For Nessus Agents 7.4.2 and later, an agent links to Nessus Manager or Tenable.io after a random delay ranging from zero to five minutes. This delay occurs when the agent initially links, and also when the agent restarts either manually or through a system reboot. Enforcing a delay reduces network traffic when deploying or restarting large numbers of agents, and reduces the load on Nessus Manager or Tenable.io.
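The randomized link delay is standard startup jitter. A minimal sketch of the idea, with the range mirroring the zero-to-five-minute window described above (the function name is illustrative, not an agent API):

```python
import random

def link_delay(max_minutes=5):
    """Return a random startup delay, in seconds, before linking.

    Randomizing each agent's first connection spreads the load on
    Nessus Manager or Tenable.io when many agents start at once.
    """
    return random.uniform(0, max_minutes * 60)
```

Because each agent draws its own delay independently, connection attempts from a large fleet arrive spread across the window instead of in a single burst.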
With Nessus Manager clustering, you can deploy and manage large numbers of agents from a single Nessus Manager instance. For Tenable.sc users with more than 10,000 agents (up to 200,000 agents), you can manage your agent scans from a single Nessus Manager, rather than needing to link multiple instances of Nessus Manager to Tenable.sc.
A Nessus Manager instance with clustering enabled acts as a parent node to child nodes, each of which manages a smaller number of agents. Once a Nessus Manager instance becomes a parent node, it no longer manages agents directly. Instead, it acts as a single point of access where you can manage scan policies and schedules for all the agents across the child nodes. With clustering, you can scale your deployment more easily than if you had to manage several different Nessus Manager instances separately.
Example scenario: Deploying 100,000 agents
You are a Tenable.sc user who wants to deploy 100,000 agents, managed by Nessus Manager.
Without clustering, you deploy 10 Nessus Manager instances, each supporting 10,000 agents. You must manually manage each Nessus Manager instance separately, such as setting agent scan policies and schedules, and updating your software versions. You must separately link each Nessus Manager instance to Tenable.sc.
With clustering, you use one Nessus Manager instance to manage 100,000 agents. You enable clustering on Nessus Manager, which turns it into a parent node, a management point for child nodes. You link 10 child nodes, each of which manages around 10,000 agents. You can either link new agents or migrate existing agents to the cluster. The child nodes receive agent scan policy, schedule, and plugin and software updates from the parent node. You link only the Nessus Manager parent node to Tenable.sc.
For more information, see Clustering in the Nessus User Guide.
Tenable recommends that you size agent groups appropriately, particularly if you are managing scans in Nessus Manager or Tenable.io and then importing the scan data into Tenable.sc. You can size agent groups when you manage agents in Nessus Manager or Tenable.io.
The more agents that you scan and include in a single agent group, the more data that the manager must process in a single batch. The size of the agent group determines the size of the .nessus file that must be imported into Tenable.sc. The .nessus file size affects hard drive space and bandwidth.
| Product | Agents Assigned per Group |
| --- | --- |
| Tenable.io | Unlimited agents per group if not sending to Tenable.sc<br>1,000 agents per group if sending to Tenable.sc |
| Nessus Manager | Unlimited agents per group if not sending to Tenable.sc<br>20,000 agents per group if sending to Tenable.sc |
| Nessus Manager Clusters | Unlimited, since scans are automatically broken up as appropriate by separate child nodes. |
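When sending results to Tenable.sc, the per-group caps above translate directly into a minimum number of groups. A quick sketch of that arithmetic; the agent counts are examples, while the caps come from the limits listed above:

```python
import math

def groups_needed(total_agents, cap_per_group):
    """Minimum number of agent groups so no group exceeds the cap."""
    return math.ceil(total_agents / cap_per_group)
```

For example, 100,000 agents sending to Tenable.sc need at least 100 groups under a 1,000-agent cap, but only 5 groups under a 20,000-agent cap.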
Before you deploy agents to your environment, create groups based on your scanning strategy.
The following are example group types:
Operating System
Asset Type or Location
You can also add agents to more than one group if you have multiple scanning strategies.
Scan Profile Strategy
Once you deploy agents to all necessary assets, you can create scan profiles and tie them to existing agent groups. The sections below describe a few scan strategies.
Operating System Scan Strategy
The following strategy is useful if your scanning strategy is based on the operating system of an asset.
Basic Agent Scan - Linux
In this example, a scan is created based on the Basic Agent Scan template, and is assigned the groups Amazon Linux, CentOS, and Red Hat. This scan only scans those assets.
Asset Type or Location Scan Strategy
The following strategy is useful if your scanning strategy is based on the asset type or location of an asset.
Basic Agent Scan - Production Servers
In this example, a scan is created based on the Basic Agent Scan template, and is assigned the group Production Servers. This scan only scans production server assets.
Basic Agent Scan - Workstations
In this example, a scan is created based on the Basic Agent Scan template, and is assigned the group Workstations. This scan only scans workstation assets.
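Both strategies above amount to routing each asset into a group based on one of its attributes. A minimal sketch of that grouping step, where the asset records, attribute keys, and group names are all hypothetical:

```python
def assign_groups(assets, key):
    """Bucket assets into agent groups by an attribute (e.g. 'os')."""
    groups = {}
    for asset in assets:
        groups.setdefault(asset[key], []).append(asset["name"])
    return groups
```

Switching the key from `"os"` to `"asset_type"` swaps between the two strategies without changing the grouping logic, which is also how an asset can legitimately appear in more than one group.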
While scans with Nessus Agents are in many ways more efficient than traditional network scans, scan staggering is worth considering on certain types of systems.
For example, if you install Nessus Agents on virtual machines, you may want to distribute agents among several groups and have their associated scan windows start at slightly different times.
Staggering scans limits the one-time load on the virtual host server, because agents run their assessments as soon as possible at the start of the scan window. Oversubscribed or resource-limited virtual environments may experience performance issues if agent assessments start on all systems at the same time.
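Staggering can be as simple as offsetting each group's scan-window start by a fixed interval. A sketch under assumed group names and an arbitrary 15-minute offset:

```python
import datetime

def staggered_starts(groups, first_start, offset_minutes=15):
    """Assign each agent group a scan-window start time, each offset
    from the previous one, so agents sharing a virtual host do not
    all begin their assessments at the same moment."""
    step = datetime.timedelta(minutes=offset_minutes)
    return {g: first_start + i * step for i, g in enumerate(groups)}
```

Because agents assess as soon as their window opens, shifting each group's window start spreads the load on the underlying virtual host across the evening rather than concentrating it at a single instant.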