Upgrade the LCE Server
The following table lists the upgrade paths for the LCE server, with links to release notes, along with the compatible versions of Tenable.sc. If the version of LCE you are currently running does not appear in the From column for the version you want to upgrade to, you must first upgrade to an intermediate version. For example, if you are currently running 4.4.x, you must first upgrade to 4.8 before upgrading to 5.0.
| Upgrade to | From | Compatible versions of Tenable.sc |
|---|---|---|
| 5.1.1 | 4.8.x, 5.0.x, 5.1.x | Tenable.sc version 5.1 or later. |
| 5.0.x | 4.8.x | Tenable.sc version 18.104.22.168 or later. |
| 4.8.1 | 4.8, 4.6.x | Tenable.sc version 22.214.171.124 or later. |
| 4.8 | 4.6.x, 4.4.x | Tenable.sc version 126.96.36.199 or later. |
| 4.6.1 | 4.6, 4.4.x | Tenable.sc version 188.8.131.52 or later. |
| 4.6 | 4.4.x | Tenable.sc version 184.108.40.206 or later. |
| 4.4.1 | 4.4, 4.2.2 | Tenable.sc version 220.127.116.11 or later. |
| 4.4 | 4.2.2 | Tenable.sc version 18.104.22.168 or later. |
LCE will work with older versions of Tenable.sc than those listed, but some new features will not be supported.
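The intermediate-upgrade rule above can be sketched as a small shell helper. The mapping simply restates the table's From column; the function name, and the choice of the newest listed target where several are valid (e.g. for 4.6.x), are illustrative:

```shell
# Illustrative helper: encodes the upgrade paths from the table above.
# Where several targets are valid, the newest listed target is chosen.
next_upgrade() {
  case "$1" in
    4.2.2)    echo "4.4.1"   ;;  # 4.2.2 must go through 4.4.x first
    4.4*)     echo "4.8"     ;;  # 4.4.x must go through 4.8 before 5.x
    4.6*)     echo "4.8.1"   ;;
    4.8*|5.*) echo "5.1.1"   ;;  # these can upgrade directly to 5.1.1
    *)        echo "unknown" ;;
  esac
}
next_upgrade 4.4.2   # prints: 4.8
```

In practice you would check the installed version first (for example with `rpm -q lce`) and apply one hop at a time until you reach the target.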
Before You Begin
Caution: When upgrading to LCE 5.0, review the updated system requirements. LCE 5.0 requires approximately twice the minimum disk space of earlier versions, and roughly 33% more CPU and RAM. Tenable does not recommend upgrading a system that is already operating at maximum capacity on an older version of LCE.
- If upgrading to 5.0, install JRE.
- If upgrading to 5.0, install Elasticsearch.
- Download the LCE server package from the Tenable Downloads page.
The following procedure must be performed as the root user.
To upgrade, enter the following command, where <package name> is the name of the LCE server package you downloaded from the Tenable Downloads page:

rpm -Uvh <package name>

You do not need to stop the LCE server before upgrading.
Preparing... ########################################### [100%]
1:lce warning: /opt/lce/.ssh/authorized_keys created as /opt/lce/.ssh/authorized_keys.rpmnew
Moving deprecated file lce.conf to /opt/lce/tmp; OK to delete it once upgrade succeeds.
Moving deprecated file feed.cfg to /opt/lce/tmp; OK to delete it once upgrade succeeds.
Moving deprecated file rules.conf to /opt/lce/tmp; OK to delete it once upgrade succeeds.
Moving deprecated file excluded_domains.txt to /opt/lce/tmp; OK to delete it once upgrade succeeds.
Moving deprecated file trusted_plugins.txt to /opt/lce/tmp; OK to delete it once upgrade succeeds.
Moving deprecated file hostlist.txt to /opt/lce/tmp; OK to delete it once upgrade succeeds.
Moving deprecated file untracked_usernames.txt to /opt/lce/tmp; OK to delete it once upgrade succeeds.
Moving deprecated file disabled-tasls.txt to /opt/lce/tmp; OK to delete it once upgrade succeeds.
Moving deprecated file disabled-prms.txt to /opt/lce/tmp; OK to delete it once upgrade succeeds.
Moving deprecated file sampleable_tasls.txt to /opt/lce/tmp; OK to delete it once upgrade succeeds.
Moving deprecated file syslog_sensors.txt to /opt/lce/tmp; OK to delete it once upgrade succeeds.
The installation process is complete.
Please refer to /var/log/lce_upgrade.log to review installation messages.
To configure LCE, please direct your browser to:
After the upgrade, changes to the LCE configuration are made using the LCE interface. To access the LCE interface, navigate to the IP address or hostname of the LCE server over port 8836 (https://<ip address or hostname>:8836). The previous configuration files are stored in /opt/lce/tmp and may be deleted once the upgrade is determined to be successful.
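As a small post-upgrade convenience, the interface URL can be assembled in the shell. Port 8836 comes from the text above; the helper name and the example hostname are hypothetical:

```shell
# Hypothetical helper: builds the LCE interface URL for a given host.
lce_url() { printf 'https://%s:8836\n' "$1"; }
lce_url lce.example.com    # prints: https://lce.example.com:8836
# On a real server you might then probe it, e.g.:
# curl -sk --max-time 5 "$(lce_url lce.example.com)" >/dev/null && echo up
```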
Additional Steps for 5.0
After upgrading the server to 5.0, you must also migrate data from your silos to Elasticsearch databases using a tool included with the LCE 5.0 package. After validating that there are no issues with the databases, you can then use the same tool to remove the old silos.
The migration utility is /opt/lce/tools/migrateDB-overseer. It can run multiple migration tasks in parallel, so the overall migration completes faster.
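The parallel-worker idea can be illustrated generically in the shell: `xargs -P` runs a fixed number of jobs at once, which is the same pattern the overseer applies to silo migrations. The commands below are a stand-in for illustration, not the LCE tool itself:

```shell
# Generic parallel-worker pattern, as a stand-in for the overseer itself:
# process four placeholder "silos" with at most two workers at a time.
printf '%s\n' silo1 silo2 silo3 silo4 |
  xargs -P 2 -I{} sh -c 'echo "migrating {}"'
```

With `-P 2`, at most two of the four jobs run concurrently; raising the worker count speeds up the batch at the cost of more simultaneous load, which mirrors the trade-off noted for nParallelWorkers below.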
The supported operations are shown in the table below:

| Option | Description |
|---|---|
|  | Estimates how much disk space your 5.x silos will need once migrated into the 6.x datastore. Note that this estimate does not account for events created "live" by LCE in the course of its normal operation while migration is running. If needed it will remind you to give the |
|  | Shows conservative estimates for how long the migration will take for each plausible nParallelWorkers value. Also shows which nParallelWorkers value will be chosen by default. |
|  | If you do not specify |
|  | Use this option at any time, from another shell console, to see how migration is progressing. |

Note: While a higher nParallelWorkers value means a faster migration, it also means fewer resources will remain for normal LCE operation.
Tip: It is also possible to explicitly invoke migration of one silo at a time with the
--migrate-one <Elasticsearch_siloId> <tEarliest> <tLatest> command. This approach, however, provides no automatic undo in the event of failure, does not guard against event loss, and does not bookmark progress for correct resumption after premature termination. It is strongly recommended that you use the
/opt/lce/tools/migrateDB-overseer --migrate-all command instead. With the
--migrate-all option, the silos with the most recent events are migrated first, followed by older silos. If your SSH console session times out after you start migrateDB-overseer from it, the migration stops and you must start it again later. To avoid this, start migrateDB-overseer in console-detached mode:
nohup /opt/lce/tools/migrateDB-overseer &
nohup /opt/lce/tools/migrateDB-overseer --clear-source-on-success &
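Because nohup redirects output away from the (possibly closed) terminal, it is worth capturing that output in a predictable log file. A minimal sketch of the detached pattern, using a placeholder command and a log path of our choosing so it can run anywhere; on a real LCE server you would substitute /opt/lce/tools/migrateDB-overseer --migrate-all and omit the wait:

```shell
# Detached-run sketch with a placeholder job standing in for the overseer.
log=/tmp/lce_migrate_demo.log
nohup sh -c 'echo migration running' >"$log" 2>&1 &
wait $!            # demo only; a real migration keeps running after logout
cat "$log"         # prints: migration running
```

On a real server, `tail -f` on the log file (from any console) shows progress without attaching to the detached process.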
To migrate silos, enter the following command: /opt/lce/tools/migrateDB-

| Argument | Description |
|---|---|
|  | Data from all existing silos will be migrated into Elasticsearch databases. |
|  | Migrates data from a silo to an Elasticsearch database, where <silo_number> is the silo number that you want to migrate. |
|  | Migrates an archived silo and log store. |
|  | Lists silos containing NDB and LDB data. |
|  | Lists Elasticsearch databases. |
To remove old silos, enter the following command:

ES

The following table describes the arguments that can be used with the tool.

| Argument | Description |
|---|---|
|  | All existing NDB/LDB silos will be removed. |
|  | Removes a specific NDB/LDB silo, where <silo_number> is the number of the silo that you want to remove. |