Log Insight: Migrate data from one node to another
There are occasions when you may want to migrate data between nodes inside a vRealize Log Insight Cluster, for example when one of the nodes in the cluster is broken for any reason (bugs, application problems, accidental deletion of vital configuration files, file corruption, etc.).
Disclaimer: the following troubleshooting steps may cause data loss. Proceed at your own risk and only if you know what you are doing.
Restore log events from the old Cluster to the new Cluster.
- Stop the loginsight service on the old Cluster on every node:
  service loginsight stop
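  If you prefer to run this from a single shell instead of logging into each node separately, a small loop over SSH can do it. This is only a sketch; the hostnames below are hypothetical placeholders for your old nodes:
  # Sketch: stop the service on every old node over SSH (hostnames are hypothetical)
  for node in old-li-1.example.com old-li-2.example.com old-li-3.example.com; do
    ssh root@"$node" 'service loginsight stop'
  done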
- Move the logs from the old nodes to the new corresponding nodes (old 1 to new 1, old 2 to new 2, etc.):
  scp -r /storage/core/loginsight/cidata/store <newnode>:/storage/core/loginsight/cidata
  scp -r /storage/core/loginsight/cidata/store/*-*-*-*-* <newnode>:/storage/core/loginsight/cidata
  Replace <newnode> with the IP address or FQDN of the corresponding node in the new Cluster.
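  For example, if the first new node's FQDN were new-li-1.example.com (a hypothetical name), the commands run on the first old node would be:
  scp -r /storage/core/loginsight/cidata/store new-li-1.example.com:/storage/core/loginsight/cidata
  scp -r /storage/core/loginsight/cidata/store/*-*-*-*-* new-li-1.example.com:/storage/core/loginsight/cidata
  Repeat the same on the second old node towards the second new node, and so on for every pair.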
- Log into the new Cluster nodes as root via SSH or Console.
- Stop the loginsight service on the new Cluster on every node:
  service loginsight stop
- On each new Cluster node, import the logs node by node:
  for bucket in $(ls /storage/core/loginsight/cidata/store | grep -v 'generation\|buckets\|strata_write.lock'); do echo y | /usr/lib/loginsight/application/sbin/bucket-index add $bucket --statuses archived; done
  The Log Insight service must be stopped before running bucket-index. The --statuses archived option implies that the bucket is sealed and no further data will be added to it. To check that all buckets have been added correctly:
  /usr/lib/loginsight/application/sbin/bucket-index show
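  As an optional sanity check, you can count the bucket directories before running the import and compare that number against what bucket-index show reports afterwards; this reuses the same listing as the loop above:
  ls /storage/core/loginsight/cidata/store | grep -v 'generation\|buckets\|strata_write.lock' | wc -l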
- On the new Cluster nodes, start the loginsight service:
  service loginsight start
You should now be able to access the historical data from the UI.
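If the UI does not show the data, first confirm the service came back up on every node; a minimal check, assuming the loginsight init script supports the status action:
  # Assumption: the init script accepts "status" like most service scripts
  service loginsight status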