This solution will work with OpenShift 3 and OpenShift 4.
Every now and again you might receive an alert from OpenShift that the Elasticsearch instances are running out of storage, for example:

Disk Low Watermark Reached at elasticsearch-cdm-sdv1dv6c-2 node in elasticsearch cluster. Shards can not be allocated to this node anymore. You should consider adding more disk to the node.
If this happens you may wish to delete some old data. This article shows you how to do it from the command line.
Steps
Connect to the OpenShift cluster and switch to the openshift-logging project:
$ oc project openshift-logging
Get the pods
$ oc get po
NAME                                            READY   STATUS      RESTARTS   AGE
cluster-logging-operator-86986795cf-nt5ft       1/1     Running     0          2d1h
curator-1625110200-sdnwk                        0/1     Completed   0          5h33m
elasticsearch-cdm-sdv1dv6c-1-656b575889-fkkjp   2/2     Running     0          2d1h
elasticsearch-cdm-sdv1dv6c-2-688b464fdc-x4sjd   2/2     Running     0          2d1h
elasticsearch-cdm-sdv1dv6c-3-587484d497-q96m8   2/2     Running     0          2d1h
fluentd-225tz                                   1/1     Running     0          2d
fluentd-4tn2h                                   1/1     Running     0          2d
fluentd-7b9qk                                   1/1     Running     0          2d
fluentd-7jkw2                                   1/1     Running     0          2d
fluentd-bhn4l                                   1/1     Running     0          2d
fluentd-bp8zb                                   1/1     Running     0          2d
fluentd-k6z52                                   1/1     Running     0          2d
fluentd-mxnsk                                   1/1     Running     0          2d
fluentd-n8crm                                   1/1     Running     0          2d
fluentd-qdfxk                                   1/1     Running     0          2d
fluentd-rlhhf                                   1/1     Running     0          2d
fluentd-w6j7z                                   1/1     Running     0          2d
kibana-74d9668bcd-wxggf                         2/2     Running     0          2d1h
Shell into any of the elasticsearch* pods
$ oc rsh elasticsearch-cdm-sdv1dv6c-1-656b575889-fkkjp
Defaulting container name to elasticsearch.
Use 'oc describe pod/elasticsearch-cdm-sdv1dv6c-1-656b575889-fkkjp -n openshift-logging' to see all of the containers in this pod.
sh-4.2$ df -h
Filesystem                            Size  Used  Avail  Use%  Mounted on
overlay                               120G   96G   25G    80%  /
tmpfs                                  64M     0   64M     0%  /dev
tmpfs                                 7.9G     0  7.9G     0%  /sys/fs/cgroup
shm                                    64M     0   64M     0%  /dev/shm
tmpfs                                 7.9G  5.4M  7.9G     1%  /etc/passwd
/dev/sdb                              196G  164G   33G    84%  /elasticsearch/persistent
/dev/mapper/coreos-luks-root-nocrypt  120G   96G   25G    80%  /etc/hosts
tmpfs                                 7.9G   28K  7.9G     1%  /etc/openshift/elasticsearch/secret
tmpfs                                 7.9G   28K  7.9G     1%  /run/secrets/kubernetes.io/serviceaccount
tmpfs                                 7.9G     0  7.9G     0%  /proc/acpi
tmpfs                                 7.9G     0  7.9G     0%  /proc/scsi
tmpfs                                 7.9G     0  7.9G     0%  /sys/firmware
Here we can see that /dev/sdb, mounted at /elasticsearch/persistent, is 84% full with only 33G free.
Next, list the indices. To do this we have to use the key, cert, and CA cert that were created by OpenShift:
sh-4.2$ curl -s -k --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert --cacert /etc/elasticsearch/secret/admin-ca https://localhost:9200/_cat/indices
This will list all the indices in your cluster along with their disk usage.
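If you want the per-node view that the low-watermark alert is based on, Elasticsearch also exposes a _cat/allocation endpoint. A minimal sketch, reusing the same client-certificate flags as the _cat/indices call (the `v` query parameter just adds column headers):

```shell
# Same client-certificate flags used for the other curl calls in this article.
CURL_OPTS="-s -k --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert --cacert /etc/elasticsearch/secret/admin-ca"

# _cat/allocation reports shard count, disk used, disk available,
# and disk percent for each node in the cluster.
ALLOC_URL="https://localhost:9200/_cat/allocation?v"
curl $CURL_OPTS "$ALLOC_URL"
```

This makes it easy to confirm which Elasticsearch node is the one approaching its watermark.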
Below I filter the output to show only the .operations indices:
sh-4.2$ curl -s -k --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert --cacert /etc/elasticsearch/secret/admin-ca https://localhost:9200/_cat/indices | grep operations
green open .operations.2021.06.30 xB9dBK3JQrqrVeHeOvM1-g 3 1 37533921 0 74gb    37gb
green open .operations.2021.07.01 F4fKyS0pRzeP4BNDqIZ9gA 3 1 14296675 0 29gb    14.5gb
green open .operations.2021.06.18 t2z4GdVDSwq3odY3DBnp8g 3 1     1883 0 2.8mb   1.4mb
green open .operations.2021.06.28 08Ss8Kx9S9-QlpBF8B0qBw 3 1 26558910 0 47.7gb  23.8gb
green open .operations.2021.06.27 6wWBRD1_SAi99XpS2dLliQ 3 1  8107800 0 14.2gb  7.1gb
green open .operations.2021.06.17 AfvzFdZiREmPqWBLWChSvA 3 1   577192 0 767.9mb 383.9mb
green open .operations.2021.06.16 01eNOMUvSEeuHNzQJ_1elA 3 1   282418 0 387.1mb 193.5mb
green open .operations.2021.06.13 HMo0NMsOQniUu_B3WaGIaw 3 1   242137 0 320.9mb 160.4mb
green open .operations.2021.06.21 YbkHsAChTmy5hKlPpwNskw 3 1   481899 0 910.8mb 455.4mb
green open .operations.2021.06.29 C9V8Ta8kTfqnIgHjkO-law 3 1 36386190 0 69.7gb  34.8gb
green open .operations.2021.06.20 46g2nwAPR7KDv2HgD9oQYw 3 1     5392 0 7.5mb   3.7mb
green open .operations.2021.06.15 _Xa7J_QhRFe69-e5K77t1w 3 1   286327 0 389.3mb 194.6mb
green open .operations.2021.06.12 OGZGV1n9Rw6XnlCrWAQEtQ 3 1   122369 0 170.6mb 85.3mb
green open .operations.2021.06.11 3GmiK4qtTlq5pa_hS3QzhA 3 1    93745 0 124.1mb 62mb
green open .operations.2021.06.19 5s7qFnefT12T009E0v2EVQ 3 1     5408 0 7.4mb   3.7mb
green open .operations.2021.06.14 yTS7vFXASX27DbAu4Gq0QQ 3 1   295959 0 401.6mb 200.8mb
green open .operations.2021.06.10 NJbZ10y8RV6UqxlhXqqscQ 3 1    75555 0 95.9mb  47.9mb
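When deciding what to delete, it helps to see the largest indices first. The _cat/indices endpoint accepts standard `h` (column list) and `s` (sort) query parameters, so a sketch like the following, using the same certificate flags as above, lists the .operations indices by on-disk size in descending order instead of relying on grep:

```shell
# Same client-certificate flags as the _cat/indices call above.
CURL_OPTS="-s -k --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert --cacert /etc/elasticsearch/secret/admin-ca"

# h= picks the columns to print, s= sorts the rows, v adds headers;
# the .operations.* index pattern replaces the grep filter.
SORTED_URL="https://localhost:9200/_cat/indices/.operations.*?v&h=index,docs.count,store.size&s=store.size:desc"
curl $CURL_OPTS "$SORTED_URL"
```

The biggest space savings will come from deleting the indices at the top of this output, oldest first.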
To delete an index, run the following command:
sh-4.2$ curl -s -k --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert --cacert /etc/elasticsearch/secret/admin-ca -XDELETE https://localhost:9200/.operations.2021.06.10
{"acknowledged":true}
The above command deletes the index .operations.2021.06.10.
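Deleting indices one at a time gets tedious, so here is a sketch of a bulk clean-up that removes every .operations index older than a chosen retention window. RETENTION_DAYS is an illustrative value, and GNU `date -d` is assumed to be available in the container. Because the index names follow the .operations.YYYY.MM.DD pattern shown above, date order and lexicographic order agree, so a plain string comparison against the cutoff date is enough:

```shell
# Client-certificate flags used for every curl call in this article.
CURL_OPTS="-s -k --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert --cacert /etc/elasticsearch/secret/admin-ca"

# Keep this many days of .operations indices (illustrative value).
RETENTION_DAYS=14

# Cutoff in the same YYYY.MM.DD format the index names use
# (GNU date -d is an assumption about the container image).
CUTOFF=$(date -d "-${RETENTION_DAYS} days" +%Y.%m.%d)

# Column 3 of _cat/indices output is the index name.
for idx in $(curl $CURL_OPTS "https://localhost:9200/_cat/indices/.operations.*" | awk '{print $3}'); do
  day="${idx#.operations.}"
  # YYYY.MM.DD sorts lexicographically, so < compares dates correctly.
  if [[ "$day" < "$CUTOFF" ]]; then
    echo "deleting $idx"
    curl $CURL_OPTS -XDELETE "https://localhost:9200/$idx"
  fi
done
```

Run it once with the delete line commented out to review what would be removed before committing to it.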