
Chapter 9

Quick Reference

In this chapter
  • File Paths
  • Ports
  • Essential Commands (Service Management, Elasticsearch Cluster, Logstash, Filebeat, SELinux, SSL Certificates)
  • Ansible Variables Quick Reference
  • Vault Variables
  • What's Next
  • Further Reading

File Paths

| File | Purpose |
|------|---------|
| /etc/elasticsearch/elasticsearch.yml | Elasticsearch cluster configuration |
| /etc/elasticsearch/jvm.options | ES JVM heap settings (managed by ES package) |
| /opt/lib/elasticsearch/ | Elasticsearch data directory |
| /var/log/elasticsearch/ | Elasticsearch logs |
| /etc/kibana/kibana.yml | Kibana configuration |
| /opt/kibana/lib/ | Kibana data directory |
| /etc/httpd/conf.d/ssl.conf | Apache SSL configuration (stock file from mod_ssl) |
| /etc/httpd/conf.d/webkibana.conf | Kibana reverse proxy config (deployed by playbook; manual path uses kibana.conf) |
| /etc/logstash/logstash.yml | Logstash node configuration |
| /etc/logstash/jvm.options | Logstash JVM heap settings |
| /etc/logstash/conf.d/logstash.conf | Logstash pipeline configuration |
| /opt/lib/logstash/ | Logstash data directory |
| /var/log/logstash/ | Logstash logs |
| /etc/elasticsearch/certs/ | Transport TLS certificates (CA + node certs) |
| /etc/systemd/system/logstash.service.d/timeout.conf | Logstash TimeoutStopSec override |
| /etc/filebeat/filebeat.yml | Filebeat configuration |
| /etc/filebeat/modules.d/system.yml | Filebeat system module fileset config |
| /var/log/filebeat/ | Filebeat logs |
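To confirm a host matches this layout, a quick existence check over the paths above can help (a sketch; trim the list to the roles actually installed on the host):

```shell
# report which of the expected ELK paths exist on this host
for p in \
  /etc/elasticsearch/elasticsearch.yml \
  /opt/lib/elasticsearch \
  /etc/kibana/kibana.yml \
  /etc/logstash/conf.d/logstash.conf \
  /etc/filebeat/filebeat.yml \
  /var/log/logstash; do
  if [ -e "$p" ]; then
    echo "present: $p"
  else
    echo "missing: $p"
  fi
done
```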

Ports

| Port | Protocol | Service | Accessed By |
|------|----------|---------|-------------|
| 9200 | TCP | Elasticsearch REST API | ES nodes, Kibana, Logstash |
| 9300 | TCP | Elasticsearch transport | ES nodes only (cluster) |
| 443 | TCP | Apache HTTPS (Kibana) | Clients (browsers) |
| 5601 | TCP | Kibana HTTP | Loopback only (Apache proxy) |
| 5044 | TCP | Logstash Beats input | Filebeat/Metricbeat agents |
| 5514 | TCP | Logstash syslog input | rsyslog, network devices |
| 9600 | TCP | Logstash monitoring API | All interfaces (health checks, monitoring) |
| 5066 | TCP | Filebeat stats API | Loopback only (local health checks) |
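A quick way to see which of these listeners are actually up on a node is to probe them locally (a sketch using bash's /dev/tcp pseudo-device; run it on the host itself, since several ports are loopback-only):

```shell
# probe each ELK port on the local host; "open" means something is listening
for port in 9200 9300 443 5601 5044 5514 9600 5066; do
  if timeout 1 bash -c "exec 3<>/dev/tcp/127.0.0.1/${port}" 2>/dev/null; then
    echo "port ${port}: open"
  else
    echo "port ${port}: closed"
  fi
done
```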

Essential Commands

Service Management

# Check all ELK services
systemctl status elasticsearch kibana httpd logstash filebeat

# Restart a service
systemctl restart elasticsearch

# View recent logs
journalctl -u elasticsearch -n 50 --no-pager
journalctl -u kibana -n 50 --no-pager
journalctl -u logstash -n 50 --no-pager

Elasticsearch Cluster

# All commands require -u elastic:YOUR_PASSWORD when security is enabled

# Cluster health (green/yellow/red)
curl -u elastic:YOUR_PASSWORD -s http://localhost:9200/_cluster/health?pretty

# List nodes
curl -u elastic:YOUR_PASSWORD -s http://localhost:9200/_cat/nodes?v

# List indices (sorted by size)
curl -u elastic:YOUR_PASSWORD -s "http://localhost:9200/_cat/indices?v&s=store.size:desc"

# Cluster disk allocation
curl -u elastic:YOUR_PASSWORD -s http://localhost:9200/_cat/allocation?v

# Check ILM policy status for an index
curl -u elastic:YOUR_PASSWORD -s "http://localhost:9200/<index-name>/_ilm/explain?pretty"

# List all ILM policies
curl -u elastic:YOUR_PASSWORD -s http://localhost:9200/_ilm/policy?pretty

# Clear read-only block (after disk full recovery)
curl -u elastic:YOUR_PASSWORD -X PUT "http://localhost:9200/_all/_settings" \
  -H 'Content-Type: application/json' \
  -d '{"index.blocks.read_only_allow_delete": null}'
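Every call above embeds the elastic password on the command line, where it lands in shell history. One way to keep it out is to read it from a protected file (a sketch; the ~/.es_password path is an assumption, not something the playbooks create):

```shell
# store the password once in a file only this user can read
umask 077
printf '%s' 'YOUR_PASSWORD' > "$HOME/.es_password"

# load it into a variable before running API calls
ES_PW="$(cat "$HOME/.es_password")"

# then substitute it into any of the curl commands, e.g.:
# curl -u "elastic:${ES_PW}" -s http://localhost:9200/_cluster/health?pretty
echo "loaded password of length ${#ES_PW}"
```

Alternatively, passing `-u elastic` with no password makes curl prompt for it interactively, which also keeps it out of history.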

Logstash

# Pipeline stats
curl -s http://192.168.1.62:9600/_node/stats/pipelines?pretty

# Test pipeline config syntax
/usr/share/logstash/bin/logstash --config.test_and_exit \
  -f /etc/logstash/conf.d/logstash.conf

# Check listening ports
ss -tlnp | grep -E '5044|9600|5514'

Filebeat

# Check Filebeat stats
curl -s http://127.0.0.1:5066/stats | python3 -m json.tool

# Test Filebeat configuration
sudo filebeat test config

# Test Filebeat output (connectivity to Logstash)
sudo filebeat test output

# View recent Filebeat logs
journalctl -u filebeat -n 50 --no-pager

SELinux

# Check Apache proxy boolean
getsebool httpd_can_network_connect

# Set it (persistent)
setsebool -P httpd_can_network_connect on

# Check recent denials
ausearch -m AVC -ts recent

SSL Certificates

# Check certificate expiry
openssl x509 -in /etc/pki/tls/certs/$(hostname).crt -noout -dates

# Verify cert matches key
openssl x509 -in /etc/pki/tls/certs/$(hostname).crt -noout -modulus | md5sum
openssl rsa -in /etc/pki/tls/certs/$(hostname).key -noout -modulus | md5sum
# Both md5sums should match
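To try the expiry and modulus checks without touching the real certificates, you can run them against a throwaway self-signed pair (the /tmp paths exist only for this demo):

```shell
# generate a disposable self-signed cert/key pair for the demo
openssl req -x509 -newkey rsa:2048 -nodes -days 30 -subj "/CN=demo" \
  -keyout /tmp/demo.key -out /tmp/demo.crt 2>/dev/null

# same checks as above, pointed at the demo files
openssl x509 -in /tmp/demo.crt -noout -dates
openssl x509 -in /tmp/demo.crt -noout -modulus | md5sum
openssl rsa  -in /tmp/demo.key -noout -modulus | md5sum
# the two md5sums match because the cert was issued from that key
```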

Ansible Variables Quick Reference

| Variable | Default | Description |
|----------|---------|-------------|
| elk_cluster_name | "homelab" | Elasticsearch cluster name |
| elk_environment | "prod" | Environment suffix for cluster and indices |
| elk_elasticsearch_hosts | ["localhost"] | List of ES node IPs/hostnames |
| elk_data_path | "/opt/lib/elasticsearch" | ES data directory |
| elk_security_enabled | true | xpack.security toggle (transport TLS + built-in user auth) |
| elk_monitoring_hosts | [] | External IPs needing ES API access |
| elk_kibana_server_name | "kibana" | Kibana server.name |
| elk_kibana_data_path | "/opt/kibana/lib" | Kibana data directory |
| elk_kibana_ldap_enabled | false | LDAP authentication toggle |
| elk_ssl_cert_src | "certs/server.crt" | SSL certificate source path |
| elk_ssl_key_src | "certs/server.key" | SSL private key source path |
| elk_logstash_data_path | "/opt/lib/logstash" | Logstash data directory |
| elk_logstash_jvm_heap_pct | 0.625 | JVM heap fraction (62.5%) |
| elk_logstash_index_pattern | "app-logs-%{+YYYY.MM.dd}" | ES output index pattern |
| elk_logstash_syslog_port | 5514 | Syslog input port |
| elk_ilm_general_retention | "120d" | General index retention |
| elk_ilm_metrics_retention | "26d" | Metrics index retention |
| elk_ilm_monitoring_retention | "3d" | Monitoring index retention |
| elk_logstash_hosts | (required) | Logstash host IPs for Filebeat output |
| elk_filebeat_logstash_port | 5044 | Logstash Beats input port for Filebeat |

Important: The elk_elasticsearch_hosts default of ["localhost"] works for single-node setups but must be overridden for a multi-node cluster. Set it to the list of all ES node IPs in group_vars/all.yml (e.g., ["192.168.1.61", "192.168.1.62", "192.168.1.63"]). Without this, Kibana and Logstash will only communicate with the local ES node instead of the full cluster.
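For example, a minimal group_vars/all.yml override for the three-node cluster might look like this (a sketch; the IPs are the chapter's example addresses, and the Logstash host value is an assumption):

```yaml
# group_vars/all.yml (excerpt)
elk_cluster_name: "homelab"
elk_environment: "prod"
elk_elasticsearch_hosts:
  - "192.168.1.61"
  - "192.168.1.62"
  - "192.168.1.63"
elk_logstash_hosts:
  - "192.168.1.62"
```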

Vault Variables

| Variable | Purpose | Used By |
|----------|---------|---------|
| vault_elk_elastic_password | Elasticsearch superuser password | svc_elasticsearch, pro_elasticsearch, pro_logstash8 |
| vault_elk_kibana_system_password | Kibana service account password | svc_elasticsearch (sets via API), svc_kibana8 |
| vault_elk_logstash_system_password | Logstash service account password | svc_elasticsearch (sets via API) |
| vault_elk_ldap_bind_dn | LDAP bind DN | svc_kibana8 (if LDAP enabled) |
| vault_elk_ldap_bind_password | LDAP bind password | svc_kibana8 (if LDAP enabled) |

What’s Next

You have a working, secured ELK stack on Rocky Linux — Elasticsearch indexing logs across three nodes with transport TLS and built-in authentication, Kibana behind SSL, Logstash ingesting from Beats and syslog, Filebeat shipping system logs, and ILM keeping your disks clean on autopilot. That’s a solid foundation, and it’s designed to be extended.

Here’s where to go from here:

  • Add HTTP-layer TLS to Elasticsearch — transport TLS is deployed, but REST API calls still use plain HTTP. Adding HTTPS on port 9200 encrypts API traffic (requires additional certificates)
  • Create custom Elasticsearch roles — the deployment uses the elastic superuser for Logstash output. Create dedicated roles with minimal permissions for each service
  • Build Kibana dashboards for your most-queried log sources
  • Add Metricbeat for system metrics (CPU, memory, disk) — the metrics ILM templates are already waiting
  • Set up alerting with Kibana’s built-in rules (e.g., disk watermark warnings, cluster health changes)
  • Deploy Filebeat to more hosts — the same svc_filebeat role works on any Rocky Linux host. Add hosts to the filebeat inventory group and re-run

If you followed this guide manually and want to automate future deployments or rebuilds, the companion Ansible playbook bundle covers every step — available at RavenForge Press or directly on Payhip.

Further Reading

Want the automation code? Get the production-ready Ansible playbooks that deploy this entire ELK stack in ~20 minutes.
