
Chapter 3

Elasticsearch

In this chapter
<nav id="TableOfContents" aria-label="Chapter sections"> <ul> <li><a href="#why-elasticsearch">Why Elasticsearch</a></li> <li><a href="#installation">Installation</a> <ul> <li><a href="#repository-setup">Repository Setup</a></li> <li><a href="#the-ansible-approach">The Ansible Approach</a></li> </ul> </li> <li><a href="#data-path-relocation">Data Path Relocation</a> <ul> <li><a href="#the-problem">The Problem</a></li> <li><a href="#the-fix">The Fix</a></li> </ul> </li> <li><a href="#es-9x-auto-configuration-cleanup">ES 9.x Auto-Configuration Cleanup</a></li> <li><a href="#cluster-configuration">Cluster Configuration</a> <ul> <li><a href="#the-configuration-file">The Configuration File</a></li> <li><a href="#clustername"><code>cluster.name</code></a></li> <li><a href="#nodename"><code>node.name</code></a></li> <li><a href="#networkhost-vs-transporthost"><code>network.host</code> vs <code>transport.host</code></a></li> <li><a href="#discoveryseed_hosts-and-clusterinitial_master_nodes"><code>discovery.seed_hosts</code> and <code>cluster.initial_master_nodes</code></a></li> <li><a href="#xpacksecurityenabled-true"><code>xpack.security.enabled: true</code></a></li> </ul> </li> <li><a href="#swappiness">Swappiness</a></li> <li><a href="#virtual-memory-maps">Virtual Memory Maps</a></li> <li><a href="#jvm-heap-sizing">JVM Heap Sizing</a></li> <li><a href="#firewall-rules">Firewall Rules</a></li> <li><a href="#start-the-service">Start the Service</a></li> <li><a href="#built-in-user-password-setup">Built-in User Password Setup</a></li> <li><a href="#verification">Verification</a></li> <li><a href="#what-automation-looks-like">What Automation Looks Like</a></li> <li><a href="#verification-checkpoint">Verification Checkpoint</a></li> </ul> </nav>

What you’ll accomplish: Install Elasticsearch on all three nodes, configure cluster formation, relocate the data path, tune the system for search workloads, and verify you have a healthy 3-node cluster.

Important: Every step in this chapter is performed on all three nodes. The only differences between nodes are node.name and network.host, which use that node’s specific name and IP. Everything else — repo setup, installation, data path relocation, cluster config, sysctl tuning, firewall rules, and service start — is identical across all three.

Why Elasticsearch

Every log line that Logstash ingests ends up in an Elasticsearch index. Every query you run in Kibana hits the Elasticsearch REST API. It’s the storage and search engine at the center of the stack — where your data lives.

We’re using Elastic’s official distribution (not OpenSearch, the AWS fork). The Elastic distribution includes ILM (Index Lifecycle Management) and the full REST API without license restrictions for the features we need. OpenSearch is a fine project, but the Elastic ecosystem — particularly the Beats agents and Kibana — works best with the official distribution.

We pin to the 9.x release series. Major version upgrades (8.x → 9.x) require cluster-wide coordination; minor updates within 9.x are drop-in replacements.

Installation

Repository Setup

Elasticsearch isn’t in Rocky Linux’s default repos. You need to add Elastic’s official YUM repository and import their GPG signing key.

Here’s what needs to happen:

# Import the Elasticsearch GPG key
sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
sudo tee /etc/yum.repos.d/elastic-9.x.repo > /dev/null << 'EOF'
[elastic-9.x]
name=Elastic repository for 9.x packages
baseurl=https://artifacts.elastic.co/packages/9.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=0
autorefresh=1
type=rpm-md
EOF

This is the same repository that Elasticsearch, Kibana, and Logstash all install from — one repo file covers all three. We use enabled=0 so dnf update doesn’t accidentally upgrade any Elastic component during routine system updates. Instead, we explicitly enable the repo during each install:

sudo dnf install elasticsearch -y --enablerepo=elastic-9.x

This is intentional. Elasticsearch version upgrades need coordination across the cluster. You don’t want an unattended dnf update to upgrade one node and leave the others behind. If Kibana or Logstash runs on this same host, you won’t need to create the repo file again — it’s already here.
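
If you want to double-check that the repo landed but stays out of routine updates, dnf can list disabled repositories:

# The Elastic repo should appear in the disabled list, not the enabled one
dnf repolist --disabled | grep elastic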

Immediately after install, stop and disable the service:

sudo systemctl stop elasticsearch
sudo systemctl disable elasticsearch

ES 9.x may auto-start on install with auto-generated configuration that conflicts with what we’re about to set up. Stopping it now gives us a clean slate — we’ll configure everything first, then start it explicitly at the end of this chapter.
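
You can confirm the service is parked before moving on:

systemctl is-active elasticsearch   # should print "inactive"
systemctl is-enabled elasticsearch  # should print "disabled"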

The Ansible Approach

The playbook uses ansible.builtin.rpm_key instead of shell: rpm --import — it’s idempotent (safe to run repeatedly) and doesn’t shell out. The dnf module with state: present is also idempotent: if Elasticsearch is already installed, it does nothing.

Data Path Relocation

The Problem

Elasticsearch stores its data in /var/lib/elasticsearch by default. On a typical home lab VM, /var/lib is on the root partition — the same 20-40 GB disk that holds your OS, packages, and logs. A few weeks of log ingestion fills that partition, and your node crashes with a disk full error. Worse, Elasticsearch marks itself read-only when disk usage exceeds 95%, and recovery requires manual intervention.

The Fix

We relocate the data path to /opt/lib/elasticsearch and create a symlink back so Elasticsearch doesn’t know the difference:

Warning: The next command removes the default data directory. Only do this on a fresh Elasticsearch installation with no existing data. If you’re migrating an existing cluster, back up your data first — this rm -rf will destroy everything in that directory.

# Remove the default data directory (empty on fresh install)
sudo rm -rf /var/lib/elasticsearch

# Create the new data directory with correct ownership
sudo mkdir -p /opt/lib/elasticsearch
sudo chown elasticsearch:elasticsearch /opt/lib/elasticsearch
sudo chmod 700 /opt/lib/elasticsearch

# Symlink so Elasticsearch's default path still works
sudo ln -s /opt/lib/elasticsearch /var/lib/elasticsearch

The playbook only does this on first install (when /opt/lib/elasticsearch doesn’t exist yet). On subsequent runs, the directory already exists and these steps are skipped.

Tip: If your VM has a dedicated data disk mounted at /opt or /data, you get disk isolation for free. Elasticsearch data growth can’t fill your root partition.
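
Before moving on, it's worth confirming the symlink and ownership look right. Exact timestamps and sizes will vary, but you should see something like:

ls -ld /var/lib/elasticsearch /opt/lib/elasticsearch
# lrwxrwxrwx. 1 root root ... /var/lib/elasticsearch -> /opt/lib/elasticsearch
# drwx------. 2 elasticsearch elasticsearch ... /opt/lib/elasticsearch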

ES 9.x Auto-Configuration Cleanup

Do this before editing elasticsearch.yml.

Elasticsearch 9.x auto-configures security during installation. Open elasticsearch.yml and scroll to the bottom — you’ll see a block between BEGIN SECURITY AUTO CONFIGURATION and END SECURITY AUTO CONFIGURATION markers. This block contains the installer’s own xpack.security.enabled: true, TLS cert paths, HTTP SSL settings, and a cluster.initial_master_nodes entry for just the local node.

The problem: YAML doesn’t allow duplicate keys. When we add our own xpack.security.enabled, cluster.initial_master_nodes, and TLS settings later in this chapter, Elasticsearch will see two conflicting values for the same settings and fail to start.

Rather than deleting the block entirely, comment it out. The settings in it — particularly the HTTP SSL configuration and the auto-generated cert paths — may be useful reference later if you decide to enable HTTPS on the REST API or customize your TLS setup beyond what this guide covers.

sudo sed -i '/^#----------------------- BEGIN SECURITY AUTO CONFIGURATION/,/^#----------------------- END SECURITY AUTO CONFIGURATION/ s/^/#/' /etc/elasticsearch/elasticsearch.yml

This comments out every line in the block (including the marker lines) without removing anything from the file. You can always uncomment individual settings later if you need them.
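
To confirm nothing in the block is still active, print any non-comment lines inside it; there should be none. One way to check (the awk range matches the marker lines themselves):

sudo awk '/BEGIN SECURITY AUTO CONFIGURATION/,/END SECURITY AUTO CONFIGURATION/' /etc/elasticsearch/elasticsearch.yml | grep -v '^#' \
  || echo "OK: block fully commented out"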

The installer also adds SSL keystore entries that conflict with the ones we’ll create later. Remove them now — unlike the config file settings, keystore entries can’t be commented out:

sudo /usr/share/elasticsearch/bin/elasticsearch-keystore remove xpack.security.transport.ssl.keystore.secure_password 2>/dev/null || true
sudo /usr/share/elasticsearch/bin/elasticsearch-keystore remove xpack.security.transport.ssl.truststore.secure_password 2>/dev/null || true
sudo /usr/share/elasticsearch/bin/elasticsearch-keystore remove xpack.security.http.ssl.keystore.secure_password 2>/dev/null || true

Repeat both steps on all three nodes before proceeding.

Cluster Configuration

This is where the official docs start to fall short. Configuring a single Elasticsearch node is straightforward. Configuring three nodes that actually find each other and form a cluster requires understanding discovery.

The Configuration File

The stock elasticsearch.yml is mostly comments with a handful of defaults. Here’s every line you need to change — use the reference table if you prefer to edit manually, or follow the copy-paste commands below.

File modifications reference

Line to find                                         →  Replace with
#cluster.name: my-application                        →  cluster.name: homelab-prod
#node.name: node-1                                   →  node.name: es01
path.data: /var/lib/elasticsearch                    →  path.data: /opt/lib/elasticsearch
#network.host: 192.168.0.1                           →  network.host: ["_local_", "192.168.1.61"]
#http.port: 9200                                     →  http.port: 9200
#discovery.seed_hosts: ["host1", "host2"]            →  discovery.seed_hosts: ["192.168.1.61", "192.168.1.62", "192.168.1.63"]
#cluster.initial_master_nodes: ["node-1", "node-2"]  →  cluster.initial_master_nodes: ["es01", "es02", "es03"]
(add to end of file)                                 →  transport.host: 0.0.0.0
(add to end of file)                                 →  xpack.security.enabled: true

Replace the example IPs and node names with your actual values. Repeat on each node with that node’s name and IP.

Universal settings (copy-paste)

These settings are the same on every node:

sudo sed -i 's/^#cluster.name:.*/cluster.name: homelab-prod/' /etc/elasticsearch/elasticsearch.yml
sudo sed -i 's|^path.data:.*|path.data: /opt/lib/elasticsearch|' /etc/elasticsearch/elasticsearch.yml
sudo sed -i 's/^#http.port:.*/http.port: 9200/' /etc/elasticsearch/elasticsearch.yml
echo 'transport.host: 0.0.0.0' | sudo tee -a /etc/elasticsearch/elasticsearch.yml > /dev/null
echo 'xpack.security.enabled: true' | sudo tee -a /etc/elasticsearch/elasticsearch.yml > /dev/null

Node-specific settings

Open the file and replace these four lines. The values below are for es01 — adjust for each node.

sudo vi /etc/elasticsearch/elasticsearch.yml

Find #node.name: node-1 — uncomment and set to your node’s short hostname:

node.name: es01

Find #network.host: 192.168.0.1 — uncomment and set to your node’s IP:

network.host: ["_local_", "192.168.1.61"]

Find #discovery.seed_hosts: ["host1", "host2"] — uncomment and list all node IPs:

discovery.seed_hosts: ["192.168.1.61", "192.168.1.62", "192.168.1.63"]

Find #cluster.initial_master_nodes: ["node-1", "node-2"] — uncomment and list all node names (must match node.name exactly):

cluster.initial_master_nodes: ["es01", "es02", "es03"]

Note: cluster.initial_master_nodes must exactly match each node’s node.name value — these are name lookups, not network addresses. discovery.seed_hosts handles the actual network discovery using IPs.
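
One way to double-check your edits is to print only the active (non-comment) settings. On es01, assuming the values used in this guide, you should see something close to this (the transport TLS lines from the security section will join the list later):

sudo grep -E '^[a-z]' /etc/elasticsearch/elasticsearch.yml
# cluster.name: homelab-prod
# node.name: es01
# path.data: /opt/lib/elasticsearch
# path.logs: /var/log/elasticsearch
# network.host: ["_local_", "192.168.1.61"]
# http.port: 9200
# discovery.seed_hosts: ["192.168.1.61", "192.168.1.62", "192.168.1.63"]
# cluster.initial_master_nodes: ["es01", "es02", "es03"]
# transport.host: 0.0.0.0
# xpack.security.enabled: true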

Let’s walk through each setting.

cluster.name

All nodes with the same cluster.name will attempt to join each other. We append the environment (prod, staging) so you can run separate clusters on the same network without them merging. If you have a test cluster and a production cluster, different cluster names keep them isolated.

node.name

Defaults to the hostname. We set it explicitly to inventory_hostname (the name in your Ansible inventory) so cluster health output shows recognizable names instead of auto-generated IDs.

network.host vs transport.host

  • network.host is a list of addresses this node listens on for the REST API (port 9200). We use ["_local_", "192.168.1.61"] — the _local_ special value binds to the loopback interface so curl http://localhost:9200 works for health checks and ILM API calls. The second entry is the node’s actual IP for cluster and client traffic. Without _local_, you’d have to use the node’s IP in every curl command.
  • transport.host is for the internal cluster transport protocol (port 9300). We set this to 0.0.0.0 because discovery traffic may come in on any interface, and binding to a specific IP can cause discovery failures in some network configurations.
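
Once the service is running (later in this chapter), you can verify both listeners with ss: 9200 should be bound to loopback plus the node IP, and 9300 to all interfaces:

sudo ss -tlnp | grep -E ':9200|:9300'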

discovery.seed_hosts and cluster.initial_master_nodes

These two settings control cluster formation:

  • discovery.seed_hosts — the list of nodes to contact when trying to join a cluster. Every node should list all other nodes. On startup, Elasticsearch pings each seed host on port 9300 looking for a cluster to join. The playbook resolves inventory hostnames to IP addresses to avoid DNS dependency during cluster bootstrap — use IPs here, not hostnames.
  • cluster.initial_master_nodes — required for bootstrapping a brand-new cluster. Lists the nodes eligible to be elected as the first master. After the cluster forms, this setting is effectively ignored (the cluster remembers its own configuration). These entries must exactly match each node’s node.name — which is the inventory_hostname from your Ansible inventory.

What goes wrong: If you list only 2 of 3 nodes in seed_hosts, the missing node may form its own single-node cluster instead of joining yours. Always list all nodes. If initial_master_nodes doesn’t match the actual node.name values, bootstrap fails silently — the nodes wait forever for a master that never arrives.
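
If discovery is failing and you suspect the network rather than the config, a simple TCP reachability test from one node to another narrows it down. This assumes nmap-ncat (the nc command) is installed, which it is by default on Rocky Linux:

# From es01, check that es02's transport port is reachable
nc -zv 192.168.1.62 9300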

xpack.security.enabled: true

Security is enabled from the start. There’s no “deploy without security first, add it later” step — that approach leads to clusters running unprotected for months because “later” never comes. We configure transport TLS and built-in user authentication as part of the initial deployment.

With security enabled, the deployment configures:

  • Transport TLS — encrypted inter-node communication on port 9300, using PKCS#12 certificates with verification_mode: certificate
  • Built-in user authentication — the elastic superuser, kibana_system, and logstash_system passwords are set automatically from vault variables
  • Keystore management — auto-generated keystore entries from the ES 9.x installer are removed (ES 9.3+ treats even empty-password entries as “password provided,” which breaks password-less PKCS12 certs)

We’ll set up TLS certificates first, then start the cluster, then configure passwords. Once passwords are set, every curl command against the REST API will require -u elastic:YOUR_PASSWORD — you’ll see this in the Verification section later in this chapter.

TLS Certificate Setup (Manual Path)

If you’re following the manual path, you need to generate and distribute TLS certificates before starting the cluster. The playbook automates all of this, but here’s what it does and how to do it by hand.

Step 1: Generate a Certificate Authority (CA) on the first node:

# Ensure the certs directory exists (the ES 9.x installer usually creates it)
sudo mkdir -p /etc/elasticsearch/certs

sudo /usr/share/elasticsearch/bin/elasticsearch-certutil ca \
  --out /etc/elasticsearch/certs/elastic-stack-ca.p12 \
  --pass "" --days 3650

This creates a PKCS#12 CA certificate valid for 10 years. The empty password (--pass "") keeps things simple for a home lab — the cert files are protected by filesystem permissions instead.

Step 2: Generate a node certificate signed by that CA:

sudo /usr/share/elasticsearch/bin/elasticsearch-certutil cert \
  --ca /etc/elasticsearch/certs/elastic-stack-ca.p12 \
  --ca-pass "" --days 3650 \
  --out /etc/elasticsearch/certs/elastic-certificates.p12 \
  --pass ""

Step 3: Copy both files to every Elasticsearch node:

# Create the certs directory on each node
sudo mkdir -p /etc/elasticsearch/certs

# Copy from the first node to each of the other nodes
# (writing into /etc/elasticsearch on the remote side requires root there;
#  if direct root scp isn't allowed, copy to your home directory and sudo mv into place)
sudo scp /etc/elasticsearch/certs/elastic-stack-ca.p12 es02:/etc/elasticsearch/certs/
sudo scp /etc/elasticsearch/certs/elastic-certificates.p12 es02:/etc/elasticsearch/certs/
# Repeat for es03

Set ownership and permissions on every node:

sudo chown root:elasticsearch /etc/elasticsearch/certs/*.p12
sudo chmod 660 /etc/elasticsearch/certs/*.p12

Step 4: Verify transport SSL keystore entries are removed on every node. The cleanup step earlier removed the auto-generated entries. Confirm they’re gone — ES 9.3+ treats any keystore password entry (even an empty one) as “a password was provided,” which breaks certs generated with --pass "":

sudo /usr/share/elasticsearch/bin/elasticsearch-keystore list | grep -q 'transport.ssl' && echo "WARNING: transport SSL keystore entries still present — remove them" || echo "OK: no transport SSL keystore entries"

If any remain, remove them:

sudo /usr/share/elasticsearch/bin/elasticsearch-keystore remove xpack.security.transport.ssl.keystore.secure_password 2>/dev/null || true
sudo /usr/share/elasticsearch/bin/elasticsearch-keystore remove xpack.security.transport.ssl.truststore.secure_password 2>/dev/null || true

Since the PKCS12 certs have no password (--pass ""), ES opens them directly without needing keystore password entries.

Step 5: Add transport TLS settings to elasticsearch.yml on every node:

cat << 'EOF' | sudo tee -a /etc/elasticsearch/elasticsearch.yml > /dev/null

# Transport TLS — secures inter-node cluster communication
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.client_authentication: required
xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: certs/elastic-certificates.p12
EOF

Certificate expiry: The --days 3650 flag gives you 10 years. Mark your calendar — when these certs expire, inter-node communication silently fails and your cluster won’t form. Regenerate and redistribute before that happens.
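
If you'd like to check the actual expiry date rather than trusting the calendar math, the bundled JDK's keytool can read the PKCS#12 file. This assumes the default package layout with the JDK under /usr/share/elasticsearch/jdk:

sudo /usr/share/elasticsearch/jdk/bin/keytool -list -v \
  -keystore /etc/elasticsearch/certs/elastic-certificates.p12 \
  -storetype PKCS12 -storepass "" | grep -i 'valid'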

The companion playbook automates all of the above (CA generation, cert distribution, keystore management, and password setup) if you prefer not to do this by hand.

Swappiness

Elasticsearch and virtual memory swapping are enemies. When the JVM’s heap gets swapped to disk, garbage collection pauses go from milliseconds to seconds, and your cluster becomes unresponsive.

The playbook sets vm.swappiness=0 via sysctl:

sudo tee /etc/sysctl.d/02-swappiness.conf > /dev/null << 'EOF'
vm.swappiness = 0
EOF

sudo sysctl -p /etc/sysctl.d/02-swappiness.conf

This tells the kernel to avoid swapping unless the system is critically low on memory. It’s not the same as disabling swap entirely — the swap partition still exists as a safety net — but the kernel won’t proactively move JVM pages to disk.

Virtual Memory Maps

Elasticsearch uses memory-mapped files (mmapfs) for its Lucene indices. The Linux default vm.max_map_count of 65530 is too low — Elasticsearch needs at least 262144 and will fail to start without it.

The playbook sets this via sysctl:

sudo tee /etc/sysctl.d/02-elasticsearch.conf > /dev/null << 'EOF'
vm.max_map_count = 262144
EOF

sudo sysctl -p /etc/sysctl.d/02-elasticsearch.conf

If you’re setting up manually and skip this step, Elasticsearch will crash on startup with an error about max virtual memory areas vm.max_map_count [65530] is too low. This is the second most common “why won’t ES start” issue after JVM heap misconfiguration.

JVM Heap Sizing

Chapter 2 mentioned that Elasticsearch uses roughly 50% of available RAM for the JVM heap. That’s the general recommendation — but the ES 9.x package defaults to a 1 GB heap, which is adequate for a home lab with light log ingestion. Unless you’re indexing thousands of events per second or running expensive aggregations, you probably don’t need to touch this.

If you do want to change it, the configuration file is /etc/elasticsearch/jvm.options. The key settings are -Xms (initial heap) and -Xmx (maximum heap). Set them to the same value to avoid runtime resizing:

# /etc/elasticsearch/jvm.options (only change if needed)
-Xms2g
-Xmx2g

If you need to change heap size, edit the file:

sudo vi /etc/elasticsearch/jvm.options

The critical rule: never set the JVM heap higher than half your total RAM. Elasticsearch needs the other half for the operating system’s filesystem cache, which Lucene relies on heavily for search performance. A node with 4 GB of RAM should have at most a 2 GB heap. Going higher actually hurts performance because you’re starving the filesystem cache.
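
If you're unsure what half your RAM is in megabytes, a one-liner reads it straight from /proc/meminfo:

awk '/MemTotal/ {printf "Half of total RAM: %.0f MB\n", $2 / 1024 / 2}' /proc/meminfo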

Unlike Logstash (where Chapter 5 covers allocating 62.5% of RAM to the pipeline), Elasticsearch’s heap default is conservative and usually fine as-is. The playbook does not manage ES JVM heap — it’s one of the few settings left at the package default, because the right value depends entirely on your hardware and workload.

Firewall Rules

Elasticsearch needs two ports open, but only to specific hosts:

  • Port 9200 (REST API) — Kibana, Logstash, and other ES nodes need this. Optionally, monitoring tools like CheckMK.
  • Port 9300 (cluster transport) — only other ES nodes, Kibana, and Logstash need this.

The playbook uses firewalld rich rules with source address restrictions instead of opening ports to the entire network. This is more secure than firewall-cmd --add-port=9200/tcp, which opens the port to any source. In a home lab, it’s probably not critical, but it’s good practice and the playbook does it automatically.

On each ES node, allow every host that needs to talk to Elasticsearch — the other ES nodes, the Kibana host, and the Logstash host. Replace the IPs below with your actual IPs:

# Allow all three ES nodes on ports 9200 and 9300
sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.1.61" port port="9200" protocol="tcp" accept'
sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.1.62" port port="9200" protocol="tcp" accept'
sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.1.63" port port="9200" protocol="tcp" accept'

sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.1.61" port port="9300" protocol="tcp" accept'
sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.1.62" port port="9300" protocol="tcp" accept'
sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.1.63" port port="9300" protocol="tcp" accept'

# Allow Kibana host on port 9200 (REST API queries)
sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.1.64" port port="9200" protocol="tcp" accept'

# Allow Logstash host on port 9200 (index writes)
sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.1.65" port port="9200" protocol="tcp" accept'

# Reload to apply
sudo firewall-cmd --reload

Repeat these commands on all three ES nodes. If your Kibana or Logstash host is co-located on one of the ES nodes, some of these rules overlap with the ES node rules above — that’s fine, adding the same rule twice has no effect.
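
To review what you've added, list the rich rules; you should see one line per source/port combination:

sudo firewall-cmd --list-rich-rules
# rule family="ipv4" source address="192.168.1.61" port port="9200" protocol="tcp" accept
# ... (one line per rule you added)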

Start the Service

If you’re following the manual path, start Elasticsearch on each node:

sudo systemctl enable --now elasticsearch

Repeat this on all three nodes. Elasticsearch takes about 30 seconds to start — wait for it to finish before proceeding. You can watch startup progress with sudo journalctl -u elasticsearch -f.
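
If you'd rather script the wait than watch logs, one approach uses the fact that, with security enabled, an unauthenticated request returns HTTP 401, which proves the REST API is answering:

# Poll until the REST API responds; 000 means "not up yet", 401 means "up and asking for auth"
until [ "$(curl -s -o /dev/null -w '%{http_code}' http://localhost:9200)" = "401" ]; do
  sleep 5
done
echo "Elasticsearch is up"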

Built-in User Password Setup

With the cluster running and security enabled, the elastic superuser has a randomly generated password from the installer. Reset it to a password you control:

sudo /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic -b -i

Enter your chosen password twice when prompted. If this command fails with “Failed to determine the health of the cluster,” the cluster isn’t fully formed yet — wait 30-60 seconds and try again. All three nodes must be running and able to reach each other on port 9300.

Now set the kibana_system and logstash_system passwords via the API. These are service account passwords that Kibana and Logstash will use to authenticate against Elasticsearch:

curl -X POST -u elastic:YOUR_PASSWORD \
  -H "Content-Type: application/json" \
  http://localhost:9200/_security/user/kibana_system/_password \
  -d '{"password": "YOUR_KIBANA_SYSTEM_PASSWORD"}'

curl -X POST -u elastic:YOUR_PASSWORD \
  -H "Content-Type: application/json" \
  http://localhost:9200/_security/user/logstash_system/_password \
  -d '{"password": "YOUR_LOGSTASH_SYSTEM_PASSWORD"}'

Replace YOUR_PASSWORD with the elastic password you just set, and choose strong passwords for the other two accounts. Keep all three passwords somewhere safe — you’ll need them in the Kibana and Logstash chapters.
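
You can confirm each service account works before the later chapters depend on it; the _security/_authenticate endpoint returns the authenticated user's details:

curl -u kibana_system:YOUR_KIBANA_SYSTEM_PASSWORD "http://localhost:9200/_security/_authenticate?pretty"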

Verification

Now verify the cluster formed correctly:

curl -u elastic:YOUR_PASSWORD http://localhost:9200/_cluster/health?pretty

Expected output (shard counts may be small nonzero values, since security features create a few system indices):

{
  "cluster_name" : "homelab-prod",
  "status" : "green",
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "number_of_pending_tasks" : 0
}

What matters:

  • status: green means all primary and replica shards are allocated. yellow means primaries are allocated but some replicas aren’t (common briefly after startup while replicas are being assigned). red means data is missing — stop and debug.
  • number_of_nodes: 3 confirms all three nodes found each other.

If you see number_of_nodes: 1, discovery failed. Check firewall rules on port 9300, verify discovery.seed_hosts lists all three nodes, and check /var/log/elasticsearch/ for errors.
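
The cluster log is often the fastest diagnostic. It's named after cluster.name, so with the settings in this guide:

sudo grep -i 'master not discovered' /var/log/elasticsearch/homelab-prod.log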

You can also check each node individually:

# Node info
curl -u elastic:YOUR_PASSWORD http://localhost:9200/_cat/nodes?v

# Expected output (3 rows, one per node; some columns omitted here):
# ip            heap.percent ram.percent cpu load_1m name
# 192.168.1.61            15          65   2    0.10 es01
# 192.168.1.62            12          58   1    0.05 es02
# 192.168.1.63            14          62   1    0.08 es03

Important: Do not proceed to Chapter 4 until you see 3 nodes and a green or yellow cluster status. Debugging Kibana and Logstash problems on top of a broken cluster is miserable.

What Automation Looks Like

If you’re using the Ansible playbooks instead of (or alongside) manual setup, here’s what the svc_elasticsearch role does:

  1. Imports the GPG key and copies the Elastic repo file
  2. Installs Elasticsearch via dnf with state: present (safe to run repeatedly)
  3. Stops ES before configuration (ES 9.x auto-starts on install with auto-generated config)
  4. Relocates the data path to /opt/lib/elasticsearch with a symlink — only on first install
  5. Wipes auto-configured SSL keystore entries (ES 9.x generates these on install; they conflict with managed security config)
  6. Generates TLS certificates when elk_security_enabled: true — creates a CA, generates node certs, fetches them to the Ansible controller, and distributes to all nodes
  7. Removes transport SSL password entries from the keystore (ES 9.3+ requires this for password-less PKCS12 certs)
  8. Deploys elasticsearch.yml from a Jinja2 template with cluster settings and conditional transport TLS config
  9. Sets vm.swappiness=0 and vm.max_map_count=262144 via ansible.posix.sysctl
  10. Opens firewall ports 9200 and 9300 with source-restricted rich rules
  11. Starts and enables the elasticsearch service
  12. Resets the elastic superuser password and sets kibana_system and logstash_system passwords via the REST API (security mode only, runs once across the cluster)

The pro_elasticsearch role then:

  1. Creates ILM policies via the Elasticsearch REST API (covered in Chapter 7)
  2. Creates index templates that apply ILM policies to new indices
  3. Applies ILM policies to any existing indices matching known patterns
  4. Updates the MOTD with Elasticsearch service information

On second run, every task reports ok (not changed) — the playbook is fully idempotent. Configuration changes trigger a handler that restarts Elasticsearch only when the config template actually changes.

Verification Checkpoint

Before moving to Chapter 4, confirm the following (a consolidated check script follows this list):

  • curl -u elastic:YOUR_PASSWORD http://localhost:9200/_cluster/health?pretty shows "status": "green" (or "yellow" on a fresh cluster)
  • "number_of_nodes": 3 — all three nodes found each other
  • curl -u elastic:YOUR_PASSWORD http://localhost:9200/_cat/nodes?v lists all three node names
  • sysctl vm.swappiness returns 0 on all nodes
  • sysctl vm.max_map_count returns 262144 on all nodes
  • firewall-cmd --list-all shows rich rules for ports 9200 and 9300
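
If you'd rather run these checks in one pass, here's a small script along those lines. It assumes your elastic password is in ES_PASS and runs on any ES node:

# Consolidated checkpoint — set ES_PASS to your elastic password first
ES_PASS="YOUR_PASSWORD"

curl -s -u "elastic:${ES_PASS}" "http://localhost:9200/_cluster/health?pretty" \
  | grep -E '"status"|"number_of_nodes"'
sysctl vm.swappiness vm.max_map_count
sudo firewall-cmd --list-rich-rules | grep -E 'port="9200"|port="9300"'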

Your Elasticsearch cluster is healthy. Now let’s put a UI in front of it.

Want the automation code? Get the production-ready Ansible playbooks that deploy this entire ELK stack in ~20 minutes.

Get Playbooks — $29