What you’ll accomplish: Install Kibana, put it behind an Apache reverse proxy with SSL, configure the Elasticsearch connection, and verify you can access the dashboard from your browser.
Why Kibana
Kibana is the window into your Elasticsearch data. Without it, you’re writing curl commands against the REST API to search logs — functional, but miserable for day-to-day use. Kibana gives you a web UI with a query bar, time range selectors, index pattern management, and visualization tools.
You might ask: why not Grafana? Grafana is excellent for metrics dashboards, but for log exploration it’s a second-class citizen compared to Kibana. Kibana has native KQL (Kibana Query Language) that maps directly to Elasticsearch queries, built-in index pattern management, and the Discover tab that’s purpose-built for scrolling through log entries. Grafana’s Elasticsearch data source works, but you’ll constantly feel like you’re fighting the interface.
We co-locate Kibana on es01 — the first Elasticsearch node. Kibana is a lightweight Node.js application that uses minimal RAM (~200 MB). It doesn’t need its own host.
Installation
Kibana installs from the same Elastic repository we set up in Chapter 3. If your Kibana host is also an ES node, the repo file is already there. If it’s a separate machine, create the repo and import the GPG key first:
# Only needed if this host doesn't already have the Elastic repo from Chapter 3
sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
sudo tee /etc/yum.repos.d/elastic-9.x.repo > /dev/null << 'EOF'
[elastic-9.x]
name=Elastic repository for 9.x packages
baseurl=https://artifacts.elastic.co/packages/9.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=0
autorefresh=1
type=rpm-md
EOF
Install Kibana:
sudo dnf install kibana -y --enablerepo=elastic-9.x
Data Path Relocation
Same pattern as Elasticsearch — we move Kibana’s data to /opt/kibana/lib to keep it off the root partition:
sudo mkdir -p /opt/kibana/lib
sudo chown kibana:kibana /opt/kibana /opt/kibana/lib
sudo chmod 700 /opt/kibana/lib
The playbook handles this on first install and skips it on subsequent runs.
Elasticsearch Connection
Kibana needs to know where Elasticsearch is. The stock kibana.yml has the settings we need commented out.
File modifications reference
| Line to find | Replace with |
|---|---|
| `#server.name: "your-hostname"` | `server.name: "kibana"` |
| `#path.data: data` | `path.data: /opt/kibana/lib` |
| `#elasticsearch.hosts: ["http://localhost:9200"]` | `elasticsearch.hosts: ["http://192.168.1.61:9200", "http://192.168.1.62:9200", "http://192.168.1.63:9200"]` |
Replace the example IPs with your actual Elasticsearch node addresses.
Universal settings (copy-paste)
sudo sed -i 's/^#server.name:.*/server.name: "kibana"/' /etc/kibana/kibana.yml
sudo sed -i 's|^#path.data:.*|path.data: /opt/kibana/lib|' /etc/kibana/kibana.yml
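A note on the second command: it uses `|` as the sed delimiter because the replacement text contains slashes. The sketch below runs the same pair of edits against a throwaway stub so you can see the before/after without touching the real file (the real commands above target /etc/kibana/kibana.yml with sudo):

```shell
# Temp stub with the two commented-out defaults from the stock kibana.yml
yml=$(mktemp)
printf '#server.name: "your-hostname"\n#path.data: data\n' > "$yml"

sed -i 's/^#server.name:.*/server.name: "kibana"/' "$yml"
# '|' as the delimiter avoids having to escape the slashes in /opt/kibana/lib
sed -i 's|^#path.data:.*|path.data: /opt/kibana/lib|' "$yml"

cat "$yml"
# server.name: "kibana"
# path.data: /opt/kibana/lib
```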
Node-specific settings
sudo vi /etc/kibana/kibana.yml
Find #elasticsearch.hosts: ["http://localhost:9200"] — uncomment and list all your ES node IPs:
elasticsearch.hosts: ["http://192.168.1.61:9200", "http://192.168.1.62:9200", "http://192.168.1.63:9200"]
Listing all three Elasticsearch nodes gives Kibana client-side failover. If es01 is down for maintenance, Kibana automatically routes queries to es02 or es03.
Important: The playbook builds this list from the `elk_elasticsearch_hosts` variable, which defaults to `["localhost"]`. If you don’t override it in `group_vars/all.yml` with your actual ES node IPs, Kibana will only talk to the local node — fine for single-node setups, but wrong for a cluster.
Notice we don’t set server.host here — Kibana defaults to localhost, which is exactly what we want. Since Apache on the same host proxies to Kibana, there’s no reason for Kibana to listen on an external interface. This also means port 5601 is never exposed to the network.
Elasticsearch Authentication
With security enabled (the default), Kibana needs credentials to connect to Elasticsearch. Add these lines to kibana.yml:
elasticsearch.username: "kibana_system"
elasticsearch.password: "YOUR_KIBANA_SYSTEM_PASSWORD"
Replace YOUR_KIBANA_SYSTEM_PASSWORD with the value you set for vault_elk_kibana_system_password. The kibana_system user is a built-in Elasticsearch account with exactly the permissions Kibana needs — don’t use the elastic superuser here.
When using the companion playbook, the Kibana template automatically includes kibana_system credentials from your vault when elk_security_enabled: true.
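Taken together, the hand-edited portion of kibana.yml ends up looking like this (example IPs and a placeholder password; substitute your own values):

```yaml
server.name: "kibana"
path.data: /opt/kibana/lib
elasticsearch.hosts: ["http://192.168.1.61:9200", "http://192.168.1.62:9200", "http://192.168.1.63:9200"]
elasticsearch.username: "kibana_system"
elasticsearch.password: "YOUR_KIBANA_SYSTEM_PASSWORD"
```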
Apache Reverse Proxy
Why a Reverse Proxy
Kibana listens on port 5601 over plain HTTP. You could expose 5601 directly and call it done, but:
- No HTTPS. Kibana doesn’t handle TLS natively without significant configuration. Apache’s `mod_ssl` does it in 5 lines.
- Non-standard port. Users remember `https://kibana.example.com`, not `http://kibana.example.com:5601`.
- Future flexibility. When you want to add basic auth, rate limiting, or access logs, Apache handles all of that without touching Kibana.
Configuration
Install Apache with SSL and LDAP modules:
sudo dnf install httpd openldap-devel mod_ldap mod_ssl -y
We install mod_ldap and openldap-devel upfront even if you’re not using LDAP authentication — it avoids a second install-and-restart cycle later if you decide to enable it. These packages are small and don’t affect Apache’s behavior unless you explicitly configure LDAP.
Rather than editing the stock ssl.conf (which gets overwritten on mod_ssl updates), we create a dedicated config file for Kibana. This keeps your customizations cleanly separated.
SSL Certificate
For a home lab that lives entirely on your LAN, a self-signed certificate is fine — your browser will complain, but the traffic is still encrypted. If you’re exposing this externally (through Cloudflare Tunnel, a VPN, or port forwarding), use a real certificate from Let’s Encrypt instead.
Option A: Self-signed (LAN only)
# Generate a self-signed certificate
sudo openssl req -x509 -nodes -days 365 \
-newkey rsa:2048 \
-keyout /etc/pki/tls/certs/kibana.example.com.key \
-out /etc/pki/tls/certs/kibana.example.com.crt \
-subj "/CN=kibana.example.com"
# Lock down the private key
sudo chmod 600 /etc/pki/tls/certs/kibana.example.com.key
Replace kibana.example.com with your actual hostname in both the filename and the -subj flag.
Using a real certificate instead? Place your cert and key at the same paths (/etc/pki/tls/certs/kibana.example.com.crt and .key) — or update the paths in the Apache config below. The rest of the guide works the same either way.
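Either way, it’s worth confirming what you ended up with: `openssl x509` prints a certificate’s subject and expiry date. The sketch below generates a throwaway cert in a temp directory purely to demonstrate the check; point the same inspection command at your real .crt file:

```shell
# Throwaway cert in a temp dir, just to demonstrate the inspection command.
# Against the real file:
#   openssl x509 -in /etc/pki/tls/certs/kibana.example.com.crt -noout -subject -enddate
dir=$(mktemp -d)
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout "$dir/kibana.key" -out "$dir/kibana.crt" \
  -subj "/CN=kibana.example.com" 2>/dev/null

# Subject should show CN = kibana.example.com; enddate should be ~365 days out
openssl x509 -in "$dir/kibana.crt" -noout -subject -enddate
```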
The Apache Configuration
Create the Kibana virtual host config. Replace kibana.example.com with your actual hostname:
sudo tee /etc/httpd/conf.d/kibana.conf > /dev/null << 'EOF'
# HTTP — redirect everything to HTTPS
<VirtualHost *:80>
ServerName kibana.example.com
Redirect permanent "/" "https://kibana.example.com/"
</VirtualHost>
# HTTPS — SSL termination + reverse proxy to Kibana
<VirtualHost *:443>
ServerName kibana.example.com
SSLEngine on
SSLCertificateFile /etc/pki/tls/certs/kibana.example.com.crt
SSLCertificateKeyFile /etc/pki/tls/certs/kibana.example.com.key
ProxyPreserveHost On
ProxyPass "/" "http://localhost:5601/"
ProxyPassReverse "/" "http://localhost:5601/"
</VirtualHost>
EOF
This gives you:
- HTTPS with your SSL certificate — Apache handles TLS termination so Kibana doesn’t have to.
- HTTP-to-HTTPS redirect — anyone hitting `http://kibana.example.com` gets redirected to HTTPS automatically.
- Reverse proxy — all traffic proxied to Kibana on localhost:5601, which never needs to be exposed to the network.
Note: The stock `ssl.conf` includes a default `<VirtualHost _default_:443>` block. It won’t conflict with your `kibana.conf` as long as the `ServerName` directives differ, but if you want a cleaner setup, you can comment out or remove the default VirtualHost in `ssl.conf`.
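If your real hostname differs from kibana.example.com, a single sed pass can swap the placeholder throughout the config you just created. Demonstrated on a temp copy below (`kibana.lan` is a made-up stand-in); run the same sed with sudo against /etc/httpd/conf.d/kibana.conf:

```shell
# Temp copy with two of the lines that contain the placeholder hostname
conf=$(mktemp)
printf 'ServerName kibana.example.com\nRedirect permanent "/" "https://kibana.example.com/"\n' > "$conf"

# Escape the dots so they match literally; /g replaces every occurrence on each line
sed -i 's/kibana\.example\.com/kibana.lan/g' "$conf"

grep ServerName "$conf"
# ServerName kibana.lan
```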
SELinux — The One Thing That Breaks Everything
After configuring Apache, you’ll get a 503 Service Unavailable error. Apache is running, Kibana is running, the proxy config looks correct — but Apache can’t connect to port 5601.
The cause: SELinux. By default, SELinux prevents httpd from making outbound network connections. The fix is a single boolean:
sudo setsebool -P httpd_can_network_connect on
The -P makes it persistent across reboots. Without this boolean, Apache can serve static files but can’t proxy to backend services. This is the single most common “why doesn’t my reverse proxy work on Rocky Linux” question, and the official docs never mention it.
The playbook sets this via ansible.posix.seboolean, which is idempotent.
Tip: If you’re ever debugging SELinux denials, check the audit log: `ausearch -m AVC -ts recent`. It tells you exactly which process was denied what action, and usually suggests the boolean or policy change needed.
The companion playbook handles SSL, the proxy config, and the SELinux boolean in a single role.
Authentication
Once Kibana and Apache are running (we’ll start them shortly), Elasticsearch’s built-in authentication provides a login screen — users must authenticate with a valid Elasticsearch username and password (such as the elastic superuser account you set up in Chapter 3) before accessing any data.
This gives you two layers of access control: network-level (firewall rules control which hosts can reach Apache) and user-level (Elasticsearch authentication controls who can see data). For most home labs, this is sufficient.
Option A: Elasticsearch Built-in Authentication (Default)
This is what you get with no additional configuration — Kibana prompts for login using Elasticsearch’s built-in users. The elastic superuser can access everything. If you need more granular access, create additional users and roles via the Kibana Security UI or the Elasticsearch API.
Option B: LDAP Authentication (Optional)
If you’re sharing the Kibana instance with a team and have an existing Active Directory or LDAP server, you can have Apache enforce login before proxying to Kibana. This uses the mod_ldap module we installed earlier.
Add the LDAP configuration to the kibana.conf we created. This replaces the file with a version that includes a <Location> authentication block inside the HTTPS VirtualHost:
sudo tee /etc/httpd/conf.d/kibana.conf > /dev/null << 'EOF'
# HTTP — redirect everything to HTTPS
<VirtualHost *:80>
ServerName kibana.example.com
Redirect permanent "/" "https://kibana.example.com/"
</VirtualHost>
# LDAP cache settings — placed outside VirtualHost, applies globally
LDAPSharedCacheSize 500000
LDAPCacheEntries 32
LDAPCacheTTL 6000
LDAPOpCacheEntries 32
LDAPOpCacheTTL 6000
LDAPTrustedMode SSL
LDAPVerifyServerCert Off
# HTTPS — SSL termination + LDAP auth + reverse proxy to Kibana
<VirtualHost *:443>
ServerName kibana.example.com
SSLEngine on
SSLCertificateFile /etc/pki/tls/certs/kibana.example.com.crt
SSLCertificateKeyFile /etc/pki/tls/certs/kibana.example.com.key
<Location />
AuthType Basic
AuthName "Kibana-LDAP-Authentication"
AuthBasicProvider ldap
AuthLDAPURL "ldaps://ldap.example.com:636/dc=example,dc=com?sAMAccountName?sub?(&(objectClass=user)(objectCategory=person)(!(userAccountControl:1.2.840.113556.1.4.803:=2)))"
AuthLDAPBindDN "cn=svc-kibana,ou=Service Accounts,dc=example,dc=com"
AuthLDAPBindPassword "your-bind-password"
Require valid-user
Require ldap-group CN=ELK Admins,OU=Security Groups,OU=Groups,DC=example,DC=com
</Location>
ProxyPreserveHost On
ProxyPass "/" "http://localhost:5601/"
ProxyPassReverse "/" "http://localhost:5601/"
</VirtualHost>
EOF
Replace these values with your environment’s details:
- `ServerName` — your Kibana hostname
- `AuthLDAPURL` — your LDAP server address and search base. The filter `(&(objectClass=user)(objectCategory=person)(!(userAccountControl:1.2.840.113556.1.4.803:=2)))` excludes disabled Active Directory accounts — adjust if you’re using OpenLDAP or a different directory.
- `AuthLDAPBindDN` and `AuthLDAPBindPassword` — a service account that can search your directory. Don’t use a personal account.
- `Require ldap-group` — the AD/LDAP group whose members can access Kibana. Remove this line if you want any authenticated user to have access.
After saving, restart Apache:
sudo systemctl restart httpd
You should now see a browser login prompt when accessing Kibana.
If you’re using the companion playbook, set elk_kibana_ldap_enabled: true in group_vars/all.yml to enable this automatically — see svc_kibana8/templates/webkibana.conf.j2 for the full template.
Firewall Rules
Apache needs ports 80 (HTTP redirect) and 443 (HTTPS) open to the network:
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --reload
Port 5601 (Kibana) is intentionally not opened — Kibana listens on localhost only and is accessed through the Apache reverse proxy on 443.
Kibana also needs outbound access to port 9200 on all three Elasticsearch nodes. Outbound connections aren’t blocked by firewalld’s default policy, but the ES nodes’ firewalls must allow your Kibana host’s IP on port 9200 — if you followed the firewall rules in Chapter 3, this is already handled.
Start the Services
If you’re following the manual path, start Apache first (so the reverse proxy is ready), then Kibana:
sudo systemctl enable --now httpd
sudo systemctl enable --now kibana
Starting Apache first means the proxy is ready by the time Kibana finishes initializing; Kibana takes 15-30 seconds to come up before it starts answering requests.
Verification
After the playbook runs (or after starting the services manually), verify Kibana is accessible:
# Check Kibana is listening locally
curl -s http://localhost:5601/api/status | python3 -m json.tool | head -5
# Check Apache HTTPS proxy
curl -sk https://localhost/api/status | python3 -m json.tool | head -5
Expected: a JSON response with Kibana’s version and status. If you get a 200, Kibana is healthy. A 401 means authentication is enabled (LDAP) — that’s also correct.
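To turn that into a one-line check, pull the overall level out of the JSON. Recent Kibana versions report it under `status.overall.level` (field names assumed from that response shape); the echo below stands in for the real curl so the pipeline shape is clear:

```shell
# Stand-in payload mimicking /api/status; swap the echo for:
#   curl -s http://localhost:5601/api/status
echo '{"status":{"overall":{"level":"available"}}}' |
  python3 -c 'import json,sys; print(json.load(sys.stdin)["status"]["overall"]["level"])'
# prints: available
```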
From your browser, navigate to https://kibana.example.com (or https://<kibana-ip>). You should see the Kibana landing page or login prompt.
Important: If you get a `502` or `503` from Apache, check the SELinux boolean first: `getsebool httpd_can_network_connect`. If it says `off`, that’s your problem.
What Automation Looks Like
The svc_kibana8 role:
- Checks for existing SSL key — skips cert copy if already present
- Copies SSL certificate and key from `elk_ssl_cert_src`/`elk_ssl_key_src` to `/etc/pki/tls/certs/`
- Opens firewall ports 443 and 80
- Installs Apache with LDAP and SSL modules
- Configures SSL via `lineinfile` (cert path, key path, ServerName)
- Adds proxy block via `blockinfile` (ProxyPass to localhost:5601)
- Deploys LDAP config if `elk_kibana_ldap_enabled` is true
- Sets SELinux boolean `httpd_can_network_connect`
- Installs Kibana (GPG key, repo, dnf)
- Creates data directories on first install
- Deploys `kibana.yml` from template with Elasticsearch connection and conditional `kibana_system` credentials (when security is enabled)
- Starts both services — Apache and Kibana, via handlers on config change
Every step is idempotent — re-running the playbook on a host that’s already configured changes nothing.
Verification Checkpoint
Before moving to Chapter 5, confirm:
- `curl -s http://localhost:5601/api/status` returns a JSON response with Kibana’s version
- `curl -sk https://localhost/api/status` returns a JSON response through Apache
- `https://kibana.example.com` loads the Kibana UI in your browser (accept the self-signed cert warning)
- `getsebool httpd_can_network_connect` returns `on`
- `systemctl status httpd kibana` shows both services active
- `firewall-cmd --list-all` shows `https` and `http` services
You have a working Kibana dashboard with SSL. Now let’s set up log ingestion.