In this article, we'll deploy the ECK operator to a Kubernetes cluster using Helm and build a quick, ready-to-use logging solution with Elasticsearch, Kibana, and Filebeat.
What is ECK?
With Elastic Cloud on Kubernetes, we can streamline critical operations, such as:
Managing and monitoring multiple clusters
Scaling cluster capacity and storage
Performing safe configuration changes through rolling upgrades
Securing clusters with TLS certificates
Setting up hot-warm-cold architectures with availability zone awareness
After that, we can see that the ECK operator pod is running:
kubectl get po elastic-operator-0 -n monitoring
NAME READY STATUS RESTARTS AGE
elastic-operator-0 1/1 Running 0 43s
The pod is up and running
Creating Elasticsearch, Kibana, and Filebeat resources
Elasticsearch
Kibana
Beats (Filebeat/Metricbeat)
APM Server
Elastic Maps
etc
In our case, we'll use only the first three of them, because we just want to deploy a classic EFK stack.
Let's deploy the following resources in order:
Elasticsearch cluster: this cluster has 3 nodes, each with 100Gi of persistent storage, and HTTP communication secured with a self-signed TLS certificate.
# This sample sets up an Elasticsearch cluster with 3 nodes.
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch-logging
  namespace: monitoring
spec:
  version: 8.2.0
  nodeSets:
  - name: default
    config:
      # most Elasticsearch configuration parameters are possible to set, e.g: node.attr.attr_name: attr_value
      node.roles: ["master", "data", "ingest", "ml"]
      # this allows ES to run on nodes even if their vm.max_map_count has not been increased, at a performance cost
      node.store.allow_mmap: false
    podTemplate:
      metadata:
        labels:
          # additional labels for pods
          purpose: logging
      spec:
        containers:
        - name: elasticsearch
          # specify resource limits and requests
          resources:
            limits:
              memory: 4Gi
              cpu: 2
          env:
          - name: ES_JAVA_OPTS
            value: "-Xms2g -Xmx2g"
    count: 3
    # request 100Gi of persistent data storage for pods in this topology element
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data # Do not change this name unless you set up a volume mount for the data path.
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 100Gi
        storageClassName: gp2
  http:
    tls:
      selfSignedCertificate:
        # add a list of SANs into the self-signed HTTP certificate
        subjectAltNames:
        - dns: elasticsearch-logging-es-http.monitoring.svc.cluster.local
        - dns: elasticsearch-logging-es-http.monitoring.svc
        - dns: "*.monitoring.svc"
        - dns: "*.monitoring.svc.cluster.local"
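Assuming the manifest above is saved as elasticsearch.yaml (the file name is our choice), it can be applied and the cluster's health checked like this:

```shell
# apply the Elasticsearch manifest
kubectl apply -f elasticsearch.yaml

# wait until HEALTH turns green and PHASE becomes Ready
kubectl get elasticsearch elasticsearch-logging -n monitoring
```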
2. The next one is Kibana: very simple, we just reference the Elasticsearch cluster from the Kibana object.
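A minimal Kibana manifest along those lines might look like the following sketch (it reuses the elasticsearch-logging cluster and monitoring namespace from above; the resource name kibana-logging matches the service used in the port-forward step):

```yaml
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana-logging
  namespace: monitoring
spec:
  version: 8.2.0
  count: 1
  # connect Kibana to the Elasticsearch cluster created above
  elasticsearchRef:
    name: elasticsearch-logging
```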
2. Run a port-forward to the Kibana service: port 5601 is forwarded to localhost.
kubectl port-forward svc/kibana-logging-kb-http 5601:5601 -n monitoring
Forwarding from 127.0.0.1:5601 -> 5601
Forwarding from [::1]:5601 -> 5601
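ECK generates a password for the built-in elastic user and stores it in a secret named <cluster-name>-es-elastic-user; with our cluster name, it can be read back like this:

```shell
# print the auto-generated password of the built-in elastic user
kubectl get secret elasticsearch-logging-es-elastic-user -n monitoring \
  -o go-template='{{.data.elastic | base64decode}}'
```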
Built on the Kubernetes Operator pattern, ECK extends the basic Kubernetes orchestration capabilities to support the setup and management of Elasticsearch, Kibana, APM Server, Enterprise Search, Beats, Elastic Agent, and Elastic Maps Server on Kubernetes.
In this case, we use helmfile to manage the Helm deployments: helmfile.yaml
2. But we can also do that with a plain Helm installation:
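For example, installing the operator directly with Helm could look like this (the elastic/eck-operator chart and repository are the ones documented by Elastic; the release name and namespace are our choices):

```shell
# add the official Elastic Helm repository
helm repo add elastic https://helm.elastic.co
helm repo update

# install the ECK operator into the monitoring namespace
helm install elastic-operator elastic/eck-operator \
  -n monitoring --create-namespace
```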
There are a lot of resources in the Elastic Stack that ECK can manage, such as:
3. Let's log in to Kibana with the user elastic and the password that we got before, go to the Analytics → Discover section, and check the logs: