Kubernetes is the most popular container orchestrator available today. It is already offered as a managed service by most major cloud providers (Azure, AWS, GCP, etc.), which shows how quickly Kubernetes has been adopted.
There are multiple aspects to monitoring a Kubernetes cluster and its services using ELK and Beats.
For example, Metricbeat can collect resource metrics from nodes, pods, and containers, and Filebeat can ship system and container logs. In this article, however, we are going to look specifically at how to monitor the Kubernetes control plane services using Heartbeat.
The Kubernetes Control Plane is responsible for coordinating with each node in the cluster, assigning work through pod scheduling, providing administrative interfaces to the cluster, and managing cluster-wide health and services. …
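As a concrete picture of what such monitoring involves, the sketch below builds the kind of `http` monitor entries Heartbeat uses to probe control-plane health endpoints. This is only an illustration in Python: the host names, ports, and schedule are assumptions, and the real configuration lives in `heartbeat.yml` as YAML.

```python
# Sketch: Heartbeat-style http monitor entries for control-plane health
# endpoints, modeled as Python dicts. Hosts/ports below are illustrative
# assumptions; the real config goes in heartbeat.yml as YAML.

def http_monitor(name, url, schedule="@every 10s"):
    """Return one Heartbeat-style http monitor entry as a dict."""
    return {
        "type": "http",
        "id": f"{name}-health",
        "name": name,
        "urls": [url],
        "schedule": schedule,
        # Treat anything other than HTTP 200 as "down".
        "check.response.status": [200],
    }

# Typical control-plane health endpoints (hypothetical master host).
control_plane_monitors = [
    http_monitor("kube-apiserver", "https://master-node:6443/healthz"),
    http_monitor("etcd", "https://master-node:2381/health"),
    http_monitor("kube-scheduler", "https://master-node:10259/healthz"),
    http_monitor("kube-controller-manager", "https://master-node:10257/healthz"),
]
```

Each entry probes one control-plane service on its own schedule, so a scheduler outage shows up independently of, say, an etcd outage.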
Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes.
Argo CD is implemented as a Kubernetes controller which continuously monitors running applications and compares the current, live state against the desired target state (as specified in the Git repo). Argo CD reports and visualizes the differences, while providing facilities to automatically or manually sync the live state back to the desired target state. Any modifications made to the desired target state in the Git repo can be automatically applied and reflected in the specified target environments.
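The compare-and-sync idea can be sketched in a few lines: given the desired state from Git and the live state from the cluster, report which resources are out of sync. This is only a toy model of what the Argo CD controller does, with made-up resource names and specs.

```python
# Toy model of Argo CD's core loop: diff the desired (Git) state
# against the live (cluster) state and report out-of-sync resources.

def diff_states(desired, live):
    """Return resource names whose live spec differs from the desired spec."""
    out_of_sync = []
    for name, spec in desired.items():
        if live.get(name) != spec:
            out_of_sync.append(name)
    return sorted(out_of_sync)

# Illustrative states: the live Elasticsearch deployment has drifted.
desired = {"elasticsearch": {"replicas": 3}, "kibana": {"replicas": 1}}
live = {"elasticsearch": {"replicas": 2}, "kibana": {"replicas": 1}}

print(diff_states(desired, live))  # → ['elasticsearch']
```

In the real controller the "sync" step then applies the Git manifests to the cluster, automatically or on demand, until this diff is empty.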
For more details, refer to https://argoproj.github.io/argo-cd/
In this article, we will see how to set up and manage ELK on a Kubernetes cluster using Argo CD. We will use the Elastic Helm charts to set up the cluster, with a Git repo as the single source of truth for Argo CD. …
Alerting lets you take action based on changes in your data. In the ELK stack, we can create alerts using Watcher. In our previous post, we discussed all aspects of alerting in ELK.
We saw how we can leverage different channels for alerts, like email, Slack, and webhooks, and set them as actions in Watcher.
This way, we get alerts whenever a service is down or a metric has reached a certain threshold.
For instance, we use a webhook action to create tickets in JIRA (or any other ticketing platform) for tracking. Once an alert is cleared in Watcher, we would also want the corresponding ticket in JIRA to be closed automatically. …
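A webhook action in a watch is just an HTTP request that Watcher sends when the condition fires. The sketch below assembles the kind of action body that would file a JIRA ticket via JIRA's `POST /rest/api/2/issue` endpoint; the JIRA host, project key, and issue type here are assumptions for illustration.

```python
import json

# Sketch: a Watcher webhook action that creates a JIRA issue when an
# alert fires. Host, project key, and issue type are illustrative
# assumptions for your own JIRA instance.

def jira_webhook_action(summary, description, project_key="OPS"):
    """Return a Watcher-style webhook action that creates a JIRA issue."""
    issue = {
        "fields": {
            "project": {"key": project_key},
            "summary": summary,
            "description": description,
            "issuetype": {"name": "Incident"},  # hypothetical issue type
        }
    }
    return {
        "webhook": {
            "method": "post",
            "scheme": "https",
            "host": "jira.example.com",  # hypothetical JIRA host
            "port": 443,
            "path": "/rest/api/2/issue",
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps(issue),
        }
    }

action = jira_webhook_action("Service down", "Heartbeat reports kibana down")
```

Closing the ticket again when the alert clears would be a second webhook (a JIRA transition request), driven by the watch's recovery path.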
We encourage and invite upcoming bloggers to contribute original articles on any technical topic that can help others enhance their knowledge and solve issues.
TechManyu.com has been live since Feb 2015 and is growing consistently in both content and viewer base. The purpose is to help you reach a wide audience, share your knowledge, and contribute back to the community.
Our Medium publication is receiving a good response, and we are now accepting Medium stories to be published on our platform.
Alerting lets you take action based on changes in your data. It is designed around the principle that if you can query something in Elasticsearch, you can alert on it. Simply define a query, condition, schedule, and the actions to take, and Alerting will do the rest.
Until Elasticsearch v7.6, Watcher was the only way to set up alerting in ELK. Starting with v7.7, Alerting is integrated with APM, Metrics, SIEM, and Uptime; it can be centrally managed from the Management UI and provides a set of built-in actions and alerts for you to use. We will go through both options.
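Whichever option you use, a watch always has the four parts listed above: a schedule (trigger), a query (input), a condition, and actions. Here is a minimal sketch of that anatomy as a Python dict; the index name, query, and threshold are assumptions for illustration.

```python
# Sketch of Watcher's four-part watch anatomy: trigger, input,
# condition, actions. Index name and threshold are illustrative.

def build_watch(index, threshold, interval="1m"):
    """Return a watch body that fires when error hits exceed a threshold."""
    return {
        # 1. Schedule: how often the watch runs.
        "trigger": {"schedule": {"interval": interval}},
        # 2. Query: what data the watch looks at.
        "input": {
            "search": {
                "request": {
                    "indices": [index],
                    "body": {"query": {"match": {"level": "ERROR"}}},
                }
            }
        },
        # 3. Condition: when to fire.
        "condition": {
            "compare": {"ctx.payload.hits.total": {"gt": threshold}}
        },
        # 4. Actions: what to do when it fires.
        "actions": {
            "log_it": {"logging": {"text": "Error count above threshold"}}
        },
    }

watch = build_watch("app-logs-*", threshold=10)
```

The same body would be registered with the cluster via the `PUT _watcher/watch/<id>` API; the new 7.7+ Alerting UI captures the same four pieces through forms instead of JSON.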
ELK (Elasticsearch, Logstash, and Kibana) is one of the most widely used logging and monitoring solutions today, primarily because it is open source and because of the range of features it provides, the latest being Elastic APM (Application Performance Monitoring).
Whatever product you set up and use in a landscape, performance is what decides its fate. If you invest in a logging and monitoring solution that is slow or unstable, it won't last. That's why performance benchmarking becomes imperative: it tells you whether your solution is set up with optimal configurations, and the results should reflect that.
Anyone who has used ELK in a large landscape knows that if it is not set up with optimal configurations (shard size, thread pool, JVM, index size, rollover, etc.), your Elasticsearch stack can turn into a nightmare to manage and operate. So we need a tool which can do the benchmarking for us and certify how the stack is performing. …
Watcher is an Elasticsearch feature that you can use to create actions based on conditions, which are periodically evaluated using queries on your data. Watches are helpful for analyzing mission-critical and business-critical streaming data. For example, you might watch application logs for performance outages or audit access logs for security threats.
Watcher is provided as part of the X-Pack license. Details on the X-Pack settings to enable Watcher —
For details on Watcher and how to get started with creating alerts —
Now, coming back to the problem statement: how do you configure Watcher alerts for multiple Slack channels?
Consider that there are two Slack channels in different workspaces where you want to send the alerts. …
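One way to handle this with webhook actions: each Slack workspace issues its own incoming-webhook URL, so a single watch can carry one webhook action per workspace. The sketch below builds that actions block; the `/services/...` paths are placeholders for the real webhook URLs each workspace gives you.

```python
import json

# Sketch: one Watcher webhook action per Slack workspace, each posting
# to that workspace's incoming-webhook URL. The /services/... paths are
# placeholders, not real webhooks.

def slack_webhook_action(path, text):
    """Return a Watcher webhook action posting `text` to a Slack incoming webhook."""
    return {
        "webhook": {
            "method": "post",
            "scheme": "https",
            "host": "hooks.slack.com",
            "port": 443,
            "path": path,  # each workspace issues its own path
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"text": text}),
        }
    }

# Two actions in one watch: one per workspace/channel.
actions = {
    "notify_workspace_one": slack_webhook_action(
        "/services/T0001/B0001/XXXX", "Service down alert"),
    "notify_workspace_two": slack_webhook_action(
        "/services/T0002/B0002/YYYY", "Service down alert"),
}
```

Because each action is independent, one watch evaluation fans out to both channels; Watcher's native `slack` action with multiple configured accounts is an alternative when you can edit `elasticsearch.yml`.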
The most common entities that need to be secured in microservices-based applications are -
Kibana Spaces are like personas that can make specific features visible or hidden for users.
By creating and configuring Spaces, you can control which features are visible in each space. For example, you can hide Advanced Settings in a “Developer” space or show Index Management only in an “Admin” space. You can define which features to show or hide when you add or edit a space. Each space has its own saved objects, like Dashboards, Visualizations, and Index Patterns.
Now, the point to note is that Spaces can only hide or show a feature in the UI; they cannot disable or control access at the root level. So, depending on the level of control required, Spaces should be configured together with security roles so that fine-grained access can be defined. …
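Spaces can be created through Kibana's Spaces API (`POST /api/spaces/space`). The sketch below builds the request payload for a “Developer” space that hides Advanced Settings; the space id and the disabled feature ids are assumptions, and valid feature ids vary by Kibana version.

```python
# Sketch: request body for Kibana's Spaces API (POST /api/spaces/space).
# Space id/name and the disabled feature ids are illustrative; valid
# feature ids depend on your Kibana version.

def space_payload(space_id, name, disabled_features):
    """Return a Kibana Spaces API request body as a dict."""
    return {
        "id": space_id,
        "name": name,
        "description": f"{name} space",
        "disabledFeatures": disabled_features,
    }

developer_space = space_payload(
    "developer",
    "Developer",
    disabled_features=["advancedSettings"],  # hide Advanced Settings here
)
```

Remember that this only hides the feature in the UI; pairing the space with a security role restricted to that space is what actually enforces access.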
Microservices architecture brings a different set of challenges when it comes to requirements like logging. A monolith is deployed as a single application, which makes implementing logging simpler. With microservices, there can be any number of communicating pieces, built on different technology stacks, with different functionality, different hosting platforms, and so on.
We need a unified log format which can collect logs into central storage like Elasticsearch and make it easier for users to query logs in a structured manner. …
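As a sketch of such a unified format, the formatter below (Python stdlib only) emits each log record as one JSON document carrying the fields every service would share, so documents land in Elasticsearch in the same shape no matter which service produced them. The field names are assumptions for illustration.

```python
import io
import json
import logging

# Sketch: a unified JSON log format every microservice can emit.
# Field names (timestamp/service/level/message) are illustrative.

class JsonFormatter(logging.Formatter):
    def __init__(self, service):
        super().__init__()
        self.service = service

    def format(self, record):
        return json.dumps({
            "timestamp": self.formatTime(record),
            "service": self.service,  # which microservice emitted this
            "level": record.levelname,
            "message": record.getMessage(),
        })

# Wire the formatter to a handler (here an in-memory stream for demo;
# in practice stdout, picked up by Filebeat or a container log driver).
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(JsonFormatter(service="orders"))
logger = logging.getLogger("orders")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("order created")
doc = json.loads(stream.getvalue())
```

Services in other stacks would emit the same field names through their own logging libraries, which is what makes cross-service queries in Elasticsearch straightforward.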