eGov Logging Setup

This tutorial walks you through setting up logging in eGov.


  • Know about fluent-bit

  • Know about es-curator


  • DIGIT uses Golang (v1.13.3 required) automated scripts to deploy the builds onto Kubernetes on Linux, Windows, or Mac

  • kubectl is a CLI tool for connecting to the Kubernetes cluster from your machine

  • Install Visual Studio Code for better code/configuration editing capabilities

  • Git
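The prerequisite checks above can be scripted. A minimal sketch (only the tool names are taken from the list; the helper function is illustrative):

```shell
#!/bin/sh
# Check that the prerequisite tools from the list above are on PATH.

require_cmd() {
  # Print "found"/"missing" for a command and return 0/1 accordingly.
  if command -v "$1" >/dev/null 2>&1; then
    echo "found: $1"
  else
    echo "missing: $1" >&2
    return 1
  fi
}

for tool in git kubectl go; do
  require_cmd "$tool" || echo "please install $tool before continuing" >&2
done
```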

Logging Architecture:

Logging Deployment Steps:

  1. Clone the DIGIT-DevOps repo (if not already done as part of the infra setup). You may need to install Git first, then clone the repo to your machine:

    git clone -b release <DIGIT-DevOps repo URL>

  2. Set up kafka-v2-infra and the Elasticsearch infra in the existing cluster

  3. Deploy fluent-bit, kafka-connect-infra, and es-curator into your cluster, either using the Jenkins deployment jobs or the Golang deployer:

    go run main.go deploy -e <environment_name> 'fluent-bit,kafka-connect-infra,es-curator'

  4. Create the Elasticsearch sink connector. You can run the commands below in the playground pods; make sure curl is installed before running any curl commands.

    1. Delete the Kafka infra sink connector if it already exists, using the commands below.

      1. Check the existing Kafka infra sink connectors:

        curl http://kafka-connect-infra.kafka-cluster:8083/connectors/

      2. Delete the connector:

        curl -X DELETE http://kafka-connect-infra.kafka-cluster:8083/connectors/egov-services-logs-to-es

    2. The Kafka Connect Elasticsearch sink connector moves data from kafka-v2-infra to the Elasticsearch infra: it reads from a topic in kafka-v2-infra and writes to an index in Elasticsearch. Set connection.url to your Elasticsearch infra endpoint before running:

      curl -X POST http://kafka-connect-infra.kafka-cluster:8083/connectors/ \
        -H 'Content-Type: application/json' \
        -d '{
        "name": "egov-services-logs-to-es",
        "config": {
          "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
          "connection.url": "",
          "": "general",
          "topics": "egov-services-logs",
          "key.ignore": "true",
          "schema.ignore": true,
          "value.converter.schemas.enable": false,
          "key.converter": "",
          "value.converter": "org.apache.kafka.connect.json.JsonConverter",
          "transforms": "TopicNameRouter",
          "transforms.TopicNameRouter.type": "org.apache.kafka.connect.transforms.RegexRouter",
          "transforms.TopicNameRouter.regex": ".*",
          "transforms.TopicNameRouter.replacement": "egov-services-logs",
          "batch.size": 50,
          "max.buffered.records": 500,
          "": 600000,
          "": 5000,
          "": 10000,
          "": 100,
          "": 2,
          "errors.log.enable": true,
          "": "egov-services-logs-to-es-failed",
          "tasks.max": 1
        }
      }'

    3. Verify the sink connector with the below command:

      curl http://kafka-connect-infra.kafka-cluster:8083/connectors/

  5. Deploy kibana-infra to query the egov-services-logs indexes in the Elasticsearch infra:

    go run main.go deploy -e <environment_name> 'kibana-infra'

  6. You can access the logs at https://<sub-domain_name>/kibana-infra
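The Kafka Connect REST calls in step 4 can be collected into a small helper. A minimal sketch, assuming the same in-cluster service address used in the commands above (override CONNECT_URL if yours differs):

```shell
#!/bin/sh
# Helpers around the Kafka Connect REST API calls used in step 4.
# CONNECT_URL defaults to the in-cluster address from the commands above.
CONNECT_URL="${CONNECT_URL:-http://kafka-connect-infra.kafka-cluster:8083}"

# List all registered connectors.
list_connectors() {
  curl -s "$CONNECT_URL/connectors/"
}

# Delete a connector by name (e.g. egov-services-logs-to-es).
delete_connector() {
  curl -s -X DELETE "$CONNECT_URL/connectors/$1"
}

# Build the status endpoint URL for a connector; /status is a standard
# Kafka Connect REST path, useful when a connector exists but data is not flowing.
connector_status_url() {
  echo "$CONNECT_URL/connectors/$1/status"
}
```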


Troubleshooting:

  1. If data is not arriving in the Elasticsearch infra's egov-services-logs index from the kafka-v2-infra topic egov-services-logs:

    1. Ensure that the Elasticsearch sink connector exists, using the below command:

      curl http://kafka-connect-infra.kafka-cluster:8083/connectors/

    2. Also, make sure kafka-connect-infra is running without errors

      kubectl logs -f deployments/kafka-connect-infra -n kafka-cluster

    3. Ensure elasticsearch infra is running without errors

  2. If none of the above services show issues, check the fluent-bit logs and restart fluent-bit if necessary.
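The fluent-bit restart can be scripted. A minimal sketch, assuming fluent-bit runs as a DaemonSet named fluent-bit; the resource name, namespace, and label selector are assumptions, so confirm them in your cluster (e.g. with kubectl get daemonset -A) first:

```shell
#!/bin/sh
# Build the fluent-bit restart command so it can be reviewed before running.
# The DaemonSet name is an assumption; confirm it in your cluster.
fluent_bit_restart_cmd() {
  echo "kubectl rollout restart daemonset/fluent-bit -n $1"
}

# Tail recent fluent-bit logs first to confirm a restart is actually needed
# (requires cluster access; the app=fluent-bit label selector is an assumption).
check_fluent_bit() {
  kubectl logs -l app=fluent-bit -n "$1" --tail=100
}

# Execute the restart (requires cluster access).
restart_fluent_bit() {
  eval "$(fluent_bit_restart_cmd "$1")"
}
```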


All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.