Setup Central Instance Infra

Overview

This page provides the step-by-step process for setting up the central-instance infra.

Pre-reads

Pre-requisites

  1. An AWS account with admin access to provision the EKS service. You can always sign up for a free AWS account to learn the basics and try things out, but the free tier is limited; for this demo you need a commercial subscription to the EKS service.

  2. Install Terraform (version 0.14.10) for the Infra-as-Code (IaC) that provisions the cloud resources as code with the desired resource graph; it also lets you destroy the cluster in one go.

  3. Install kubectl on your local machine so that you can interact with the Kubernetes cluster.

  4. Install Helm, which helps you package the services along with their configurations, environments, secrets, etc. into Kubernetes manifests.

  5. Install the AWS CLI on your local machine so that you can use its commands to provision and manage the cloud resources in your account.

  6. Install the AWS IAM Authenticator, which authenticates the connection from your local machine so that you can deploy DIGIT services.

  7. Use the AWS IAM user credentials provided for Terraform (Infra-as-Code) to connect to your AWS account and provision the cloud resources.

    • You'll get a Secret Access Key and Access Key ID. Save them safely.

    • Open the terminal and run the following command once the AWS CLI is installed, so that the credentials are saved as a named profile. (Provide the credentials when prompted; the region and output format can be left as shown below or blank.)

      aws configure --profile central-instance-account 
      
      AWS Access Key ID []:<Your access key>
      AWS Secret Access Key []:<Your secret key>
      Default region name []: ap-south-1
      Default output format []: text

      The above creates the following profile on your machine in ~/.aws/credentials. You can verify the tools and credentials with the commands shown after this list.

      [central-instance-account] 
      aws_access_key_id=*********** 
      aws_secret_access_key=****************************
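
Before moving on, it helps to confirm that the CLI tools above are installed and that the saved profile can authenticate. The commands below are a minimal verification sketch; the profile name central-instance-account matches the aws configure example above, so adjust it if you chose a different name.

# Check that the required CLI tools are on the PATH
terraform version              # expect 0.14.10
kubectl version --client
helm version
aws --version
aws-iam-authenticator version

# Confirm the saved profile can authenticate against your AWS account
aws sts get-caller-identity --profile central-instance-account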

Before we provision the cloud resources, we need to understand and be sure about what resources Terraform needs to provision to deploy DIGIT. The following picture shows the key components (EKS, worker nodes, Postgres DB, EBS volumes, load balancer).

The following are the resources that we are going to provision using Terraform in a standard way, so that every environment gets the same infra every time.

  • EKS Control Plane (Kubernetes Master)

  • Worker node group (VMs with the estimated number of vCPUs and memory)

  • Node-Groups

  • EBS Volumes (persistent volumes)

  • RDS (Postgresql)

  • VPCs (private network)

  • Users for access: deploy and read-only

Provisioning Central Instance Infra Using Terraform

Fork the DIGIT-DevOps repository into your organization account using the GitHub web portal and make sure to add the right users to the repository. Clone the forked DIGIT-DevOps repository and navigate to the sample-central-instance directory, which contains the sample AWS infra provisioning script.

git clone --branch release https://github.com/egovernments/DIGIT-DevOps.git

cd DIGIT-DevOps/infra-as-code/terraform/sample-central-instance/remote-state
main.tf
provider "aws" {
  region = "ap-south-1"
}

resource "aws_s3_bucket" "terraform_state" {
  bucket = "central-instance-test-terraform-state"     // Replace bucket name

  versioning {
    enabled = true
  }

  lifecycle {
    prevent_destroy = true
  }
}

resource "aws_dynamodb_table" "terraform_state_lock" {
  name           = "central-instance-test-terraform-state"   // Replace table name
  read_capacity  = 1
  write_capacity = 1
  hash_key       = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}
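
With the remote-state definition in place, the S3 bucket and DynamoDB lock table have to exist before the main configuration can use them as a backend. A minimal sketch of the usual Terraform workflow, assuming you are still inside the remote-state directory from the cd above:

terraform init     # downloads the AWS provider
terraform plan     # review the S3 bucket and DynamoDB table that will be created
terraform apply    # create the remote-state resources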
cd DIGIT-DevOps/infra-as-code/terraform/sample-central-instance/
main.tf
terraform {
  backend "s3" {
    bucket = "central-instance-test-terraform-state"  //Replace bucket name
    key = "terraform"
    region = "ap-south-1"
  }
}

module "network" {
  source             = "../modules/kubernetes/aws/network"
  vpc_cidr_block     = "${var.vpc_cidr_block}"
  cluster_name       = "${var.cluster_name}"
  availability_zones = "${var.network_availability_zones}"
}

module "db" {
  source                        = "../modules/db/aws"
  subnet_ids                    = "${module.network.private_subnets}"
  vpc_security_group_ids        = ["${module.network.rds_db_sg_id}"]
  availability_zone             = "${element(var.availability_zones, 0)}"
  instance_class                = "db.t3.medium"             //Replace DB instance class according to your environments
  engine_version                = "11.15"                
  storage_type                  = "gp2"
  storage_gb                    = "100"              
  backup_retention_days         = "7"
  administrator_login           = "admin"                   
  administrator_login_password  = "${var.db_password}"
  identifier                    = "${var.cluster_name}-db"
  db_name                       = "${var.db_name}"
  environment                   = "${var.cluster_name}"
}


data "aws_eks_cluster" "cluster" {
  name = "${module.eks.cluster_id}"
}

data "aws_eks_cluster_auth" "cluster" {
  name = "${module.eks.cluster_id}"
}
provider "kubernetes" {
  host                   = "${data.aws_eks_cluster.cluster.endpoint}"
  cluster_ca_certificate = "${base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)}"
  token                  = "${data.aws_eks_cluster_auth.cluster.token}"
  load_config_file       = false
  version                = "~> 1.11"
}

module "eks" {
  source          = "terraform-aws-modules/eks/aws"
  version         = "17.24.0"
  cluster_name    = "${var.cluster_name}"
  cluster_version = "${var.kubernetes_version}"
  subnets         = "${concat(module.network.private_subnets, module.network.public_subnets)}"

  tags = "${
    map(
      "kubernetes.io/cluster/${var.cluster_name}", "owned",
      "KubernetesCluster", "${var.cluster_name}"
    )
  }"

  vpc_id = "${module.network.vpc_id}"

  worker_groups_launch_template = [
    {
      name                    = "spot"
      subnets                 = "${concat(slice(module.network.private_subnets, 0, length(var.availability_zones)), slice(module.network.public_subnets, 0, length(var.availability_zones)))}"
      override_instance_types = "${var.override_instance_types}"
      asg_max_size            = 1
      asg_desired_capacity    = 1
      kubelet_extra_args      = "--node-labels=node.kubernetes.io/lifecycle=spot"
      spot_allocation_strategy= "capacity-optimized"
      spot_instance_pools     = null
    },
  ]
  
}

module "es-master" {

  source = "../modules/storage/aws"
  storage_count = 3
  environment = "${var.cluster_name}"
  disk_prefix = "es-master"
  availability_zones = "${var.availability_zones}"
  storage_sku = "gp2"
  disk_size_gb = "2"
  
}
module "es-data-v1" {

  source = "../modules/storage/aws"
  storage_count = 3
  environment = "${var.cluster_name}"
  disk_prefix = "es-data-v1"
  availability_zones = "${var.availability_zones}"
  storage_sku = "gp2"
  disk_size_gb = "25"
  
}

module "zookeeper" {

  source = "../modules/storage/aws"
  storage_count = 3
  environment = "${var.cluster_name}"
  disk_prefix = "zookeeper"
  availability_zones = "${var.availability_zones}"
  storage_sku = "gp2"
  disk_size_gb = "2"
  
}

module "kafka" {

  source = "../modules/storage/aws"
  storage_count = 3
  environment = "${var.cluster_name}"
  disk_prefix = "kafka"
  availability_zones = "${var.availability_zones}"
  storage_sku = "gp2"
  disk_size_gb = "50"
  
}

data "aws_security_group" "node_sg" {
 tags = {
    Name = "${var.cluster_name}-eks_worker_sg"
  }
  depends_on = [
   module.eks
  ]
}
  
module "node-group" {  
  for_each = toset(["digit", "urban", "sanitation", "ifix", "mgramseva"])  // Replace/Add node groups
  source = "../modules/node-pool/aws"

  cluster_name        = "${var.cluster_name}"
  node_group_name     = "${each.key}-ng"
  kubernetes_version  = "${var.kubernetes_version}"
  security_groups     =  ["${module.network.worker_nodes_sg_id}", "${data.aws_security_group.node_sg.id}"]
  subnet              = "${concat(slice(module.network.private_subnets, 0, length(var.node_pool_zone)))}"
  node_group_max_size = 1
  node_group_desired_size = 1
  depends_on = [
    module.network,
    module.eks
  ]
}  


variables.tf
#
# Variables Configuration
#

variable "cluster_name" {
  description = "Name of the Kubernetes cluster"
  default = "central-instance-test"  //Replace
}

variable "vpc_cidr_block" {
  default = "192.172.32.0/19"
}

variable "network_availability_zones" {
  description = "Configure availability zones configuration for VPC. Leave as default for India. Recommendation is to have subnets in at least two availability zones"
  default = ["ap-south-1a", "ap-south-1b"]  // Replace if needed 
}

variable "availability_zones" {
  description = "Amazon EKS runs and scales the Kubernetes control plane across multiple AWS Availability Zones to ensure high availability. Specify a comma separated list to have a cluster spanning multiple zones. Note that this will have cost implications"
  default = ["ap-south-1a"]  #REPLACE IF NEEDED
}

variable "node_pool_zone" {
 description = "Should be same as availability_zones"
 default = ["ap-south-1a"] #REPLACE IF NEEDED
}

variable "kubernetes_version" {
  default = "1.20"  #REPLACE IF NEEDED
}

variable "instance_type" {
  default = "m4.xlarge"
}

variable "override_instance_types" {
  default = ["r5a.large", "r5ad.large", "r5d.large", "m4.xlarge"]
  
}

variable "number_of_worker_nodes" {
  default = "1"
}

variable "ssh_key_name" {
  default = "central-instance"
}

variable "db_name" {
  description = "RDS DB name. Make sure there are no hyphens or other special characters in the DB name. Else, DB creation will fail"
  default = "test_db" #REPLACE
}

variable "db_password" {}
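
Because db_password has no default, Terraform prompts for it on every run. A common alternative (a sketch, not something the scripts require) is to export it once as a TF_VAR_ environment variable before running the commands below from the sample-central-instance directory:

export TF_VAR_db_password='<your-db-password>'   # picked up automatically by Terraform as var.db_password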
terraform init
terraform plan
terraform apply
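
Once the apply completes, you can point kubectl at the new cluster and confirm that the worker nodes have joined. This is a minimal sketch; central-instance-test is the default cluster_name from variables.tf, so replace it with whatever name you configured.

aws eks update-kubeconfig --name central-instance-test --region ap-south-1 --profile central-instance-account
kubectl get nodes   # all nodes should eventually report STATUS Ready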


All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.