
I am using the EKS Terraform module (https://github.com/terraform-aws-modules/terraform-aws-eks) to create an EKS cluster, and the cluster itself works fine. I have written a shell script to harden the EKS worker nodes and want to run it during worker node bootstrapping. I tried the `pre_bootstrap_user_data` option available in the EKS module, but the worker node(s) failed to join the cluster. Here is the node group definition (a sketch of how it is passed to the module follows the block):

```
platform = {
  ami_id                     = "ami-xxxxxxxxxxxx"
  enable_bootstrap_user_data = true
  pre_bootstrap_user_data    = <<-EOT
    #!/bin/bash
    echo "tmpfs /tmp tmpfs defaults,rw,nosuid,nodev,noexec,relatime 0 0" >> /etc/fstab
    mount -a
    yum remove cronie -y
    echo "umask 027" >> /etc/bashrc
    echo "umask 027" >> /etc/profile
    echo "umask 027" >> /etc/profile.d/which2.sh
    echo "umask 027" >> /etc/profile.d/less.sh
    echo "umask 027" >> /etc/profile.d/lang.sh
    echo "umask 027" >> /etc/profile.d/colorls.sh
    echo "umask 027" >> /etc/profile.d/colorgrep.sh
    echo "umask 027" >> /etc/profile.d/256term.sh
    systemctl disable nfs
    systemctl stop nfs
    systemctl mask nfs
    systemctl disable nfs-server
    systemctl stop nfs-server
    systemctl mask nfs-server
    systemctl disable rpcbind
    systemctl stop rpcbind
    systemctl mask rpcbind
  EOT
  instance_types = ["t2.micro"]
  min_size       = 1
  max_size       = 1
  desired_size   = 1
  capacity_type  = "ON_DEMAND"
  labels = {
    env = "Dev"
  }
  block_device_mappings = {
    xvda = {
      device_name = "/dev/xvda"
      ebs = {
        name        = "data"
        volume_size = 200
        volume_type = "gp3"
        tags = {
          "Environment" = "Dev"
        }
      }
    }
  }
  tags = {
    "Environment" = "Dev"
  }
}
```
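
For context, the node group above is one entry in `eks_managed_node_groups` in the module call. A minimal sketch of that wiring follows; the module version, cluster name, Kubernetes version, and networking variables are placeholders rather than my real values:

```
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 19.0" # placeholder version

  cluster_name    = "dev-cluster" # placeholder
  cluster_version = "1.27"        # placeholder

  vpc_id     = var.vpc_id     # placeholder
  subnet_ids = var.subnet_ids # placeholder

  eks_managed_node_groups = {
    # The "platform" node group definition shown above goes here.
    platform = {
      # ...
    }
  }
}
```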


Only the first couple of lines of the script are executed; when I check the node, kubelet is in a stopped state, and the node group eventually fails with an error that the nodes did not join the cluster. I have also tried a MIME multi-part file:

```
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="==MYBOUNDARY=="

--==MYBOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"

#!/bin/bash
echo "my hardening script"
...
...
--==MYBOUNDARY==--
```

Same issue; it does not work as expected. I have also tried a custom `tpl` file (templates/linux_user_data.tpl in the module tree), but no luck. A sketch of what I was aiming for with the template approach is below.
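
For reference, this is roughly how I understand a custom template is meant to be wired through the node group instead of editing the file inside the module tree. It is only a sketch: I am assuming the node group forwards a `user_data_template_path` argument to the module's `_user_data` submodule (the argument name is taken from recent module versions, and the path is a placeholder):

```
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 19.0" # placeholder version

  # ... cluster settings as above ...

  eks_managed_node_groups = {
    platform = {
      ami_id                     = "ami-xxxxxxxxxxxx"
      enable_bootstrap_user_data = true

      # Placeholder path to a local copy of templates/linux_user_data.tpl
      user_data_template_path = "${path.module}/templates/linux_user_data.tpl"
    }
  }
}
```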



I have tried the pre-bootstrap user data in different combinations: the MIME multi-part file, the TPL file, and a separate user_data module:

```
module "user_data" {
  source = "../_user_data"

  create   = var.create
  platform = var.platform

  cluster_name        = var.cluster_name
  cluster_endpoint    = var.cluster_endpoint
  cluster_auth_base64 = var.cluster_auth_base64

  cluster_service_ipv4_cidr = var.cluster_service_ipv4_cidr

  enable_bootstrap_user_data = var.enable_bootstrap_user_data
  pre_bootstrap_user_data    = var.pre_bootstrap_user_data
}
```

but still no luck.
