
Context

I'm trying to deploy a containerised API service to an EKS Fargate cluster and have it serve requests from external internet addresses, as an over-engineered POC/learning experience. I'm running into issues understanding how to direct network traffic to the Fargate service. My main guidance for the project is these two tutorials:

But my inexperience is showing when it comes to deciding whether I should try to set up an API Gateway, an Application Load Balancer, an ALB Ingress Controller, or the AWS Load Balancer Controller; my furthest attempts so far have been with the latter. I have most of the infrastructure and EKS/Fargate components working using Terraform and '.yaml' manifests applied with kubectl.

Setup

I have the following deployed successfully with a single Terraform configuration:

  • A VPC with private and public subnets, and both NAT and VPN gateways enabled
  • An EKS cluster with the coredns, kube-proxy, and vpc-cni add-ons, and a single Fargate profile that selects my application namespace and places pods in the private subnets of the VPC.
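
For context, the Fargate profile portion of the Terraform looks roughly like the sketch below. Resource names, the namespace, and the subnet/role references are placeholders, not my exact config:

```hcl
# Sketch of the Fargate profile resource — names and references are illustrative.
resource "aws_eks_fargate_profile" "app" {
  cluster_name           = aws_eks_cluster.this.name
  fargate_profile_name   = "my-app-profile"
  pod_execution_role_arn = aws_iam_role.fargate_pod_execution.arn
  # Fargate profiles may only reference private subnets.
  subnet_ids             = module.vpc.private_subnets

  # Pods created in this namespace are scheduled onto Fargate.
  selector {
    namespace = "my-namespace"
  }
}
```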

I have the following '.yaml' configs, which are consistent across the different method attempts:

  • namespace.yaml (kind: Namespace)
  • deployment.yaml (kind: Deployment)
  • deploy_secret.yaml (kind: Secret)
  • service.yaml (kind: Service, spec.type: NodePort)

I'm happy to post the details of any of these files but this post is already really long.
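
For reference, service.yaml is a minimal NodePort Service along these lines (the name, selector labels, and target port are placeholders):

```yaml
# Minimal sketch of service.yaml — names, labels, and ports are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: service-myapp
  namespace: my-namespace
spec:
  type: NodePort
  selector:
    app: myapp          # must match the Deployment's pod labels
  ports:
    - protocol: TCP
      port: 80          # port the Service exposes
      targetPort: 8080  # port the container listens on
```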

The Issue

I'll only go into detail on trying to get the AWS Load Balancer Controller to work, since that was the method used in the EKS Workshop tutorial, but please let me know if I should definitely be using one of the other methods. I should also note that the issues occur with both my own application and the unmodified sample application.

After I deploy my infrastructure with Terraform and successfully set up and install the AWS LB Controller, I create my deployment and service with kubectl. The issue begins when I try to create the Ingress object(?); the config provided looks like:

---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  namespace: my-namespace
  name: ingress-myapp
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: service-myapp
              servicePort: 80

This returns an error saying error: error validating "ingress.yaml":. I've omitted the details because this part is solved in this question, which leads to this pull request. I've reconfigured the file to follow the changed config definition, which now looks like:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-myapp
  namespace: my-namespace
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  defaultBackend:
    service:
      name: service-myapp
      port:
        number: 80
  rules:
    - http:
        paths:
          - path: /*
            backend:
              service:
                name: service-myapp
                port:
                  number: 80
            pathType: ImplementationSpecific

This results in the error:

Error from server (InternalError): error when creating "ingress.yaml": Internal error occurred: failed calling webhook "vingress.elbv2.k8s.aws": Post "https://aws-load-balancer-webhook-service.kube-system.svc:443/validate-networking-v1-ingress?timeout=10s": context deadline exceeded

I followed along with this post describing a similar issue, so I made sure that all my references to the LBC use version 2.4.0, but that had no effect on the issue. I also looked into these two threads (GitHub issue and Reddit thread), but I'm struggling to understand what the solution in each case was.

What I'm inferring is that some part of the process is unable to reach an external address when validating the new Ingress object/config, but I'm not sure which component is trying to make that request, or whether it's a permissions or networking issue that's preventing it. Any guidance or hints are appreciated!

1 Answer


You need to add an AWS security group rule that allows traffic on port 443 (the port the webhook Service listens on) through to the port the controller pod behind it actually uses. (443 -> xx?)

Then attach this rule to the security group used by your cluster's nodes. Your request times out because newer EKS clusters no longer open these ports between the control plane and nodes by default, so the API server cannot reach the webhook.
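
If you manage the cluster with Terraform, the rule can be sketched as below. Note that the AWS Load Balancer Controller's webhook container listens on port 9443 by default (the kube-system Service forwards 443 to it); the security group references here are illustrative and assume the cluster security group is used for both the control plane and the Fargate pods:

```hcl
# Allow the EKS control plane to reach the LB Controller webhook pods.
# Port 9443 is the controller's default webhook port; the security group
# references are placeholders for your own cluster's groups.
resource "aws_security_group_rule" "lbc_webhook" {
  type                     = "ingress"
  from_port                = 9443
  to_port                  = 9443
  protocol                 = "tcp"
  security_group_id        = aws_eks_cluster.this.vpc_config[0].cluster_security_group_id
  source_security_group_id = aws_eks_cluster.this.vpc_config[0].cluster_security_group_id
}
```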
