
I've got a Terraform deployment that deploys a Docker image into ECS Fargate. It attaches an EFS volume to the container. When I SSH into the container, I see the volume mounted, but I am unable to write to it. All of the POSIX permissions seem right.

Here's an illustration of the problem:

$ ssh -i ~/.ssh/_inletchef/ifsudo.pem [email protected] 
Last login: Fri Apr 14 21:45:27 2023 from ip-10-0-3-140.us-west-1.compute.internal
-bash-4.2$ sudo -i
-bash-4.2# mount | grep data
127.0.0.1:/ on /mnt/data type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,noresvport,proto=tcp,port=20403,timeo=600,retrans=2,sec=sys,clientaddr=127.0.0.1,local_lock=none,addr=127.0.0.1)
-bash-4.2# ls -ld /mnt/data
drwxr-xr-x 2 root root 6144 Apr 14 02:33 /mnt/data
-bash-4.2# touch /mnt/data/xxxx
touch: cannot touch ‘/mnt/data/xxxx’: Permission denied

Here, I SSH into the container, become root, confirm the volume is mounted, show the permissions on the mount directory, and attempt to write a new file into that directory, which fails.

I am running as root, root owns the mount point, and the mount point is writable by its owner. So the POSIX permissions suggest the write should be allowed.

Because of the above, I'm guessing that my EFS volume is not writable for some reason that is independent of the POSIX owner and permission flags. But I can't figure out how my configuration causes the volume to be read-only.

Can anyone tell me why I am unable to write to my EFS volume given this setup?

Here are the relevant bits of my Terraform setup:

resource "aws_efs_file_system" "main" {
  count = length(var.efs_mount_point) == 0 ? 0 : 1
  tags = {
    Name = "ecs-efs-fs-${var.instance_name}"
  }
}

resource "aws_efs_mount_target" "main" {
  count          = length(var.efs_mount_point) == 0 ? 0 : length(module.aws_account.private_subnets.ids)
  file_system_id = aws_efs_file_system.main[0].id
  subnet_id      = element(module.aws_account.private_subnets.ids, count.index)
}

resource "aws_efs_access_point" "main" {
  count          = length(var.efs_mount_point) == 0 ? 0 : 1
  file_system_id = aws_efs_file_system.main[0].id
  posix_user {
    uid = 1000
    gid = 1000
  }
  root_directory {
    path = "/"
    creation_info {
      owner_uid   = 1000
      owner_gid   = 1000
      permissions = "777"
    }
  }
}

resource "aws_ecs_task_definition" "main" {
  family                   = var.instance_name
  network_mode             = "awsvpc"
  requires_compatibilities = ["FARGATE"]

  ...

  dynamic "volume" {
    # A dynamic block requires a for_each; assuming the same
    # efs_mount_point condition used by the other resources.
    for_each = length(var.efs_mount_point) == 0 ? [] : [1]
    content {
      name = "efs_volume"
      efs_volume_configuration {
        file_system_id     = aws_efs_file_system.main[0].id
        transit_encryption = "ENABLED"
        authorization_config {
          access_point_id = aws_efs_access_point.main[0].id
          iam             = "ENABLED"
        }
      }
    }
  }

  container_definitions = jsonencode([
    {
        ...

        "mountPoints": [
          {
            "containerPath": "${var.efs_mount_point}",
            "sourceVolume": "efs_volume",
            "readOnly": false
          }
        ],

        ...
   }
  ])
}
  • Did you figure it out? Having the same issue and my terraform code is almost the same.
    – the-lay
    Commented Sep 8, 2023 at 19:31
  • Ah - I've tried to set the path of the access point's root_directory to something other than root, and now it works!
    – the-lay
    Commented Sep 8, 2023 at 19:48
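
Following the comment above, a sketch of the workaround: point the access point's root_directory at a subdirectory instead of "/". EFS only applies creation_info when the directory does not yet exist, and "/" always exists, so with path = "/" the creation_info owner and permissions are never applied. The subdirectory name "/data" below is an assumption; substitute your own.

```hcl
resource "aws_efs_access_point" "main" {
  count          = length(var.efs_mount_point) == 0 ? 0 : 1
  file_system_id = aws_efs_file_system.main[0].id
  posix_user {
    uid = 1000
    gid = 1000
  }
  root_directory {
    # "/" always exists, so creation_info is never applied there and the
    # root stays root:root 755. A not-yet-existing subdirectory ("/data"
    # is an assumed name) is created with the owner/permissions below.
    path = "/data"
    creation_info {
      owner_uid   = 1000
      owner_gid   = 1000
      permissions = "777"
    }
  }
}
```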

