
I have a web application offloading some resource-intensive tasks to AWS Batch, backed by Fargate.

It's a very simple setup: a single queue, a single job definition, and a single compute environment. The compute environment has Maximum vCPUs set to 128 and no Minimum vCPUs.
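
For context, the compute environment was created along these lines (the name, subnet, and security group below are placeholders):

aws batch create-compute-environment \
  --compute-environment-name "staging-fargate" \
  --type MANAGED \
  --compute-resources type=FARGATE,maxvCpus=128,subnets=subnet-0abc123,securityGroupIds=sg-0abc123

(Fargate compute environments don't take a minvCpus setting, hence no minimum.)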

The job complexity can be estimated upfront, so when the job is submitted, the application overrides the resourceRequirements parameters to match the estimated compute needs. An equivalent CLI command would be:

aws batch submit-job \
  --job-name "test-job-1" \
  --job-definition "staging-batch" \
  --job-queue "staging-normal-priority" \
  --container-overrides 'resourceRequirements=[{type=VCPU,value=16},{type=MEMORY,value=32768}]'

When I look at the job's container details in the web console, I can see the requested values (vCPUs: 16.0, Memory: 32768), so I assume the command syntax is correct.
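
The same requested values can also be read back via the CLI (the job ID below is a placeholder):

aws batch describe-jobs \
  --jobs "11111111-2222-3333-4444-555555555555" \
  --query 'jobs[0].container.resourceRequirements'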

However, if I modify the job's command to curl ${ECS_CONTAINER_METADATA_URI_V4}, the response consistently contains "Limits"=>{"CPU"=>2}. Since 1 CPU = 2 vCPUs, I'd expect this to be "CPU"=>8. The "Applied account-level quota value" for "Fargate On-Demand vCPU resource count" is 4000 (the AWS default is 6), so there should be no quota-induced limiting.
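
For reference, this is roughly how I'm reading the limits from inside the job (assuming curl and jq are available in the image):

# Query the ECS container metadata v4 endpoint; Fargate injects the URI env var.
curl -s "${ECS_CONTAINER_METADATA_URI_V4}" | jq '.Limits'
# => consistently {"CPU": 2}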

Is there a way to get the desired number of vCPUs on Fargate, or does this mean AWS doesn't have enough on-demand capacity to allocate and is therefore consistently giving me 2 CPUs?

Edit: If I run Ruby's Etc.nprocessors (based on sysconf(_SC_NPROCESSORS_ONLN) / sched_getaffinity), I get 16, so I guess the 2-CPU limit the Fargate box reports is a red herring. The question then becomes: why is it reporting a limit of 2?
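
The equivalent check as a one-liner from the job's shell (assuming Ruby is on the image):

ruby -retc -e 'puts Etc.nprocessors'    # prints 16 in the 16-vCPU task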
