Method: Aws::Glue::Types::StartJobRunRequest#max_capacity

Defined in:
lib/aws-sdk-glue/types.rb

#max_capacity ⇒ Float

For Glue version 1.0 or earlier jobs, using the standard worker type, the number of Glue data processing units (DPUs) that can be allocated when this job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the [Glue pricing page][1].

For Glue version 2.0+ jobs, you cannot specify a `Maximum capacity`. Instead, you should specify a `Worker type` and the `Number of workers`.

Do not set `MaxCapacity` if using `WorkerType` and `NumberOfWorkers`.

The value that can be allocated for `MaxCapacity` depends on whether you are running a Python shell job, an Apache Spark ETL job, or an Apache Spark streaming ETL job:

  • When you specify a Python shell job (`JobCommand.Name`="pythonshell"), you can allocate either 0.0625 or 1 DPU. The default is 0.0625 DPU.

  • When you specify an Apache Spark ETL job (`JobCommand.Name`="glueetl") or Apache Spark streaming ETL job (`JobCommand.Name`="gluestreaming"), you can allocate from 2 to 100 DPUs. The default is 10 DPUs. This job type cannot have a fractional DPU allocation.

[1]: https://aws.amazon.com/glue/pricing/

Returns:

  • (Float)
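
A minimal usage sketch follows, showing how this attribute might be passed to Aws::Glue::Client#start_job_run. The region and job name are placeholder assumptions, not part of this API reference; the first call sizes a Glue 1.0 Spark ETL run with max_capacity, while the second shows the Glue 2.0+ style that uses worker_type and number_of_workers instead of max_capacity.

require "aws-sdk-glue"

# Placeholder region and job name; substitute your own values.
glue = Aws::Glue::Client.new(region: "us-east-1")

# Glue 1.0 (or earlier) Spark ETL job: allocate a whole number of DPUs.
glue.start_job_run(
  job_name: "my-etl-job",
  max_capacity: 10.0
)

# Glue 2.0+ job: omit max_capacity and size the run with worker_type
# and number_of_workers instead.
glue.start_job_run(
  job_name: "my-etl-job",
  worker_type: "G.1X",
  number_of_workers: 10
)
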


# File 'lib/aws-sdk-glue/types.rb', line 25810

class StartJobRunRequest < Struct.new(
  :job_name,
  :job_run_queuing_enabled,
  :job_run_id,
  :arguments,
  :allocated_capacity,
  :timeout,
  :max_capacity,
  :security_configuration,
  :notification_property,
  :worker_type,
  :number_of_workers,
  :execution_class,
  :execution_role_session_policy)
  SENSITIVE = []
  include Aws::Structure
end