For a pipeline, you must choose the number of units to allocate resources for the Supervisor and Workers:
For each Supervisor or Worker (also called supervisor-units and worker-units), the following size limits apply:
A pipeline can contain:
HERE platform limits still apply. A pipeline can consume a maximum of 200 CPUs and 1.4 TB of RAM in total. In other words: Size of Supervisor + (Size of Each Worker × Number of Workers) ≤ 200 CPUs and 1.4 TB RAM. If you experience resource provisioning issues within these limits, contact HERE Support/Services.
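The limit check above can be sketched as a small helper. This is an illustrative example only; the function name, parameters, and example unit sizes are assumptions, not part of the HERE platform API.

```python
# Hypothetical helper to validate a pipeline's total resource request
# against the platform limits described above (200 CPUs, 1.4 TB RAM).
# The unit sizes used in the example call are illustrative, not official values.

MAX_CPU = 200       # platform-wide CPU limit per pipeline
MAX_RAM_GB = 1400   # platform-wide RAM limit per pipeline (1.4 TB)

def within_limits(supervisor_cpu, supervisor_ram_gb,
                  worker_cpu, worker_ram_gb, num_workers):
    """Check Supervisor + (Worker x Number of Workers) against both limits."""
    total_cpu = supervisor_cpu + worker_cpu * num_workers
    total_ram = supervisor_ram_gb + worker_ram_gb * num_workers
    return total_cpu <= MAX_CPU and total_ram <= MAX_RAM_GB

# Example: a Supervisor with 4 CPUs / 16 GB and 20 Workers with 8 CPUs / 32 GB each
# totals 164 CPUs and 656 GB, which is within the limits.
print(within_limits(4, 16, 8, 32, 20))
```

Both constraints must hold at once: a configuration can fail on RAM even when its CPU total is well under 200.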
For a Spark batch pipeline, choose the following units for resource allocation of the Supervisor (Spark Driver) and Workers (Spark Executors):