Billable Cloud Services

Billable cloud services refer to the services associated with HERE Workspace layers and the HERE Workspace Pipeline, such as those involved in storing, transferring, and computing data. As a general rule, the more data you send and receive from the HERE Workspace and the more data you store, the more you will be charged.

Cloud services are separated into the following categories:

Storage

Blob Storage

Blob storage is disk storage used to store blob data (e.g. blobs generated by versioned and index layers, or by stream layers where message payloads exceed 1 Megabyte (MB)).

The following pertains to blob storage services:

  • Stream message payloads greater than 1 MB are stored in blob storage; if the configured TTL is less than one day, they are billed for one day. Expired data in blob storage is cleared once per day.
  • Blob storage cost is determined by recording the total size of all stored blobs once per day.
  • Blob storage is reported per unit in Gigabytes (GB) per hour and billed per month.
  • Monthly prices represent usage over 720 hours.
  • Calculation construct:
    • Sum of all the bytes of blob data.
  • Cost is prorated in a given billing period based on usage. For example, if a versioned layer is deleted halfway through the month, the monthly cost for blob storage associated with that layer will be prorated by 50%.
  • APIs which can incur blob storage costs include:
    • Ingest
    • Publish
    • Blob
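As an illustration, the blob storage construct above can be sketched in Python. The function name and the daily-snapshot representation are assumptions for illustration, not part of the HERE APIs:

```python
def blob_storage_gb_hours(daily_snapshot_bytes):
    """Blob storage usage in GB-hours.

    Each entry records the total size of all blobs, taken once per day;
    each daily reading is assumed to hold for the full 24 hours.
    """
    return sum(b / 1024 ** 3 * 24 for b in daily_snapshot_bytes)

# A layer holding a constant 10 GB for a 30-day (720-hour) month:
full_month = blob_storage_gb_hours([10 * 1024 ** 3] * 30)   # 7200 GB-hours

# Deleting the layer halfway through the month halves the usage (proration):
half_month = blob_storage_gb_hours([10 * 1024 ** 3] * 15)   # 3600 GB-hours
```

The proration example mirrors the bullet above: 15 days of snapshots yields exactly half the GB-hours of the full month.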

Volatile Storage

Volatile storage is allocated RAM storage for writes and reads of volatile layers.

The following pertains to volatile storage services:

  • Volatile storage cost is determined by recording the total size of the configured storage capacity, including redundancy.

Note

  • Volatile layers configured with "multi-instance" redundancy increase the storage capacity and cost by a factor of three (initial copy + 2 replicas). The configured capacity is three times the usable capacity, as data is redundantly stored three times to ensure data protection.
  • Volatile layers configured with "single-instance" redundancy do not increase storage capacity because data is not redundantly stored. Configured capacity and usable capacity are the same for "single-instance" layers.
  • Volatile storage is reported per unit in GB per hour and billed per month.
  • Monthly prices represent usage over 720 hours.
  • Calculation construct:
    • (Configured storage capacity in GB) * (Configured redundancy factor being 1 for "single-instance" or 3 for "multi-instance") = X volatile storage in GB.
  • Cost is prorated in a given billing period based on usage. For example, if a volatile layer is deleted halfway through the month, the monthly cost of the layer will be prorated by 50%.
  • APIs which can incur volatile storage costs include:
    • Config (based on the allocation of data storage capacity and redundancy)
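A minimal sketch of the volatile storage calculation above (the function name and dictionary are illustrative assumptions; the redundancy factors come from the note above):

```python
def volatile_storage_gb(configured_capacity_gb, redundancy):
    """Configured capacity times the redundancy factor:
    1 for "single-instance", 3 for "multi-instance" (initial copy + 2 replicas).
    """
    factors = {"single-instance": 1, "multi-instance": 3}
    return configured_capacity_gb * factors[redundancy]

volatile_storage_gb(8, "multi-instance")   # 8 GB usable -> 24 GB billed
volatile_storage_gb(8, "single-instance")  # 8 GB usable -> 8 GB billed
```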

Stream Storage

Stream storage is the allocated storage queue capacity for streaming data (consecutive messages written and read where individual message payloads are less than 1 MB).

The following pertains to stream storage services:

  • Cost is calculated by the “KB/s In” throughput configured when creating a stream layer.
  • Usage is measured as the maximum of your configured throughput.
  • Stream storage is reported per unit in GB per second per hour and billed per month.
  • Monthly prices represent usage over 720 hours.
  • Calculation construct:
    • (KB/s In) / 1024 (KB to MB) / 1024 (MB to GB) = X stream storage in GB per second.
  • Cost is prorated in a given billing period based on usage. For example, if a stream layer is deleted halfway through the month, the monthly cost of the stream layer will be prorated by 50%.
  • APIs which can incur stream storage costs include:
    • Config (based on “KB/s In”)
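The unit conversion above can be sketched as follows (function name assumed for illustration):

```python
def stream_storage_gb_per_s(kb_per_s_in):
    """Convert the configured "KB/s In" throughput to GB per second:
    KB -> MB -> GB, dividing by 1024 at each step."""
    return kb_per_s_in / 1024 / 1024

stream_storage_gb_per_s(2048)  # 2048 KB/s -> 0.001953125 GB/s
```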

Stream TTL Storage

Stream TTL storage is allocated disk storage calculated by the combination of the TTL data retention configuration (10 minutes up to 3 days) and the “KB/s In” throughput (volume) configuration set when creating a stream layer.

Note

Storage costs can be controlled by manipulating either of these values up or down.

The following pertains to stream TTL storage services:

  • Usage is measured as the maximum of your combined throughput and TTL configuration.
  • Stream TTL storage is reported per unit in GB per hour and billed per month.
  • Monthly prices represent usage over 720 hours.
  • Calculation construct:
    • (KB/s In) * (Configured TTL in minutes) * 60 seconds per minute / 1024 (KB to MB) / 1024 (MB to GB) = X stream TTL GB.
  • Costs are prorated in a given billing period based on usage. For example, if a stream layer with TTL is deleted halfway through the month, the monthly cost of the stream layer at that given TTL will be prorated by 50%.
  • APIs which can incur stream TTL storage costs per stream TTL configurations include:
    • Config (based on TTL and throughput configured per stream layer)
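The stream TTL construct above combines throughput and retention; a Python sketch (function name assumed for illustration):

```python
def stream_ttl_storage_gb(kb_per_s_in, ttl_minutes):
    """Disk needed to retain the configured throughput for the TTL window:
    (KB/s) * (TTL in minutes) * (60 s/min), then KB -> MB -> GB."""
    return kb_per_s_in * ttl_minutes * 60 / 1024 / 1024

# 1024 KB/s retained for 60 minutes:
stream_ttl_storage_gb(1024, 60)  # 3.515625 GB
```

As the note above says, either raising the TTL or raising "KB/s In" increases this value proportionally.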

Metadata Storage

Metadata storage is database storage used for metadata (e.g. layer titles, coverage, tags, descriptions, and index layer key/value pairs such as HERE Tile/Time) which supports the fast search and discovery of your data.

The following pertains to metadata storage services:

  • Usage is measured as the maximum bytes of stored records per hour, calculated as the sum of data stored in indexes and in persistent storage for metadata.
  • Metadata is accessed by the Metadata API as well as the Index API.
  • Metadata storage is reported per unit in GB per hour and billed per month.
  • Monthly prices represent usage over 720 hours.
  • Cost estimation construct (estimation of minimal metadata stored):
    • Record (partition) count * 256 (typical size per record in bytes) / (1024^3) (bytes to GB) = X metadata storage in GB.
  • Costs are prorated in a given billing period based on usage. For example, if you delete your catalog halfway through the month, the monthly cost of the metadata for that catalog will be prorated by 50%.
  • APIs which can incur metadata storage costs include:
    • Config
    • Metadata
    • Index
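The estimation construct above can be sketched as follows. The 256-byte typical record size is the estimate stated above; the function name and default parameter are assumptions for illustration:

```python
def metadata_storage_gb_estimate(record_count, bytes_per_record=256):
    """Minimal metadata estimate: record (partition) count times an
    assumed typical record size, converted bytes -> GB."""
    return record_count * bytes_per_record / 1024 ** 3

# A catalog with 4 million partitions:
metadata_storage_gb_estimate(4_000_000)  # ~0.95 GB
```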

Transfer

Data IO

Data IO refers to the data transfer incurred when storing and accessing data in HERE Workspace, both from within and from outside of the Workspace. Data IO is also incurred when ingesting data, publishing data, and transferring data between HERE Workspace components.

Note

An exception exists when data is written to or read from a stream layer using the direct connect protocol via the Data Client Library. In this specific case, no Data IO costs are incurred.

The following pertains to Data IO services:

  • Reported per unit in GB per hour and billed per month.
  • Monthly prices represent usage over 720 hours.
  • Calculation construct:
    • Write volume in GB + Read volume in GB = X Data IO GB.
  • APIs which can incur Data IO costs include:
    • Config
    • Ingest
    • Stream (when using http protocol)
    • Publish
    • Metadata
    • Blob
    • Volatile Blob
    • Index
    • Query
    • Notifications
    • API Lookup
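The Data IO calculation above is a straightforward sum of read and write volume; a minimal Python sketch (function name assumed for illustration):

```python
def data_io_gb(write_bytes, read_bytes):
    """Data IO: write volume + read volume, converted bytes -> GB."""
    return (write_bytes + read_bytes) / 1024 ** 3

# Writing 1 GB and reading it back once:
data_io_gb(1024 ** 3, 1024 ** 3)  # 2.0 GB of Data IO
```

Note that, per the exception above, stream reads and writes made over direct connect via the Data Client Library would not contribute to this sum.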

Log Search IO

Log search IO refers to the data transfer incurred when pipeline-generated log information is written and indexed for debugging purposes.

The following pertains to log search IO services:

  • Reported per unit in GB per hour and billed per month.
  • Monthly prices represent usage over 720 hours.
  • Calculation construct:
    • Total number of bytes written to the index.
  • Costs are not incurred via API requests but via optional developer code within a pipeline JAR file which writes to “std-out".

Pipeline IO

Pipeline IO refers to the data transfer incurred when a pipeline reads or writes data to or from the public internet. Pipeline IO also occurs when pipelines make requests over the internet to, for example, confirm the latest XML parsing protocols.

Note

Only access to destinations on port 443 is permitted.

The following pertains to Pipeline IO services:

  • Reported per unit in GB per hour and billed per month.
  • Monthly prices represent usage over 720 hours.
  • Measured at the TCP level.
  • Calculation construct:
    • Total number of bytes written to or read from pipelines over the internet.
  • Costs are not incurred via API requests but via optional developer code within a pipeline JAR file which connects to the public internet for read and write.

Compute

Compute Core

Compute core refers to CPU core hours used by pipelines and notebooks when processing data in HERE Workspace.

The following pertains to compute core services:

  • Reported and billed per unit in core hours per hour (at a resolution of seconds).
  • 1 compute unit = 1 core and 7 GB RAM
  • Calculation construct:
    • ((Worker Size, in compute units) * (Worker Count) + (Master Size, in compute units)) * # of hours run = pipeline core hours.
  • APIs which incur compute core costs include:
    • Pipelines
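The core-hours construct above, sketched in Python (function and parameter names are assumptions for illustration; 1 compute unit = 1 core, per the bullet above):

```python
def pipeline_core_hours(worker_size_units, worker_count,
                        master_size_units, hours_run):
    """(worker units * worker count + master units) * hours run.
    Each compute unit contributes 1 CPU core."""
    cluster_cores = worker_size_units * worker_count + master_size_units
    return cluster_cores * hours_run

# 4 workers of 2 units each plus a 1-unit master, running 3 hours:
pipeline_core_hours(2, 4, 1, 3)  # (2*4 + 1) * 3 = 27 core hours
```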

Compute RAM

Compute RAM refers to RAM hours used by pipelines and notebooks when processing data in HERE Workspace.

The following pertains to compute RAM services:

  • Reported and billed per unit in GB-hours per hour (at a resolution of seconds).
  • 1 compute unit = 1 core and 7 GB RAM
  • Calculation construct:
    • ((Worker Size, in compute units) * (Worker Count) + (Master Size, in compute units)) * 7 (GB RAM per compute unit) * # of hours run = pipeline RAM hours.
  • APIs which incur compute RAM costs include:
    • Pipelines
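The RAM-hours construct is the core-hours construct scaled by 7 GB RAM per compute unit; a Python sketch (names are assumptions for illustration):

```python
RAM_GB_PER_COMPUTE_UNIT = 7  # 1 compute unit = 1 core and 7 GB RAM

def pipeline_ram_hours(worker_size_units, worker_count,
                       master_size_units, hours_run):
    """Cluster compute units * 7 GB RAM per unit * hours run."""
    cluster_units = worker_size_units * worker_count + master_size_units
    return cluster_units * RAM_GB_PER_COMPUTE_UNIT * hours_run

# The same cluster as the core-hours example, running 3 hours:
pipeline_ram_hours(2, 4, 1, 3)  # (2*4 + 1) * 7 * 3 = 189 GB-hours
```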
