
HERE Workspace & Marketplace 2.19 release


Highlights

 

Manage framework-agnostic Machine Learning life-cycles on the HERE platform

An MLflow plugin is now available that allows data scientists to manage their Machine Learning (ML) life-cycle on the HERE platform. Users can manage their ML experiments and share them with other users of the platform, or make them available on the Marketplace. Data scientists can use any ML framework on any ML cloud platform for training and can choose to manage the ML artifacts on the platform. The MLflow plugin can be used while training the ML model, or users can upload an already trained model to the platform.

MLflow plugin features include:

  • Tracking experiments to record and compare parameters and results
  • Packaging ML code in a reusable, reproducible form to share with other data scientists
  • Providing a central model store to collaboratively manage the full life-cycle of an MLflow model, including model versioning, stage transitions, and annotations, through a catalog HRN
  • Building a Docker image locally to expose the model as a service for inference. Note that in this release, this feature is available only for testing the inference service locally

Refer to the MLflow plugin section of the HERE Data SDK for Python Setup Guide for installation instructions.
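For orientation, here is a minimal sketch of what tracking a run with standard MLflow calls looks like. The URI values below are illustrative assumptions, not the plugin's documented configuration; check the Setup Guide for the real values.

```python
import mlflow

# Assumption: the HERE plugin is installed and accepts a catalog HRN as the
# tracking/model-registry location. The exact URI scheme is illustrative only.
mlflow.set_tracking_uri("hrn:here:data::OrgID:my-ml-catalog")
mlflow.set_registry_uri("hrn:here:data::OrgID:my-ml-catalog")

with mlflow.start_run():
    # Standard MLflow tracking calls; with a plugin active, parameters,
    # metrics, and artifacts are routed to the configured backend instead
    # of the local ./mlruns directory.
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_metric("rmse", 0.42)
    mlflow.log_artifact("model.pkl")  # e.g. an already trained model file
```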

 

Easily serialize Location Library geometry objects to GeoJSON

The new FeatureCollection class in the Location Library lets you create GeoJSON files from your geometry objects: point markers, line strings, arrows to highlight direction, point markers and line strings representing point- and range-based attributes, and custom markers. GeoJSON is a standard format for geospatial data and can be displayed with many open-source and commercial tools. You can also publish the result in partitions of a versioned layer and inspect your data in the platform via the Data Inspector.
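The FeatureCollection API itself is part of the Location Library's Java/Scala interface. Purely to illustrate the target format, here is the general shape of a GeoJSON FeatureCollection built with plain Python dictionaries; the coordinates and properties are made-up examples.

```python
import json

# Illustrative GeoJSON FeatureCollection: one point marker and one line string.
feature_collection = {
    "type": "FeatureCollection",
    "features": [
        {
            "type": "Feature",
            "geometry": {"type": "Point", "coordinates": [13.38, 52.52]},
            "properties": {"description": "point marker"},
        },
        {
            "type": "Feature",
            "geometry": {
                "type": "LineString",
                "coordinates": [[13.38, 52.52], [13.40, 52.53]],
            },
            "properties": {"description": "line string, e.g. a range-based attribute"},
        },
    ],
}

# Any GeoJSON-aware tool (or a versioned-layer partition plus the Data
# Inspector) can render the resulting file.
print(json.dumps(feature_collection, indent=2))
```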

 

Set granular Stream layer throughput configurations to optimize performance and cost

Set more granular Stream layer throughput configurations so that you can better optimize your Stream layer storage for your use case and budget. When creating a Stream layer, you are only charged for the "In" throughput. With this release, you can create new Stream layers with "In" throughput in 100 KB/s increments. See more specific details about this change here.

Note: 

  • Going forward, to maintain backward compatibility with older Data Client Library versions, Stream throughput numbers are reported in MB/s and rounded down when you request layer configuration information using an older client. Example: if you use a Data Client Library version older than the one released with this HERE Workspace release, request the Stream layer throughput configuration, and have set your Stream layer "In" throughput to 100 KB/s, you will see a response of "0", as the sketch below illustrates. This response is explained in the HERE Workspace documentation here. Conversely, if you use the latest SDK version available with this release, you will see the accurate "In" throughput of 100 KB/s.
  • Take note of the corresponding deprecation announcement at the end of these release notes.
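A two-line sketch makes the rounding behavior concrete (assuming 1 MB/s is reported as 1000 KB/s):

```python
throughput_kbps = 100                  # Stream layer "In" throughput in KB/s
legacy_mbps = throughput_kbps // 1000  # older clients round down to whole MB/s
print(legacy_mbps)                     # prints 0, hence the "0" response above
```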

 

Improved control for editing of Grafana monitoring dashboards

To provide better control for editing monitoring dashboards and alerts, users in your Org can be granted the "edit monitoring dashboards" role by their Org Admin. Only Org Admins and users with this role have the right to make changes to the dashboards and view alerts.

 

Analyze Flink metrics to monitor your data archive stream processing

The Data Archiving Library now enables you to define and expose Flink processing metrics in Grafana dashboards so that you can monitor the health and activity of your data archive stream processing. Have a look at our documentation to learn which metrics are available.

 

Changes, Additions and Known Issues

 

SDKs and tools

Go to the HERE platform changelog to see details of all changes to our CLI, the Data SDKs for Python, TypeScript, C++, Java and Scala as well as to the Data Inspector Library.

 

Web & Portal

Issue: Pipeline Templates can't be deleted from the Portal UI.
Workaround: Use the CLI or API to delete Pipeline Templates.

Issue: In the Portal, new jobs and operations are not automatically added to the list of jobs and operations for a pipeline version while the list is open for viewing.
Workaround: Refresh the Jobs and Operations pages to see the latest job or operation in the list.

 

Projects & Access Management

Issue: A finite number of access tokens (~250) are available for each app or user. Depending on the number of resources included, this number may be smaller.
Workaround: Create a new app or user if you reach the limitation.

Issue: A finite number of permissions is allowed for each app or user in the system across all services. This limit is reduced depending on the resources included and the types of permissions granted.

Issue: All users and apps in a group are granted permissions to perform all actions on any pipeline associated with that group. There is no support for users or apps with limited permissions. For example, you cannot have a reduced role that can only view pipeline status, but not start and stop a pipeline.
Workaround: Limit the users in a pipeline's group to only those users who should have full control over the pipeline.

Issue: When updating permissions, it can take up to an hour for changes to take effect.

Issue: Projects and all resources in a Project are designed for use only in Workspace and are unavailable for use in Marketplace. For example, a catalog created in a platform Project can only be used in that Project. It cannot be marked as "Marketplace ready" and cannot be listed in the Marketplace.
Workaround: Do not create catalogs in a Project when they are intended for use in both Workspace and Marketplace.

 

Data

Added: You can upload data directly from your local machine onto a data layer using the "Upload data" button located in the data layer user interface.

Issue: The "Upload data" button is hidden when the layer has the "Content encoding" field set to "gzip".

Workaround: Files (including zip files) can still be uploaded and downloaded as long as the "Content encoding" field is set to "uncompressed".

Issue: The changes released with 2.9 (RoW) and 2.10 (China) to add OrgID to Catalog HRNs, and with 2.10 (Global) to add OrgID to Schema HRNs, can impact any use case (CI/CD or other) where comparisons are performed between HRNs used by various workflow dependencies. For example, a request that compares the HRNs a pipeline is using against those a group, user or app has permissions for will result in errors if the comparison expects results to match the old HRN construct. With this change, Data APIs return only the new HRN construct, which includes the OrgID (e.g. olp-here…), so a comparison between an old HRN and a new HRN will be unsuccessful.

  • Reading from and writing to Catalogs using old HRNs is not broken and will continue to work until October 30, 2020.
  • Referencing old Schema HRNs is not broken and will work in perpetuity.

Workaround: Update any workflows comparing HRNs to perform the comparison against the new HRN construct, including OrgID.
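As one illustration of migration-tolerant comparison logic, the hypothetical helper below (the function name and field positions are ours, not a platform API) treats an HRN as colon-separated fields and ignores an empty OrgID field:

```python
def same_catalog(a: str, b: str) -> bool:
    """Hypothetical helper: compare two catalog HRNs, tolerating a missing
    OrgID (field index 4) in the old construct. Illustration only."""
    fa, fb = a.split(":"), b.split(":")
    if len(fa) != len(fb):
        return False
    # Compare field by field; the account/OrgID field may be empty on one side.
    return all(x == y or (i == 4 and "" in (x, y))
               for i, (x, y) in enumerate(zip(fa, fb)))

# Old construct vs. new construct with OrgID:
assert same_catalog("hrn:here:data:::my-catalog", "hrn:here:data::OrgID:my-catalog")
```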

Issue: Searching for a schema in the Portal using the old HRN construct returns only the latest version of the schema. The Portal will not show older versions tied to the old HRN.

Workaround: Search for schemas using the new HRN construct, or look up older versions of schemas by their old HRN construct using the OLP CLI.

Issue: Visualization of Index layer data is not yet supported.

 

Pipelines

Deprecation Reminder: The deprecation period for the Batch-2.0.0 environment is now over, and it will be removed soon. We recommend that you migrate your Batch Pipelines to the Batch-2.1.0 run-time environment to utilize the latest functionality and improvements.

Issue: A pipeline failure or exception can sometimes take several minutes to be reported.

Issue: Pipelines can still be activated after a catalog is deleted.
Workaround: The pipeline will fail when it starts running and will show an error message about the missing catalog. Re-check the missing catalog or use a different catalog.

Issue: If several pipelines consume data from the same Stream layer and belong to the same group (pipeline permissions are managed via a group), each of those pipelines will only receive a subset of the messages from the stream. This is because, by default, the pipelines share the same Application ID.
Workaround: Use the Data Client Library to configure your pipelines to consume from a single stream. If your pipelines/applications use the Direct Kafka connector, you can specify a Kafka consumer group ID per pipeline/application; if the consumer group IDs are unique, the pipelines/applications will consume all the messages from the stream, as the sketch below illustrates.
If your pipelines use the HTTP connector, we recommend creating a new group for each pipeline/application, each with its own Application ID.
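In plain Kafka terms, a unique consumer group ID per application means each application receives every message. The sketch below uses the kafka-python client as a stand-in (topic and broker names are hypothetical), not the Data Client Library API:

```python
from kafka import KafkaConsumer  # pip install kafka-python

# Each pipeline/application passes its own group_id, so Kafka treats them as
# independent consumer groups and delivers the full stream to each of them.
consumer = KafkaConsumer(
    "my-stream-topic",                        # hypothetical topic
    bootstrap_servers="broker.example:9092",  # hypothetical broker
    group_id="pipeline-a",                    # unique per pipeline/application
)
for message in consumer:
    print(message.value)  # replace with your processing logic
```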

 

Marketplace (Not available in China)

Issue: There is no throttling for the beta version of the External Service Gateway. When the system is overloaded, service will slow down across the board for all consumers who are reading from the External Service Gateway.

Workaround: Contact HERE technical support for help.

Issue: When the Technical Accounting component is busy, the server can lose usage metrics.
Workaround: If you suspect you are losing usage metrics, contact HERE technical support for assistance rerunning queries and validating data.

Issue: Projects and all resources in a Project are designed for use only in Workspace and are unavailable for use in Marketplace. For example, a catalog created in a Platform Project can only be used in that Project. It cannot be marked as "Marketplace ready" and cannot be listed in the Marketplace.
Workaround: Do not create catalogs in a Project when they are intended for use in the Marketplace.

 

Summary of active deprecation notices across all components

No. 1: OrgID added to Catalog HRN (RoW)
Deprecation period announced: platform release 2.9 (RoW) / 2.10 (China), November 2019
Deprecation period ends: October 30, 2020

 

Deprecation Summary:

Catalog HRNs without OrgID will no longer be supported in any way after October 30, 2020.

  • Referencing catalogs and all other interactions with REST APIs using the old HRN format without OrgID, or by CatalogID, will stop working after October 30, 2020.
    • Ensure all HRN references in your code are updated to use Catalog HRNs with OrgID before October 30, 2020 so your workflows continue to work.
  • HRN duplication to ensure backward compatibility of catalog version dependency resolution will no longer be supported after October 30, 2020.
  • Examples of old and new Catalog HRN formats:
    • Old (without OrgID/realm): hrn:here:data:::my-catalog
    • New (with OrgID/realm): hrn:here:data::OrgID:my-catalog

No. 4: Batch-2.0.0 run-time environment for Pipelines
Deprecation period announced: platform release 2.12, February 2020
Deprecation period ends: August 19, 2020

 

Deprecation Summary:

The deprecation period is now over, and Batch-2.0.0 will be removed soon. Pipelines still using it will be canceled. We recommend that you migrate your Batch Pipelines to the Batch-2.1.0 run-time environment to utilize the latest functionality and improvements. For more details about migrating an existing Batch pipeline to the new Batch-2.1.0 run-time environment, see Migrate Pipeline to new Run-time Environment.

No. 5: Schema validation to be added
Deprecation period announced: platform release 2.13, March 2020
Deprecation period ends: November 30, 2020

 

Deprecation Summary:

For security reasons, the platform will start validating schema reference changes in layer configurations as of November 30, 2020. Schema validation will check if the user or application trying to make a layer configuration change indeed has at least read access to the existing schema associated with that layer (i.e. a user or application cannot reference or use a schema they do not have access to). If the user or application does not have access to a schema associated with any layer after this date, any attempt to update any configurations of that layer will fail until the schema association or permissions are corrected. Please ensure all layers refer only to real, existing schemas, or contain no schema reference at all before November 30, 2020. It is possible to use the Config API to remove or altogether change schemas associated with layers to resolve these invalid schema/layer associations. Also, any CI/CD jobs referencing non-existing or non-accessible schemas will need to be updated by this date or they will fail.
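As a rough sketch of such a cleanup (the endpoint path and payload shape below are placeholders, not the documented Config API contract; consult the Config API reference for the real request format):

```python
import requests

CONFIG_API = "https://config.example.com/config/v1"  # placeholder base URL
catalog_hrn = "hrn:here:data::OrgID:my-catalog"

# Placeholder payload: update the layer configuration so it carries no
# schema reference at all. The real field names come from the Config API docs.
layer_update = {"id": "my-layer", "schema": None}

resp = requests.patch(
    f"{CONFIG_API}/catalogs/{catalog_hrn}",
    json={"layers": [layer_update]},
    headers={"Authorization": "Bearer <access token>"},
)
resp.raise_for_status()
```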

No. 6: Customizable Volatile layer storage capacity and redundancy configurations
Deprecation period announced: platform release 2.14, April 2020
Deprecation period ends: October 30, 2020

 

Deprecation Summary:

The Volatile layer configuration option to set storage capacity as a "Package Type" will be deprecated by October 30, 2020, six months after this feature release. All customers should retire their existing volatile layers and create new volatile layers with the new configurations by that date.

No. 7: Stream-2.0.0 run-time environment for Pipelines
Deprecation period announced: platform release 2.17, July 2020
Deprecation period ends: February 1, 2021

 

Deprecation Summary:

The Stream-2.0.0 run-time environment (with Apache Flink 1.7.1) is now deprecated. Existing Stream pipelines that use the Stream-2.0.0 run-time environment will continue to operate normally until February 1, 2021. During this period, the Stream-2.0.0 run-time environment will receive security patches only; to continue developing pipelines with the Stream-2.0.0 environment, use Platform SDK 2.16 or older. After February 1, 2021, the Stream-2.0.0 run-time environment will be removed and pipelines still using it will be canceled. We recommend that you migrate your Stream Pipelines to the new Stream-3.0.0 run-time environment to utilize the latest functionality and improvements. For more details about migrating an existing Stream pipeline to the new Stream-3.0.0 run-time environment, see Migrate Pipeline to new Run-time Environment. For more details about our general support for Apache Flink, see Stream Pipelines - Apache Flink Support FAQ.

 

No. 8: ‘pipeline_jobs_canceled’ metric in Pipeline Status Dashboard
Deprecation period announced: platform release 2.17, July 2020
Deprecation period ends: February 1, 2021

 

Deprecation Summary:

The ‘pipeline_jobs_canceled’ metric used within the Pipeline Status Dashboard is now deprecated because it was tied to the Pause functionality and caused confusion. The metric and its explanation will remain available until February 1, 2021. After that date, the metric will be removed.

No. 9: Stream throughput configuration changes from MB/s to KB/s
Deprecation period announced: platform release 2.19, September 2020
Deprecation period ends: March 31, 2021

 

Deprecation Summary:

Support for Stream layers with configurations in MB/s will be deprecated by March 31, 2021, six months from this announcement.

After March 31, 2021, only KB/s throughput configurations will be supported. This also means that the Data Client Library and CLI versions included in SDK 2.18 and earlier can no longer be used to create stream layers after this date, because those versions do not support configuring stream layers in KB/s.

 

Torsten Linz
