
HERE Workspace & Marketplace 2.10 release

Highlights

Simplify user administration, resource management, and usage tracking with Projects

Create projects and manage access to them for the development of new catalog, schema, and pipeline resources. Usage of these project resources is automatically tracked and summarized in the credit usage report. With this release, any new catalogs, schemas, and pipelines created in a project are accessible only within that project. However, catalogs explicitly shared with customers from the OLP-HERE Org may be referenced from any project.

What are Projects and why are they useful?

    • A project is a collection of resources that optimizes data access between them
    • Projects represent trust boundaries within an Org
    • Users and apps with project access automatically have access to the project's resources
    • Usage is automatically tracked per project by Project ID
    • Projects enable access control for different stages of production; use them to restrict access to production-ready catalogs and pipelines
    • Project admins control which users, apps, or groups have project access

 

How do I use Projects?

    • Start by creating a new project with the Projects Manager available from the Launcher in the HERE Platform.

More information about using Projects can be found in the Teams and Permissions User Guide.

 

General Availability of Location Services in Workspace

Location Services APIs (Search, Routing, Transit, and the Vector Tiles Service), previously available in closed beta via the 2.6 release, are now available to all HERE Workspace users.

  • To learn more about these Location Services, select the service in the Launcher.
  • Use of these APIs is charged at a $/€ rate per transaction, or as otherwise specified in your agreement with HERE.
 

| Service | Description | Unit | Price per Unit ($/€) |
|---------|-------------|------|----------------------|
| Location Services: Search | One box search, forward geocoder, reverse geocoder, autosuggest, places ID lookup | Transaction** | 0.0005 |
| Location Services: Routing | Vehicle, pedestrian, truck, transit | Transaction** | 0.0005 |
| Location Services: Transit | Next departure and station search, and intermodal routing | Transaction** | 0.0005 |
| Rendering: Vector Tiles | | Transaction** | 0.0005 |

** A Transaction means one API request for all Open Location Services, except as follows:

| Service | Transaction definition |
|---------|------------------------|
| Search | Autosuggest: 10 API requests equal 1 Transaction |
| Rendering | Vector Tiles: 5 API requests equal 1 Transaction |
| Transit | Intermodal: one Transaction is counted for each returned intermodal route alternative, per the intermodal_alternatives parameter set by the customer in the API request. If the system determines that no Park and Ride routes are available, the request still counts as one Transaction. |
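For example, under these definitions, 1,000 Autosuggest API requests count as 100 Transactions and are billed at 100 × 0.0005 = 0.05 $/€.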

 

Run Batch Pipelines on a recurring time-schedule

Batch Pipeline Versions can now be scheduled to run based on time. This eliminates the need for an external scheduling system and the costs associated with one. The time schedule can be set via a CRON expression using the OLP CLI, the Web Portal, or the Pipeline API; for example, "0 0 * * *" represents "run once a day at midnight". See Activating a Pipeline Version for more details.
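Assuming the scheduler accepts standard five-field CRON expressions (minute, hour, day of month, month, day of week), which the example above matches, some common schedules look like this:

    0 0 * * *      run once a day at midnight (the example above)
    0 */6 * * *    run every six hours, on the hour
    30 2 * * 1     run every Monday at 02:30
    0 0 1 * *      run at midnight on the first day of each month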

 

Inspect Historical (Completed or Failed) Batch Pipelines via Spark UI

Use the Spark UI to inspect historical (completed or failed) batch pipelines that use the Batch-2.1.x run-time environment. The Spark UI provides the execution and performance details of a Spark job so you can quickly fine-tune or troubleshoot the pipeline configuration or logic and decrease time to production. To access the Spark UI, go to the Jobs tab of a pipeline version and open it via the link from a job, or use the OLP CLI to copy the link from the job details.

NOTE: The batch-2.1.0 environment includes the following two new libraries:

  1. com.amazonaws:aws-java-sdk:1.7.4
  2. org.apache.hadoop:hadoop-aws:2.7.3

These libraries are not yet mentioned in the list of libraries for the 2.10 SDK but are available; this omission will be fixed in a future release.
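If your build declares these dependencies itself, a minimal sbt sketch looks like the following; it assumes the batch-2.1.0 runtime supplies both libraries at run time, which is why they are marked Provided rather than bundled into the pipeline fat JAR:

    // build.sbt: a sketch only; the batch-2.1.0 run-time environment already
    // ships these libraries, so mark them Provided instead of bundling them.
    libraryDependencies ++= Seq(
      "com.amazonaws"     % "aws-java-sdk" % "1.7.4" % Provided,
      "org.apache.hadoop" % "hadoop-aws"   % "2.7.3" % Provided
    )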

 

More efficiently query Index storage

Spatial query improvements make Index layer queries more efficient. You can now submit location-based data queries with a bounding box of any size, efficiently querying large data sets without receiving errors. Based on customer feedback, we have also improved the documentation to specify the current limit on the number of comma-separated values that can be passed to a query via the "=in=" operator. Both changes are usability enhancements that we hope incrementally improve your Index storage experience.
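For illustration, an "=in=" query might look like the sketch below; the attribute names are hypothetical (valid attributes depend on your Index layer's definition), and the number of comma-separated values is subject to the documented limit:

    tileId=in=(377893751,377893752,377893753);ingestionTime=ge=1575158400000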

 

Big Data connectors

The Flink connector integrates proprietary catalogs and layers with Flink's industry-standard TableSource interface, allowing you to leverage the search, filter, map, and sort capabilities offered by the standard Apache Flink framework. In addition to previously released functions, the Flink connector now supports:

  • Read and Write operations on Index layers.
  • Read operations on Versioned layers.

With these updates, you can spend less time writing low-level pipeline/data integration code and more time on your use case-specific business logic.
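For orientation, here is a minimal Scala sketch of the pattern. The TableSource class name below is hypothetical (consult the Data Client Library documentation for the connector's actual factory); the registration and Table API calls are standard Flink:

    import org.apache.flink.api.scala.ExecutionEnvironment
    import org.apache.flink.table.api.scala._

    val env = ExecutionEnvironment.getExecutionEnvironment
    val tableEnv = BatchTableEnvironment.create(env)

    // "HereIndexTableSource" is an illustrative stand-in for the TableSource
    // the connector provides for an Index or Versioned layer.
    tableEnv.registerTableSource(
      "sensorData",
      new HereIndexTableSource("hrn:here:data::olp-here:my-catalog", "index-layer"))

    // Once the layer is registered, standard Flink Table API operators apply.
    val recent = tableEnv
      .scan("sensorData")
      .filter('ingestionTime > 1575158400000L)
      .select('tileId, 'ingestionTime)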

 

Location Referencing

Location Library support for Location Referencing has started! If you ingest traffic data (e.g. RTTI) encoded in the TISA TMC/TPEG2 format, you can have the events and their locations converted to usable Stable Topology Segment IDs, making them suitable for processing in your pipelines. Naturally, the TPEG2 binary-encoded container that wraps the incoming data stream is also decoded so that the conversion can take place.

 

Simplified attribute access

Beginning with this release, more HERE Map Content attributes will be added to the Optimized Map for Location Library, enabling fast, simplified, direct access to these attributes via the Location Library. The attributes will be added iteratively over time. In 2.10, we added Functional Class, which describes the level of importance of a road in terms of relative traffic volume, e.g. motorway, collector, residential street.

 

Support for Scalable Vector Graphics (SVG) icons in Data Inspector

The Visualization Library offers a new way to customize data visualization: add relevant SVG (Scalable Vector Graphics) icons, such as weather, incidents, or points of interest, to your data visualization. You can define custom icons for "Point"/"MultiPoint" features by configuring a set of specific GeoJSON style properties. The "marker-image" property accepts a valid data URL, making it possible to display base64-encoded PNG images, SVGs, and external images. Images can also be referenced by name, reducing the overall GeoJSON size. Icon size and position are configurable through custom renderer plugins.
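For example, a Point feature styled with an inline SVG via a data URL might look like the following sketch (the coordinates are illustrative, and the base64 payload is truncated for brevity):

    {
      "type": "Feature",
      "geometry": { "type": "Point", "coordinates": [13.405, 52.52] },
      "properties": {
        "marker-image": "data:image/svg+xml;base64,PHN2ZyB4bWxucz0i..."
      }
    }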

 

Status history now available on status.here.com/history

A new tab, Status history, has been added to status.here.com; details about past, closed events can be found on this page. Descriptions for each status have also been added: they appear when you mouse over the icons at the top of the page.

 

Changelog

We have introduced a changelog for HERE Platform APIs. The language is technical and gives developers a clear view of specific API changes, additions, fixes, deprecations, and known issues. In contrast to the Release Announcements found here, the changelog provides granular insight into individual API changes rather than the discovery of new end-to-end features. The changelog is also a near-real-time view of the changes to each API, whereas the Release Announcements are distributed on a monthly cadence. For now, it covers the OLP CLI and the SDK for Java and Scala, essentially replacing our previous SDK Release Notes. Next year, we will continue adding more APIs to it.

 

Changes, Additions and Known Issues

SDK for Java and Scala

To read about updates to the SDK for Java and Scala, please visit the SDK Release Notes.

 

Web & Portal

Issue: The custom run-time configuration for a Pipeline Version has a limit of 64 characters for a property name and 255 characters for a property value.
Workaround: For the property name, define a shorter name in the configuration and map it to the actual, longer name within the pipeline code, as sketched below. For the property value, you must stay within the limit.
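A small Scala sketch of the name-mapping workaround (all names are illustrative):

    // The run-time configuration carries the short key (within 64 characters);
    // the pipeline code maps it to the longer internal name.
    val shortToLong = Map("idx.ttl" -> "index.layer.time-to-live.milliseconds")
    val internalName = shortToLong("idx.ttl")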

Issue: Pipeline Templates can't be deleted from the Portal UI.
Workaround: Use the CLI or API to delete Pipeline Templates.

Issue: In the Portal, new jobs and operations are not automatically added to the list of jobs and operations for a pipeline version while the list is open for viewing.
Workaround: Refresh the Jobs and Operations pages to see the latest job or operation in the list.

 

Account & Permissions

Issue: A finite number of access tokens (approximately 250) is available for each app or user. Depending on the number of resources included, this number may be smaller.
Workaround: Create a new app or user if you reach the limit.

Issue: Only a finite number of permissions is allowed for each app or user in the system, across all services. The effective limit is reduced depending on the resources included and the types of permissions granted.

Issue: All users and apps in a group are granted permissions to perform all actions on any pipeline associated with that group. There is no support for users or apps with limited permissions; for example, you cannot define a reduced role that can only view pipeline status but not start or stop a pipeline.

Workaround: Limit the users in a pipeline's group to only those users who should have full control over the pipeline.

Issue: When updating permissions, it can take up to an hour for changes to take effect.

 

Data

Changed: Now rolled out in China: OrgID is added to Catalog HRNs to create greater isolation and protection between individual organizations. This change also enables Org Admins to manage all catalogs in their Orgs. With this release, Catalog HRNs updated to include Org information are listed in the OLP Portal and are returned by the Config API. Data Client Library versions higher than 0.1.394 handle this change automatically and transparently. Referencing catalogs by the old HRN format without OrgID, or by CatalogID, will continue to work for six months, until June 30, 2020. Note that after June 30, 2020, Data Client Library versions 0.1.394 and lower will stop working.

Note: 

  • Always use API Lookup in all workflows to retrieve catalog-specific baseURL endpoints. Using API Lookup will shield your workflows from this change and from any future changes to baseURL endpoints.
  • With this change, API Lookup returns baseURLs with OrgID in HRNs for all APIs where it previously returned baseURLs with CatalogID or with HRNs without OrgID (see the example constructs after this list).
  • Directly using HRNs or legacy CatalogIDs, or constructing baseURLs from them in an undocumented way, will lead to broken workflows and will no longer be supported after June 30, 2020.
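For illustration, the two catalog HRN constructs look like this ("olp-here" and "my-catalog" are placeholders):

    hrn:here:data:::my-catalog           (old construct without OrgID; works until June 30, 2020)
    hrn:here:data::olp-here:my-catalog   (new construct with OrgID)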

Changed: OrgID is added to existing Schema HRNs, and will be added to newly created Schema HRNs, based on the Organization ID of the user creating the schema via the Maven archetype tool in the OLP SDK. Including OrgID in the Schema HRN provides greater isolation and protection of data artifacts between organizations. This change also enables Org Admins to manage all schemas in their Orgs. References to pre-existing schemas created before this feature release work with either new or old HRNs, so those references are not impacted. New schemas created as of this feature release will support both HRN constructs for six months, until June 30, 2020. After this time, only the new HRN construct will be supported. Schemas listed in the Portal and elsewhere use the new HRN construct with OrgID.

Separately, and as part of this feature release, Schema Group IDs are reserved to the OrgID that first published a schema to the platform using that Group ID, so other organizations cannot publish artifacts using the same Group ID.

Fixed: The Data Archiving Library supports the latest CRC and Digest features released for Index layers. With the Data Archiving Library, CRC and digest values are calculated automatically if these options are selected in the Index layer configuration.

Issue: Versions of the Data Client Library prior to 2.9 did not compress or decompress data according to the configuration set in stream layers. We changed this behavior in 2.9 to strictly adhere to the compression setting in the stream layer configuration, but in doing so we broke backward compatibility: data ingested and consumed via different Data Client Library versions will likely fail. The Data Client Library throws an exception which, depending upon how your application handles it, could lead to an application crash or downstream processing failure. This adverse behavior is caused by inconsistent compression and decompression of the data across Data Client Library versions. 2.10 introduces more tolerant behavior that correctly detects whether stream data is compressed and handles it accordingly.

Workaround: If you are using compressed stream layers and streaming messages smaller than 2 MB, use the 2.8 SDK until you have confirmed that all of your customers are using at least the 2.10 SDK, where this Data Client Library issue is resolved; then upgrade to 2.10 for the writing aspects of your workflow.

Deprecated: As of June 30, 2020, all data-streaming workflows must be updated to use at least SDK version 2.9. The Data Client Library behavior for compressing and decompressing streaming data will strictly adhere to stream layer configuration settings.

Issue: The changes released with 2.9 (RoW) and 2.10 (China) to add OrgID to Catalog HRNs, and with 2.10 (Global) to add OrgID to Schema HRNs, could impact any use case (CI/CD or other) where HRNs used by various workflow dependencies are compared. For example, comparing the HRNs a pipeline uses against those a Group, User, or App has permissions to will produce errors if the comparison expects the old HRN construct. With this change, Data APIs return only the new HRN construct, which includes the OrgID (e.g. olp-here…), so a comparison between an old HRN and a new HRN will fail.

  • Also: the resolveDependency and resolveCompatibleDependencies methods of the Location Library may stop working in some cases until this known issue is resolved.
  • Reading from and writing to catalogs using old HRNs is not broken and will continue to work for six months.
  • Referencing old Schema HRNs is not broken and will work in perpetuity.

Workaround: Update any workflows that compare HRNs to compare against the new HRN construct, including OrgID.

Issue: Searching for a schema in the Portal using the old HRN construct returns only the latest version of the schema. The Portal currently does not show older versions tied to the old HRN.

Workaround: Search for schemas using the new HRN construct, or look up older versions of schemas by the old HRN construct using the OLP CLI.

Issue: Visualization of Index Layer data is not yet supported.

 

Pipelines

Changed: The Pipeline Developer Guide and Pipeline User Guide have been merged into a single guide, the Pipelines Developer's Guide, to make the content more task-oriented, eliminate duplication, and improve its organization.

Issue: A pipeline failure or exception can sometimes take several minutes to be reported.

Issue: Pipelines can still be activated after a catalog is deleted.
Workaround: The pipeline will fail when it starts running and will show an error message about the missing catalog. Check for the missing catalog or use a different catalog.

Issue: If several pipelines consume data from the same stream layer and belong to the same Group (pipeline permissions are managed via a Group), each of those pipelines will receive only a subset of the messages from the stream. This is because, by default, the pipelines share the same Application ID.
Workaround: Use the Data Client Library to configure your pipelines so each consumes the whole stream. If your pipelines/applications use the direct Kafka connector, specify a unique Kafka consumer group ID per pipeline/application; with unique consumer group IDs, each pipeline/application can consume all the messages from the stream (see the sketch below).
If your pipelines use the HTTP connector, we recommend creating a new Group for each pipeline/application, each with its own Application ID.
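A sketch of the direct Kafka connector case using the standard Kafka consumer property; the exact configuration path for passing this setting through the Data Client Library is an assumption here, so consult its documentation:

    # Unique consumer group per pipeline/application ("pipeline-a" is a
    # placeholder) so that each one receives every message from the stream.
    group.id = pipeline-a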

Issue: The Pipeline Status Dashboard in Grafana can currently be edited by users, but any changes will be lost when updated versions of the dashboard are published, and the dashboard will become read-only in a future release.
Workaround: Duplicate the dashboard, or create a new dashboard and make your changes there.

Issue: In a rare scenario, selection of the primary Job Manager fails for Stream pipeline versions running in high-availability mode.
Workaround: Restart the stream pipeline.

Issue: When a paused Batch pipeline version is resumed from the Portal, an option to change the execution mode is displayed, but the change does not actually take effect. This functionality is not yet supported, and the option will be removed soon.

 

Marketplace

Issue: Users do not receive stream data usage metrics when reading or writing data via Kafka Direct.
Workaround: To receive usage metrics when writing data into a stream layer, use the ingest API. To receive usage metrics when reading data from a stream layer, use the Data Client Library configured with the HTTP connector type.

Issue: When the Technical Accounting component is busy, the server can lose usage metrics.
Workaround: If you suspect you are losing usage metrics, contact HERE technical support for assistance rerunning queries and validating data.

 

SDK for Python

Issue: Currently, only macOS and Linux distributions are supported.
Workaround: If you are using Windows, we recommend using a virtual machine.

 

Visualization Library

Changed: The Data Inspector's Status Bar has new positioning and behavior: it has been moved into the Toolbar, accentuating its relevance not only for the Map View but also for other Data Inspector events. Errors and warnings are now more discoverable and better communicated, making the Visualization Library a bit more usable and productive.

Changed: Custom renderer plugin development has been improved through a series of usability and performance enhancements to the Data Inspector's Plugin Editor. In particular, we updated the version of the Monaco Editor module and fixed several non-critical bugs that nonetheless caused user-experience annoyances.

Fixed: An issue with decoding volatile layer data in the Portal's Data Inspector.

 

Summary of active deprecation notices across all components

1. Stream-1.5.x (with Apache Flink 1.2.1) for Pipelines
Deprecation period announced: OLP 2.6 (August 2019)
Deprecation period ends: February 1, 2020

Deprecation Summary:

Existing Stream pipelines that use the Stream-1.5.x run-time environment will continue to operate normally until February 1, 2020. During this period, the Stream-1.5.x run-time environment will receive security patches only; to continue developing pipelines with the Stream-1.5.x environment during this period, please use OLP SDK 2.5 or older. After February 1, 2020, we will remove the Stream-1.5.x run-time environment, and pipelines still using it will be canceled. We recommend that you migrate your Stream pipelines to the new Stream-2.0.0 run-time environment to utilize the latest functionality and improvements. For more details about migrating an existing Stream pipeline to the new Stream-2.0.0 run-time environment, see Migrate Pipeline to new Run-time Environment.

2. Batch-1.5.x (with Apache Spark 2.1.1) for Pipelines
Deprecation period announced: OLP 2.6 (August 2019)
Deprecation period ends: February 1, 2020

Deprecation Summary:

Existing Batch pipelines that use the Batch-1.5.x run-time environment will continue to operate normally until February 1, 2020. During this period, the Batch-1.5.x run-time environment will receive security patches only; to continue developing pipelines with the Batch-1.5.x environment during this period, please use OLP SDK 2.5 or older. After February 1, 2020, we will remove the Batch-1.5.x run-time environment, and pipelines still using it will be canceled. We recommend that you migrate your Batch pipelines to the new Batch-2.0.0 run-time environment to utilize the latest functionality and improvements. For more details about migrating an existing Batch pipeline to the new Batch-2.0.0 run-time environment, see Migrate Pipeline to new Run-time Environment.

3. OrgID added to Catalog HRN (RoW)
Deprecation period announced: OLP 2.9 (November 2019)
Deprecation period ends: May 29, 2020

Deprecation Summary:

Referencing catalogs by the old HRN format without OrgID, or by CatalogID, will continue to work for six months, until May 29, 2020.

4. OrgID added to Catalog HRN (China)
Deprecation period announced: OLP 2.10 (December 2019)
Deprecation period ends: June 30, 2020

Deprecation Summary:

Referencing catalogs by the old HRN format without OrgID, or by CatalogID, will continue to work for six months, until June 30, 2020. Note that after June 30, 2020, Data Client Library versions 0.1.394 and lower will stop working.

5. OrgID added to Schema HRN (Global)
Deprecation period announced: OLP 2.10 (December 2019)
Deprecation period ends: June 30, 2020

Deprecation Summary:

References to pre-existing schemas created before this feature release work with either new or old HRNs, so those references are not impacted. New schemas created as of this feature release will support both HRN constructs for six months, until June 30, 2020. After this time, only the new HRN construct will be supported.

6. Data Client Library data compression behavior
Deprecation period announced: OLP 2.10 (December 2019)
Deprecation period ends: June 30, 2020

Deprecation Summary:

As of June 30, 2020, all data-streaming workflows must be updated to use at least SDK version 2.9. The Data Client Library behavior for compressing and decompressing streaming data will strictly adhere to stream layer configuration settings.

 

Jeff Henning
