Release notes

HERE Workspace & Marketplace 2.4 release

Highlights

TLS 1.0 and 1.1 Deprecation

We are planning to discontinue support for the severely outdated Transport Layer Security (TLS) versions 1.0 and 1.1 in OLP Services on 30 April 2019. From May 2019 onward, only TLSv1.2 and higher versions will be supported by OLP in accordance with industry-wide best practice.

Scope

This change affects connections from outside OLP to the OLP Portal and all OLP APIs, including:

  • Browser access to the OLP portal website
  • Access to OLP services using the OLP Data Client Library (part of the OLP SDK)
  • Access to OLP services using the OLP Data Visualization Library (part of the OLP SDK)
  • Access to OLP services using the OLP Command Line Interface
  • Access to OLP services (REST APIs) using any 3rd-party components

Impact

This change may have an impact in the following cases:

  • 3rd-party components: If you are accessing OLP APIs using components not supplied by HERE (such as open-source HTTP/REST libraries)
  • Unsupported browsers: If you are accessing the OLP Portal or client-based web applications embedding the OLP Data Visualization library using an unsupported browser

If you are using OLP with HERE components on supported platforms without explicitly disabling TLS v1.2, this change should not have any impact on you. For example:

  • OLP Portal and OLP Data Visualization library running within a browser: All supported browsers will continue to work and have been defaulting to TLSv1.2 connections with the supported cipher suites since the release of OLP.
  • OLP Data Client and Command Line Interface: All versions of the OLP Data Client and CLI deployed on a supported Java runtime environment (Java 8) will continue to work and have been defaulting to TLSv1.2 connections with the supported cipher suites since the initial release of OLP.

Actions you need to take

Ensure the following:

  • You are using HERE components on supported platforms without any explicit downgrades on security features (such as disabling newer TLS versions or supported cipher suites).
  • You are not using network components (such as middleboxes and proxies) that prevent the setup of TLSv1.2 connections.
  • Any 3rd-party components you are using support TLSv1.2 and the supported cipher suites.

While we continue to honor our commitment to API stability, we strongly recommend timely upgrades to the latest supported versions of all components used to connect to OLP. This ensures the best compatibility and security.

Further Information

The RFC for deprecating TLSv1.0 and TLSv1.1 explains the rationale behind the industry-wide move to disable TLSv1.0 and TLSv1.1.

The following is a list of the TLS Cipher Suites that OLP services will support:

  • TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
  • TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
  • TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
  • TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
  • TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
  • TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384
  • TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
  • TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
  • TLS_RSA_WITH_AES_256_CBC_SHA256
  • TLS_RSA_WITH_AES_256_GCM_SHA384
  • TLS_RSA_WITH_AES_128_CBC_SHA256
  • TLS_RSA_WITH_AES_128_GCM_SHA256
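
If you connect to OLP REST APIs with your own tooling, you can confirm that your client refuses TLS 1.0/1.1 before the cutover date. The following is a minimal sketch using only the Python standard library; it configures the client side only and does not reference any actual OLP endpoint:

```python
import ssl

def tls12_context() -> ssl.SSLContext:
    """Build a client-side SSL context that rejects TLS 1.0 and 1.1."""
    ctx = ssl.create_default_context()
    # Refuse anything older than TLS 1.2, matching the OLP policy.
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx

if __name__ == "__main__":
    ctx = tls12_context()
    print(ctx.minimum_version >= ssl.TLSVersion.TLSv1_2)  # True
```

To check a live endpoint, wrap a connected socket with this context via `ctx.wrap_socket(sock, server_hostname=host)` and inspect the negotiated protocol with `ssock.version()`, which returns a string such as "TLSv1.2".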

Coverage maps and size heatmaps for layers in the web portal

We added a new feature that allows you to better understand the coverage of your partitions on both high and low zoom levels. The new coverage overlay helps you learn the geographical availability of your data. The new heatmap overlay reflects the geographical distribution of your layer data through partition size-driven color gradients.

You can analyze your data coverage directly in the OLP Portal’s Data Inspector. Or, you can use the corresponding Visualization Library components to enrich your own applications with these data visualizations.

Stream Layer Read API

A new Stream Read API is now available. This removes the limitation of only being able to consume data from stream layers via the Data Client Library.

The same realm can have subscriptions to Marketplace Provider, Consumer, and Workspace

As an Org admin, you can sign up for subscriptions to Marketplace Provider, Consumer, and Workspace for the same realm. If your realm has both Marketplace Provider and Consumer subscriptions, you can grant specific user permissions by adding them to the Provider or Consumer management groups.

As a user, if you have permissions to both Marketplace Provider and Consumer management groups, then you can use the toggle switch in your Marketplace user interface to switch between the Provider and Consumer views.

This capability enables organizations to use all available OLP capabilities within one realm. Organizations can assign specific user permissions for specific needs.

NOTE: To learn how to assign users to Provider and Consumer management groups in your realm, read Marketplace Provider User Guide and Marketplace Consumer User Guide.

Change a Batch Pipeline Version's execution mode between on-demand or scheduled

Pipeline users can now change the mode of execution of a Batch Pipeline Version from On-demand to Scheduled and vice versa during activation. This improvement helps save time because the users no longer need to create a copy of the Batch Pipeline Version to change the mode of execution. The Batch Pipeline Version can now be run on-demand to test the logic, and the same Batch Pipeline Version can then be scheduled to run based on data changes. Similarly, if a scheduled Batch Pipeline Version experiences any issues, then it can be deactivated and run on-demand for troubleshooting.

Monitor custom metrics via Flink Accumulators in Stream Pipelines

Stream Pipelines can now be debugged using custom metrics created with Flink Accumulators. Stream Pipelines support the following types of accumulators: Long and Double. Once created, these accumulators become available as metrics in Grafana to query and to build dashboards. The metrics are prefixed with flink_accumulators_ to help you identify the available metrics.

NOTE: The Histogram accumulator and user-created custom Flink accumulators are not supported in Stream Pipelines.
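
When browsing metrics in Grafana, the stated prefix is what distinguishes accumulator-backed series from Flink's built-in metrics. The following is an illustrative Python sketch; the metric names are made-up examples, and only the flink_accumulators_ prefix comes from these release notes:

```python
PREFIX = "flink_accumulators_"

def accumulator_metrics(metric_names):
    """Return only the metric names produced by Flink accumulators."""
    return [name for name in metric_names if name.startswith(PREFIX)]

if __name__ == "__main__":
    names = [
        "flink_accumulators_records_dropped",  # hypothetical Long accumulator
        "flink_accumulators_avg_confidence",   # hypothetical Double accumulator
        "flink_jobmanager_uptime",             # built-in metric, filtered out
    ]
    print(accumulator_metrics(names))
```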

Pipelines

Added

  • Added the option for Pipeline users to change the mode of execution of a Batch Pipeline Version from On-demand to Scheduled and vice versa during activation. This improvement helps save time because the users no longer need to create a copy of the Batch Pipeline Version to change the mode of execution. The Batch Pipeline Version can now be run on-demand to test the logic, and the same Batch Pipeline Version can then be scheduled to run based on data changes. Similarly, if a scheduled Batch Pipeline Version experiences any issues, then it can be deactivated and run on-demand for troubleshooting.

  • Added the ability to debug Stream Pipelines using custom metrics created with Flink Accumulators. Stream Pipelines support the following types of accumulators: Long and Double. Once created, these accumulators become available as metrics in Grafana to query and to build dashboards. The metrics are prefixed with flink_accumulators_ to help you identify the available metrics.

NOTE: The Histogram accumulator and user-created custom Flink accumulators are not supported in Stream Pipelines.

Changed

  • Running a Batch Pipeline on-demand that uses an Index Layer in its input or output Catalogs no longer requires a dummy version of that Catalog and processing type.

Fixed

  • Fixed an issue where a scheduled Batch Pipeline Version could not be forced to run instantly. See the Added release note entry for Pipelines to learn more about the new improvements.

  • Fixed an issue where a Batch Pipeline Version, created and run instantly, could not be scheduled. See the Added release note entry for Pipelines to learn more about the new improvements.

Known Issues

Issue

Only a finite number of permissions is allowed for each app or user across all services. This limit is reduced further depending on the resources included and the length of the action names defined.

Workaround

Delete Pipelines/Users to recover space.

Issue

A pipeline failure or exception can sometimes take several minutes to be reported.

Issue

The Pipeline Status Dashboard in Grafana can be edited by users, but any changes made will be lost when updated dashboards are published in future releases. In addition, the dashboard will no longer be editable by users in a future release.

Workaround

Duplicate the dashboard or create a new dashboard.

Issue

If multiple pipelines consuming data from a single stream layer all belong to the same Group (pipeline permissions are managed via a Group), then each of those pipelines will only receive a subset of the messages from the stream. This is due to the fact that the pipelines share the same Application ID.

Workaround

Use the Data Client Library to configure your pipelines to consume from a single stream. If your pipelines/applications use the Direct Kafka connector type, you can specify a Kafka consumer group ID per pipeline/application. If the Kafka consumer group IDs are unique, the pipelines/applications will consume all messages from the stream.

If your pipelines use the http connector type, we recommend you create a new Group for each pipeline/application, each with its own Application ID.
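
The Direct Kafka workaround above boils down to giving each pipeline its own consumer group. The following is a minimal Python sketch of the idea using standard Kafka consumer property names ("group.id", "auto.offset.reset"); the pipeline names and group-ID scheme are placeholders:

```python
def consumer_properties(pipeline_name: str) -> dict:
    """Build Kafka consumer properties with a group ID that is unique per
    pipeline, so each pipeline receives every message from the stream."""
    return {
        "group.id": f"{pipeline_name}-consumer-group",  # unique per pipeline
        "auto.offset.reset": "earliest",
    }

if __name__ == "__main__":
    props_a = consumer_properties("pipeline-a")
    props_b = consumer_properties("pipeline-b")
    # Distinct group IDs mean both pipelines consume the full stream
    # rather than splitting it between them.
    print(props_a["group.id"] != props_b["group.id"])  # True
```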

Issue

A Pipeline Version can be activated even after an input catalog that it uses is deleted.

Workaround

The pipeline will fail when it starts running and will show an error message about the missing catalog. Restore the missing catalog or use a different one.

Data

Added

  • Added Stream Layer metrics updates to the "Data, Catalog, and Layer Metrics" Grafana dashboard.

  • New tables and graphs for "Write Byte Rate Per Layer (MB/s)" and "Read Byte Rate Per Layer (MB/s)" enable you to see byte rates in and out per stream layer over time, as well as an average and a current summary.

  • The "Throughput Per Stream Layer" metrics have been broken down to show "In" and "Out" numbers separately. Previously, this metric showed the combined value.

  • Added a new Stream Read API. This removes the limitation of only being able to consume data from stream layers via the Data Client Library.

Changed

  • Improved the error message shown when a user tries to delete a schema that is associated with a catalog marked "Marketplace Ready".

  • Improved the error message shown when a user tries to delete or update a catalog that is associated with a catalog marked "Marketplace Ready".

Removed

  • Data API: Per the six-month deprecation notice delivered with the OLP 2.0 release, we are removing the publish v1 API this month (April 2019).

Known Issues

Issue

Catalogs not associated with a realm are not visible in OLP.

Issue

Visualization of Index Layer data is not yet supported.

Issue

When you use the Data API or Data Library to create a Data Catalog or Layer, the app credentials used do not automatically enable the user who created those credentials to discover, read, write, manage, and share those catalogs and layers.

Workaround

After the catalog is created, use the app credentials to enable sharing with the user who created the app credentials. You can also share the catalog with other users, apps, and groups.

Issue

Some older catalogs temporarily cannot be shared because the catalog owners do not yet have permissions assigned to them. This issue only applies to older catalogs. Contact us to report each occurrence as a bug.

Notebooks

Known Issues

Issue

OLP Notebooks cannot be shared with OLP user groups.

Workaround

Notebooks can be shared with one or more individual users by entering each account separately.

Issue

Notebooks do not support analysis of Stream Layers and Index Layers.

Account & Permissions

Known Issues

Issue

A finite number of access tokens (approximately 250) is available for each app or user. Depending on the number of resources included, this number may be smaller.

Workaround

Create a new app or user if you reach the limitation.

Issue

Only a finite number of permissions is allowed for each app or user across all services. This limit is reduced further depending on the resources included and the types of permissions defined.

Workaround

Delete pipelines/pipeline templates to recover space.

Issue

All users and apps in a group are granted permissions to perform all actions on any pipeline associated with that group. There is no support for users or apps with limited permissions. For example, you cannot have a reduced role that can only view pipeline status but not start and stop a pipeline.

Workaround

Limit the users in a pipeline's group to only those users who should have full control over the pipeline.

Issue

When updating permissions, it can take up to an hour for changes to take effect.

Marketplace

Added

  • Added a Provider and Consumer toggle button in the Marketplace user interface.

If you have both Marketplace Provider and Consumer access, you can see a toggle button in the upper right corner of your Marketplace user interface. This allows you to switch between the Provider and Consumer views.

Known Issues

Issue

If you are a Workspace user and have pipelines that use the Kafka Direct connector to connect to stream data, no messages-in and bytes-in metrics can be collected.

Workaround

You can instrument your pipelines with custom messages-in and bytes-in metrics. Contact technical support if you need assistance.

Issue

When the Splunk server is busy, the server can lose usage metrics.

Workaround

If you suspect you are losing usage metrics, contact HERE technical support. We may be able to help rerun queries and validate data.


Web & Portal

Changed

  • Moved the option to choose the execution mode of a Batch Pipeline Version from the Pipeline Version Configuration process to the Pipeline Version Activation process, where the execution mode for a Batch Pipeline Version can now be changed between Scheduled and On-demand (Run Now). This saves you time because you no longer need to create a copy of the pipeline version, just to make a change in its mode of execution.

Fixed

  • Fixed an issue where a Batch Pipeline Version using an Index Layer in the input or output Catalogs would not run on-demand from the Portal and got stuck in the Scheduled state.

Known Issues

Issue

The custom run-time configuration for a Pipeline Version has a limit of 64 characters for the property name, and 255 characters for the value.

Workaround

For the property name, you can define a shorter name in the configuration and map it to the actual, longer name within the pipeline code. For the property value, you must stay within the limit.
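
These limits can be checked before activating a pipeline version. The following is a minimal Python sketch, assuming plain string properties; the example property name and value are placeholders, not actual OLP configuration keys:

```python
# Limits stated in the release notes for custom run-time configuration.
MAX_NAME_LEN = 64
MAX_VALUE_LEN = 255

def validate_runtime_property(name: str, value: str) -> None:
    """Raise ValueError if a custom run-time property exceeds the
    Pipeline Version limits (64-char names, 255-char values)."""
    if len(name) > MAX_NAME_LEN:
        raise ValueError(f"property name exceeds {MAX_NAME_LEN} characters: {name!r}")
    if len(value) > MAX_VALUE_LEN:
        raise ValueError(f"property value exceeds {MAX_VALUE_LEN} characters")

if __name__ == "__main__":
    validate_runtime_property("input.catalog", "hrn-example-value")  # passes
    try:
        validate_runtime_property("x" * 65, "ok")  # name too long
    except ValueError as err:
        print("rejected:", err)
```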

Issue

Pipeline Templates can't be deleted from the Portal UI.

Workaround

Use the CLI or API to delete Pipeline Templates.

Issue

In the Portal, new jobs and operations are not automatically added to the list of jobs and operations for a pipeline version while the list is open for viewing.

Workaround

Refresh the Jobs and Operations pages to see the latest job or operation in the list.

Issue

The Pipelines list page might be slow to load all the elements.
