OLP Release Announcements

Open Location Platform 2.3 Release

By HERE Technologies team | 26 February 2019

Highlights

Improvements to running Batch Pipelines on-demand

As a Pipeline developer, you can now quickly iterate on your Pipeline logic by running Batch Pipelines on-demand, without waiting for the upstream catalogs to change or making dummy updates to the catalogs. In addition, you can save time and effort by not having to provide the parameters that were previously required, including input and output Catalog versions and processing type; these parameters are now optional.

If these parameters are not provided, the Batch Pipeline will run using the latest input and output Catalog versions, and reprocess all input data.

NOTE: This will not allow you to test Batch Pipelines that perform incremental processing, as these workloads require meaningful changes to the input catalogs. The best way to test Pipelines that perform incremental processing is still to trigger changes to the input Data Catalogs.

NOTE: To run a Batch Pipeline on-demand that uses an Index Layer in the input or output Catalogs, a dummy version of that Catalog and processing type is needed. See the CLI documentation for more details.

Save and edit draft Marketplace Listings

As a Marketplace Provider, when preparing a Listing, you can now save an incomplete Listing as a draft, and then resume editing later. This also lets others in your Org make edits to the draft before finally publishing to the Marketplace.

With this, teams can work collaboratively on Listings, ensuring all of your subject matter experts can weigh in, provide the right information, and review works in progress.

NOTE: Users must have Marketplace provider management access to create and edit Listings. Contact your account admin to verify if you have access.

Grant permission to invite new users to the Org and Groups

A new Org Inviter role has been added, giving selected users the permission to invite other users to the Org. Only an Org Admin can grant the permission to invite other users.

This capability reduces the burden on organization-level administrators to invite all users, and reduces the number of users in the organization that need to be Org Admins.

If the user receiving the new permission (the "Org Inviter") is also a Group Admin, they can add invited users directly to their Group.

Develop visualization plugins using the online JavaScript editor

The Data Inspector on the Inspect tab now features a rich online JavaScript editor, simplifying the development and testing of custom renderer plugins from the OLP Portal, and making it easier to enable other users in your Org to visualize data encoded by your custom schema.

With this editor, you can write, fine-tune, and then run your plugin code. The plugin converts your data into GeoJSON, which is then rendered on the base map. With this, you and others can visualize your custom data and more easily inspect the results of your Pipelines.

The plugin code can be downloaded for local development and visualization, or for publication to OLP through the Schema archetype so that others can also visualize your data.
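The core of such a plugin is a function that maps your decoded layer data to GeoJSON. The sketch below illustrates that transformation only; the input record shape (`PointRecord`) is a hypothetical example, and the actual plugin interface expected by the Data Inspector is defined in the OLP documentation.

```typescript
// Hypothetical decoded record shape; the real shape depends on your schema.
interface PointRecord {
  name: string;
  latitude: number;
  longitude: number;
}

// Convert decoded records into a GeoJSON FeatureCollection that can be
// rendered on a base map.
function toGeoJson(records: PointRecord[]): object {
  return {
    type: "FeatureCollection",
    features: records.map((r) => ({
      type: "Feature",
      geometry: {
        type: "Point",
        // Note: GeoJSON uses [longitude, latitude] coordinate order.
        coordinates: [r.longitude, r.latitude],
      },
      properties: { name: r.name },
    })),
  };
}
```

Whatever your schema looks like, the output must be valid GeoJSON; anything the renderer cannot map to a Feature will not appear on the map.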

Pipelines

Changed

  • A scheduled batch pipeline version will not convert to an on-demand run, even if the input and output catalog versions or the processing type are specified.

  • When running batch pipelines on-demand, parameters that were previously required are now optional, including input and output catalog versions, and processing type. If these parameters are not provided, the batch pipeline will run using the latest input and output catalog versions, and reprocess all input data. With this change, you no longer have to wait for the input catalogs to change, or force updates to the input catalogs, in order to trigger an immediate execution of the batch pipeline.

Fixed

  • Fixed an issue where the user couldn't specify array values for pipeline run-time configurations due to a parameter limitation.

  • Fixed an issue where the Pipeline service marked a job as Failed at the same time as marking the pipeline version as Ready.

  • Fixed an issue where a canceled job was stuck in a Running state.

  • Fixed an issue where a Pipeline's Elapsed Time differed from the CPU consumption interval shown in Grafana.

Known Issues

Issue

A scheduled Pipeline Version cannot be forced to run on-demand.

Workaround

Create a copy of the pipeline version and run the new pipeline version on-demand.


Issue

A pipeline failure or exception can sometimes take several minutes to be reported.


Issue

To run a Batch Pipeline on-demand that uses an Index Layer in the input or output Catalogs, a dummy version of that Catalog and processing type is needed. See the CLI documentation for more details.


Issue

Pipelines belonging to the same group also share the same application ID. If multiple pipelines in the same group consume data from the same stream layer, then each pipeline only receives a subset of the messages from the input stream.

Workaround

Configure your Pipelines to use the Direct Kafka connector type in the Data Client Library, and then specify a unique Kafka consumer group ID for each pipeline.

If your pipelines use the HTTP connector type, you can create a new group for each pipeline; each pipeline will then receive its own application ID.
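The first workaround relies on Kafka consumer-group semantics: consumers in different groups each receive the full stream, while consumers sharing a group split it between them. A minimal sketch of deriving a per-pipeline configuration, where the key names are illustrative rather than the Data Client Library's actual property names:

```typescript
// Sketch: give each pipeline its own Kafka consumer group ID so every
// pipeline receives every message from the shared stream layer, instead of
// the pipelines splitting the stream between them.
// The configuration keys below are illustrative, not the Data Client
// Library's real property names.
function kafkaConsumerConfig(pipelineId: string): Record<string, string> {
  return {
    // Direct Kafka connector type, as recommended in the workaround.
    "connector.type": "kafka-direct",
    // Unique per pipeline: consumers in *different* groups each get the
    // full stream; consumers in the *same* group share it.
    "kafka.consumer.group.id": `pipeline-${pipelineId}`,
  };
}
```

The essential point is only that no two pipelines reading the same stream layer end up with the same consumer group ID.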


Issue

A Pipeline Version can be activated even after an input catalog that it uses is deleted.

Workaround

The pipeline will fail when it starts running and will show an error message about the missing catalog. Restore the missing catalog, or configure the pipeline to use a different catalog.


Issue

A Pipeline Version that was created and immediately run on-demand cannot later be scheduled.

Workaround

Create a copy of the Pipeline Version and schedule the new Pipeline Version.


Issue

Only a finite number of permissions is allowed for each app or user in the system across all services. The effective limit is lower when permissions include resources or actions with long names.

Workaround

Delete Pipelines/Users to recover space.

Data

Known Issues

Issue

Catalogs not associated with a realm are not visible in OLP.


Issue

Data encryption is limited to:

  • Versioned data at rest
  • Stream layer data
  • Index layer data
  • Notebooks

In-flight data encryption is not consistently implemented across OLP.

You should not send sensitive data or personal information to OLP at this time.


Issue

Visualization of Index Layer data is not yet supported.


Issue

When you use the Data API or Data Library to create a catalog or layer, the app credentials used do not automatically enable the user who created those credentials to discover, read, write, manage and share those catalogs and layers.

Workaround

After the catalog is created, use the app credentials to enable sharing with the user who created the app credentials. You can also share the catalog with other users, apps and groups.


Issue

When creating an Index Layer and processing stream data in a data archive pipeline, it is possible to select Parquet as a data format. However, a corresponding Spark Connector is not available with this release, making it non-trivial to consume data in this format. As a mitigation, we recommend you not use Parquet format with this release.


Issue

Some older catalogs temporarily cannot be shared because the catalog owners don't have permissions assigned to them yet. This issue only applies to older catalogs. Contact us to report each occurrence as a bug.

Notebooks

Known Issues

Issue

Notebooks do not support Flink.


Issue

Notebooks do not support Stream Layers.

Account & Permissions

Added

  • Any user can now be granted permission to invite other users to the Org, reducing the burden on the organization-level Admin to invite all users.

If the user receiving the new permission (the "Org Inviter") is also a Group Admin, they can add invited users directly to their Group.

Known Issues

Issue

A finite number of access tokens (~250) is available for each app or user. Depending on the number of resources included, this number may be smaller.

Workaround

Create a new app or user if you reach the limitation.


Issue

Only a finite number of permissions is allowed for each app or user in the system across all services. The effective limit is lower depending on the resources included and the types of permissions defined.

Workaround

Delete pipelines or pipeline templates to recover space.


Issue

All users and apps in a group are granted permissions to perform all actions on any pipeline associated with that group. There is no support for users or apps with limited permissions. For example, you cannot have a reduced role that can only view pipeline status, but not start and stop a pipeline.

Workaround

Limit the users in a pipeline's group to only those users who should have full control over the pipeline.


Issue

When updating permissions, it can take up to an hour for changes to take effect.

Marketplace

Added

  • Added the ability to save a draft Listing. When preparing a Marketplace Listing, users can now save the Listing as a draft and let others in their organization make edits before publishing to the Marketplace.

The feature enables teams to work collaboratively when preparing to publish a Marketplace Listing, which can improve Listing quality and compliance by leveraging subject matter experts in their own organization.

NOTE: The feature is available to users who have Marketplace provider management access. Contact your account admin to verify if you have access.

Fixed

  • Resolved an issue where, when a Marketplace Provider granted data access to a Marketplace Consumer, there could be a delay before the Consumer saw the data catalog in their licensed data area, because the confirmation email was only sent after the Consumer accepted the subscription.

Known Issues

Issue

If you are a Workspace user and have pipelines that use the Kafka Direct connector to connect to stream data, no messages-in and bytes-in metrics can be collected.

Workaround

You can instrument your pipelines with custom messages-in and bytes-in metrics. Contact technical support if you need assistance.


Issue

When the Splunk server is busy, it can lose usage metrics.

Workaround

If you suspect you are losing usage metrics, contact HERE technical support. We may be able to help rerun queries and validate data.

Web & Portal

Added

  • Run a Batch Pipeline Version on-demand without providing input and output Catalog versions or processing type.

  • Added new functionality that simplifies the development and testing of custom renderer plugins from the OLP Portal. The Data Inspector on the Inspect tab now features a rich online JavaScript editor. This allows you to write, fine-tune and run your plugin code to convert your OLP data into GeoJSON, visualize your data on the base map and review the results. The plugin code can be downloaded for local usage or for publication to OLP through the Schema archetype.

Known Issues

Issue

Data visualization in the Portal is only available for versioned and volatile layers.


Issue

Data can only be visualized if it is formatted according to the following schema formats:

  • GeoJSON
  • HERE Reality Index Topology
  • HERE Reality Index Building Footprints
  • HERE Reality Index Cartography
  • HERE Traffic Flow
  • SDII

Workaround

Create a visualization for your schemas using a GeoJSON renderer.


Issue

The Portal and notebooks are not compatible with Internet Explorer 11.


Issue

The pipelines list page is sometimes slow to load.


Issue

The custom runtime configuration for a pipeline version has a limit of 64 characters for the property name, and a limit of 255 characters for the value.

Workaround

For the property value, there is no workaround. For the property name, you can define a shorter name in the configuration and map that to the actual, longer name within the pipeline code.
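One way to apply this workaround is a small lookup table that expands the short configuration keys into the descriptive names the rest of the pipeline uses. The short and long names below are hypothetical examples, not real OLP property names:

```typescript
// Workaround sketch: keep runtime configuration keys short (within the
// 64-character limit) and map them to longer, descriptive names inside the
// pipeline code. Both the short and long names here are made-up examples.
const NAME_MAP: Record<string, string> = {
  "in.cat": "input.catalog.hrn.for.primary.road.network.source",
  "out.cat": "output.catalog.hrn.for.processed.road.network",
};

// Expand short keys from the runtime configuration into the full names;
// keys without a mapping pass through unchanged.
function expandConfig(runtimeConfig: Record<string, string>): Record<string, string> {
  const expanded: Record<string, string> = {};
  for (const [shortKey, value] of Object.entries(runtimeConfig)) {
    expanded[NAME_MAP[shortKey] ?? shortKey] = value;
  }
  return expanded;
}
```

Because values are passed through untouched, this helps only with the 64-character name limit; the 255-character value limit still applies.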


Issue

A Batch Pipeline Version using an Index Layer in the input or output Catalogs doesn't run on-demand from the Portal. It gets stuck in the Scheduled state.

Workaround

Use the CLI or API to run such a Batch Pipeline on-demand by specifying a dummy version of the Catalogs with the Index Layer.


Issue

The Portal can't be used to delete pipeline templates.

Workaround

Use the CLI or API to delete pipeline templates.


Issue

In the Portal, new jobs and operations are not automatically added to the list of jobs and operations for a pipeline version while the list is open for viewing.

Workaround

Refresh the Jobs and Operations pages to see the latest job or operation in the list.


Issue

Some PDF documentation has formatting errors.