OLP Release Announcements

Open Location Platform 2.1 release

By HERE Technologies team | 21 December 2018

Pipelines

Changed

  • Reduced the total processing downtime when upgrading a stream pipeline to a new pipeline version to under one minute.
  • Increased the granularity of the Pipeline I/O, CPU, and Memory charts in the OLP Current Usage Dashboard.

Fixed

  • Fixed an issue where the pipeline job's state was null upon activating a pipeline version.

Known Issues

Issue

A pipeline failure or exception can sometimes take several minutes to be reported.


Issue

Pipelines are not private to a user by default; they must be shared with exactly one group when created.

Workaround

Share the pipeline with a group containing only one user.


Issue

Pipelines can only be shared with users who belong to the same group.

Workaround

Share the pipeline with a group containing the specific set of users.


Issue

If multiple pipelines consuming data from a single stream layer all belong to the same group (pipeline permissions are managed via a group), then each of those pipelines will receive only a subset of the messages from the stream. This is because the pipelines share the same Application ID.

Workaround

Use the Data Library to configure your pipelines to consume from a single stream. If your pipelines/applications use the Direct Kafka connector type, you can specify a Kafka consumer group ID per pipeline/application. If the Kafka consumer group IDs are unique, the pipelines/applications will consume all messages from the stream.

If your pipelines use the HTTP connector type, we recommend that you create a new group for each pipeline/application, each with its own application ID.
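The Direct Kafka workaround hinges on each pipeline presenting a distinct Kafka consumer group ID. A minimal sketch of how per-pipeline group IDs might be derived (the pipeline names and helper function here are hypothetical illustrations, not OLP API calls):

```python
# Sketch: derive a unique Kafka consumer-group ID per pipeline so that each
# pipeline receives every message from the shared stream layer.
# Pipeline names below are hypothetical examples.

def consumer_config(pipeline_name: str, bootstrap_servers: str) -> dict:
    """Build a Kafka consumer config with a group ID unique to the pipeline."""
    return {
        "bootstrap.servers": bootstrap_servers,
        # A unique group.id means each pipeline forms its own consumer group
        # and therefore receives its own full copy of the stream.
        "group.id": f"olp-pipeline-{pipeline_name}",
        "auto.offset.reset": "earliest",
    }

configs = [consumer_config(p, "broker:9092")
           for p in ("pothole-archiver", "traffic-monitor")]

group_ids = [c["group.id"] for c in configs]
assert len(group_ids) == len(set(group_ids))  # all group IDs are unique
```

With unique group IDs, Kafka delivers the complete stream to each consumer group rather than partitioning messages across them.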

Data

Added

  • Added features that enable you to index and store metadata and data in a way that is optimized for batch processing. The new features are:

    • Index Layer - A new layer type called "index" is, as its name suggests, an index of the layer’s data. The index has up to three user-defined attributes plus the required time attribute. For example, if you want to run a batch process daily to find all pothole detection events recorded that day in the area surrounding a given city, you can use an index layer to index the pothole detection events by event time, event type, and location, and then archive the data. You can then query the data every 24 hours for such events in that area as part of your batch process.
    • Data Archiving Library - The Data Archiving Library is a new Java library in the OLP SDK that you can use to develop a pipeline that indexes data. The SDK includes a JAR file that serves as a template for an indexing pipeline.
    • Index API - This new REST API can be used to query the index layer for data that meets your query criteria. The API returns the data handle of each matching partition, which you can then use to retrieve the associated data.

    For more information, see the Data API Developer Guide and the [Data Archive Library Developer Guide](https://developer.here.com/olp/documentation/data-archiving-library/dev_guide/index.html).

  • Added support for specifying the digest algorithm used to calculate the checksums for each partition in versioned and volatile layers. This feature supports data integrity use cases where you need to ensure data has not changed during storage or transmission. You can create checksums in your own pipelines that publish to a volatile or versioned layer, then validate data integrity by comparing the stored checksum with one computed at a later time. This integrity check helps you meet standard data security compliance requirements driven by standards such as ASIL.

    This feature is supported for new layers only. At this time you cannot specify a digest for existing layers.

    For more information, see the Data API Developer Guide topics Versioned Layer Settings and Volatile Layer Settings.

  • Added an uptime SLA of 98.5% for reading from stream layers, writing to stream layers, and writing to versioned layers.

  • Added the operational status of Portal and data service read and write operations to status.openlocation.here.com.
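As a sketch of the checksum-based integrity check described above, assuming SHA-256 as the configured digest algorithm (the function names are illustrative, not part of the Data API):

```python
import hashlib

def partition_checksum(payload: bytes) -> str:
    """Compute the hex digest to publish alongside a partition's data."""
    return hashlib.sha256(payload).hexdigest()

def verify_partition(payload: bytes, stored_checksum: str) -> bool:
    """Re-compute the digest at read time and compare with the stored value."""
    return partition_checksum(payload) == stored_checksum

data = b'{"event": "pothole", "tile": "12345"}'
checksum = partition_checksum(data)                 # published with the partition
assert verify_partition(data, checksum)             # data unchanged
assert not verify_partition(data + b"x", checksum)  # tampering detected
```

The same comparison works wherever the data is later read back, which is what makes the stored digest useful for compliance checks.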

Changed

  • Improved performance for stream layers:
    • Faster acknowledgements when you send data to the Ingest API.
    • Faster responses to read requests via the Data Client using either the HTTP connector or the direct Kafka connector.
    • Faster data writes and reads when using pipelines in OLP.

Known Issues

Issue

When creating an Index Layer and processing stream data in a data archive pipeline, it is possible to select Parquet as the data format. However, a corresponding Spark Connector is not available with this release, making it non-trivial to consume data in this format. As a mitigation, we recommend that you not use the Parquet format with this release.


Issue

Catalogs not associated with a realm are not visible in OLP.


Issue

Some older catalogs temporarily cannot be shared because the catalog owners have not yet had permissions assigned to them. This issue only applies to older catalogs. Contact us to report each occurrence as a bug.


Issue

Data encryption is limited to:

  • Versioned data at rest
  • Stream layer data
  • Index layer data
  • Notebooks

In-flight data encryption is not consistently implemented across OLP.

You should not send sensitive data or personal information to OLP at this time.


Issue

When you use the Data API or Data Library to create a catalog or layer, the app credentials used do not automatically enable the user who created those credentials to discover, read, write, manage and share those catalogs and layers.

Workaround

After the catalog is created, use the app credentials to enable sharing with the user who created them. You can also share the catalog with other users, apps, and groups.


Notebooks

Known Issues

Issue

Notebooks only support Python 3.


Issue

Neither platform.here.com nor Notebooks are compatible with Internet Explorer 11.


Issue

Notebooks do not support Flink.


Issue

Notebooks do not contain support for Stream Layers.


Account & Permissions

Added

  • Added support for avatar images. Add your avatar by editing your HERE profile on the profile page.

Known Issues

Issue

When updating permissions, it can take up to an hour for changes to take effect.


Issue

A finite number of access tokens (approximately 250) is available for each app or user. Depending on the number of resources included, this number may be smaller.

Workaround

Create a new app or user if you reach the limitation.


Issue

[Pipelines] Only a finite number of permissions is allowed for each app or user across all services in the system. The limit is effectively lower depending on the resources included and the types of permissions granted. Delete pipelines or pipeline templates to recover space.


Issue

[Pipelines] All users and apps in a group are granted permissions to perform all actions on any pipeline associated with that group. There is no support for users or apps with limited permissions. For example, you cannot have a reduced role that can only view pipeline status but not start and stop a pipeline. Limit the users in a pipeline's group to only those users who should have full control over the pipeline.

HERE Optimized Map for Location Libraries

HERE Optimized Map for Visualization v1

Deprecated

  • The Optimized Map for Visualization v1 catalog has been deprecated and will not be updated. It will be retired on May 31, 2019. We recommend that you start using the Optimized Map for Visualization v2 catalog (hrn:here:data:::here-optimized-map-for-visualization-2). The main difference between the two catalogs is the data schema used for the vector tiles: the v2 catalog follows the open TileZen schema. Minor changes have been made to the content, including more building footprints and more carto attributes.

If you have been using the Visualization Library that is bundled with the SDK, we recommend that you update to the latest version which will automatically point to Optimized Map for Visualization v2.

If you were using Optimized Map for Visualization with a map renderer of your choice, then you may need to make some adaptations to your map styles.

Marketplace

Fixed

  • Fixed an issue where Marketplace Consumer managers could not revoke data access once it was granted to internal users.
  • Fixed an issue where an info request record for a semi-private data listing would disappear from the Marketplace Consumer Request tab after the request was accepted by the provider.
  • Fixed an issue where, when a Marketplace Consumer manager shared their licensed data catalog with internal users, there could be a delay before those users could see the catalog in their data area.

Known Issues

Issue

When a Marketplace Provider grants data access to a Marketplace Consumer, there could be a delay before the Consumer can see the data catalog in their licensed data area.

Workaround

Contact technical support if the Consumer Management Group cannot see the licensed data after more than 2 hours has passed.


Issue

Marketplace users do not receive stream data usage metrics when reading data via the direct Kafka connector.

Workaround

To receive usage metrics for writes, you must write data to the stream layer using the Ingest REST API.

To receive usage metrics for reads, you must read from the stream layer using the Data Library configured with the HTTP connector type.


Issue

When the Splunk server gets busy, it is possible for the server to lose usage metrics.

Workaround

If you suspect you are losing usage metrics, contact technical support as soon as possible. We may be able to help rerun queries and validate data.


Issue

If you are a Workspace user and have pipelines that use the Kafka Direct connector to connect to stream data, no messages-in and bytes-in metrics can be collected.

Workaround

You can instrument your pipelines with custom messages-in and bytes-in metrics. Contact technical support if you need assistance.


Issue

After a Catalog is marked "Marketplace Ready", the Catalog may take up to 2 hours to appear in the Provider Management Group's list of Catalogs.

Workaround

Contact technical support if the Provider Management Group cannot see the Marketplace Ready catalog after more than 2 hours has passed.

Monitoring and Alerts

Added

  • [Data] Added Index Layer metrics to the Data, Catalog and Layer Metrics dashboard so you can review your usage of this new layer type.

For more information, see Data, Catalog and Layer Metrics.

  • [Data] Improved the Ingestion Metrics dashboard to provide more robust and helpful monitoring of your OLP data ingestion. Note that the new dashboard still includes a number of SDIP (legacy ingestion mechanism) metrics to help early OLP adopters see metrics for both that system and OLP ingestion while migration from legacy systems to OLP is still in progress.

For more information, see Ingestion Metrics.

  • Added Data Service IO metrics to the Current Usage Metrics dashboard in Grafana. These metrics allow you to view and monitor data transfer (IO) to and from the platform.

For more information, see Current Usage Metrics.

Known Issues

Issue

When the Splunk server gets busy, it is possible for the server to lose usage metrics. If you suspect that you are losing usage metrics, contact technical support as soon as possible. We may be able to help rerun queries and validate data.


Issue

Compute, Storage, and Transfer metrics are not fully available within Monitoring and Alerts.


Issue

[Pipelines] Any changes made by the user to the Pipeline Status Dashboard will be lost when updates to the dashboard are published in future releases.

Workaround

Duplicate the dashboard or create a new dashboard.

Web & Portal

Added

  • Added an integrated tool for customer support. Now you can submit and edit tickets and check their status. To check out the new support tool, click Contact us in the support panel on platform.here.com.
  • Added a knowledge base where you can search frequently asked questions and suggest new topics. To check out the new knowledge base, click Contact us in the support panel on platform.here.com.
  • The setup process for a new organization on OLP has been automated. This once took a week and now takes hours.

Changed

  • In usage reporting, the Cloud Service "Versioned Storage" has been renamed to "Blob Storage". Data (not metadata) for both versioned layers and the newly-launched index layers is stored in blob storage and accounted for in the Blob Storage line item in usage reporting.
  • Moved the FAQ content that was previously available on openlocation.here.com to the new knowledge base.

Fixed

  • [Pipelines] Fixed an issue where the system forgot the number of executors, run-time environment, and entry point class name when a new JAR was uploaded to create a pipeline version.
  • [Pipelines] Fixed an issue where the option to remove a pipeline version was available even while it was being activated.
  • [Pipelines] Fixed an issue where a copy of an existing batch pipeline version could use the same catalog for both input and output.
  • [Pipelines] Fixed an issue where a second pipeline version showed a job or operation failure when activated using the CLI, even though another pipeline version for that pipeline was already active.

Known Issues

Issue

Some PDF documentation has formatting errors.


Issue

The Portal and notebooks are not compatible with Internet Explorer 11.


Issue

[Pipelines] The pipelines list page is sometimes slow to load.


Issue

[Pipelines] The Portal can't be used to delete pipeline templates.

Workaround

Use the CLI or API to delete pipeline templates.


Issue

[Pipelines] The Portal can't be used to force a batch pipeline version to run. It will run on its own when the input catalogs' version is updated.

Workaround

Use the CLI or API to force a batch pipeline version to run, or wait for the input catalogs' version to update. For more information, see the Troubleshooting section in the Pipelines Developer Guide.


Issue

[Pipelines] The custom runtime configuration for a pipeline version has a limit of 64 characters for the property name, and a limit of 255 characters for the value.

Workaround

For the property value, there is no workaround. For the property name, you can define a shorter name in the configuration and map that to the actual, longer name within the pipeline code.
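The property-name workaround above can be sketched as follows (the short and long property names are hypothetical examples; the mapping lives in your own pipeline code):

```python
# Map short property names (within the 64-character limit of the custom
# runtime configuration) to the longer names the pipeline code expects.
# All names here are hypothetical examples.
NAME_MAP = {
    "db.url": "com.example.pipeline.datastore.connection.url",
    "retry.max": "com.example.pipeline.ingestion.retry.maximum.attempts",
}

def expand_properties(runtime_config: dict) -> dict:
    """Translate short configured names to the full names used in code."""
    return {NAME_MAP.get(k, k): v for k, v in runtime_config.items()}

cfg = expand_properties({"db.url": "jdbc:postgresql://host/db", "retry.max": "5"})
assert "com.example.pipeline.datastore.connection.url" in cfg
```

Unmapped names pass through unchanged, so the same lookup works for properties that already fit within the limit.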


Issue

[Pipelines] In the Portal, new jobs and operations are not automatically added to the list of jobs and operations for a pipeline version while the list is open for viewing.

Workaround

Refresh the Jobs and Operations pages to see the latest job or operation in the list.


Issue

[Data] Data visualization in the Portal is only available for versioned and volatile layers.

Workaround

Create your own visualization for your own schemas by creating a GeoJSON renderer and making it part of the schema. Out of the box, only data formatted according to the following schemas can be visualized:

  • GeoJSON
  • HERE Reality Index Topology
  • HERE Reality Index Building Footprints
  • HERE Reality Index Cartography
  • HERE Traffic Flow
  • SDII
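For reference, a minimal GeoJSON feature of the kind a custom renderer would emit (the property values and coordinates are illustrative):

```python
import json

# A minimal GeoJSON Feature; a custom renderer attached to a schema would
# emit features shaped like this for the Portal to visualize.
feature = {
    "type": "Feature",
    "geometry": {
        "type": "Point",
        "coordinates": [13.4050, 52.5200],  # [longitude, latitude], illustrative
    },
    "properties": {"event": "pothole-detection"},
}

print(json.dumps(feature))
```

Note that GeoJSON orders coordinates as longitude first, then latitude.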