Integrations let your data flow in and out of the platform. Most importantly, you want your sensor data to flow into the platform continuously. To realize this you can use our built-in connectors for a range of commonly used technologies described below.

Additionally, you may have other systems, such as a work order management system, that you want to integrate. To achieve this you can make use of our HTTP (REST) API or receive events in real time via webhooks.

Sending measurements

There are multiple ways of sending measurements: they can be sent in batches over HTTP, streamed over MQTT, or published over CoAP. Another option is to install the agent in your own environment, which can read several commonly used protocols (such as OPC, BACnet, Modbus or I²C). Before we dive into the different protocols, we would first like to make an important note about measurement time to keep in mind when setting up an integration.

Important note about measurement time

We discourage rewriting history, as it can become hard to trace why something led to an event. We therefore recommend that measurements are sent to the platform in strictly increasing time order per subject. At the series level we go one step further and enforce this time order. We also disallow sending measurements with a future timestamp, as timestamps in the distant future could effectively render a series unusable, since history can never be rewritten.

To be specific, a measurement is marked invalid when:

  • The timestamp is identical (the exact same timestamp in milliseconds) to an earlier measurement within the series.
  • The timestamp is older than the latest measurement within the series.
  • The timestamp is more than an hour in the future.
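The three rules above can be sketched as a client-side check. This is a hypothetical helper, not part of the platform; `latestMs` is the newest timestamp already accepted for the series (or `null` for an empty series) and `nowMs` is the current time:

```javascript
// Sketch of the validity rules, assuming timestamps in epoch milliseconds.
function isValidTimestamp(timestampMs, latestMs, nowMs) {
  const ONE_HOUR_MS = 60 * 60 * 1000;
  if (latestMs !== null && timestampMs === latestMs) return false; // identical timestamp
  if (latestMs !== null && timestampMs < latestMs) return false;   // older than the latest
  if (timestampMs > nowMs + ONE_HOUR_MS) return false;             // more than an hour in the future
  return true;
}
```

Running such a check before sending avoids measurements being filtered out server-side.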

These best practices and requirements prevent scenarios where incoming old measurements could invalidate an event. As an event could already have led to e-mails and webhook calls, invalidating it would be highly impractical and hard to follow.

The following sequence of messages shows how this works in practice:

Message order  Series                     Measurement timestamp  Validity
1              Temperature of subject #1  2019-05-11, 09:15      Valid
2              Temperature of subject #1  2019-05-11, 09:17      Valid
3              Temperature of subject #1  2019-05-11, 09:16      Invalid
4              Humidity of subject #1     2019-05-11, 09:15      Valid, but not recommended
5              Temperature of subject #1  2019-05-11, 09:18      Valid

Note that in the example each measurement is sent as a separate message. It is better to make smart use of batching: within a single message the platform fixes the sorting for you. By batching the measurements of all metrics for each subject, for one or multiple timestamps, you can achieve strictly increasing time order per subject. In the example this would mean that at least the first and the fourth message would be combined into a single message.
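As a sketch of this batching approach (a hypothetical helper, assuming ingestion IDs of the form subjectExternalId$metricExternalId), grouping series by subject produces one message per subject, so the platform can sort all of that subject's metrics together:

```javascript
// Batch measurements of all metrics of one subject into a single message.
// Within one message the platform fixes the sorting, so strictly increasing
// time order per subject is preserved across messages.
function batchBySubject(seriesList) {
  const bySubject = new Map();
  for (const s of seriesList) {
    // Ingestion IDs are derived as subjectExternalId$metricExternalId
    const subject = s.ingestionId.split('$')[0];
    if (!bySubject.has(subject)) bySubject.set(subject, []);
    bySubject.get(subject).push(s);
  }
  // One message (a series array) per subject
  return [...bySubject.values()].map(series => ({ series }));
}
```

In the example above, the temperature and humidity series of subject #1 would end up in the same message.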

The platform filters out invalid measurements, but invalid measurements still count towards your usage statistics. Make sure you avoid sending invalid measurements to prevent unpleasant surprises in your usage details.


HTTP

Batches of up to 500 measurements can be sent via our HTTP API. To map measurements to subjects and metrics you can specify your own IDs. We call these ingestion IDs; by default they are derived from the subjects’ external IDs and metrics’ external IDs (e.g. subjectExternalId$metricExternalId), but you can also override them with custom ones. More information can be found in the API documentation.
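As an illustration of respecting the 500-measurement limit, the sketch below (a hypothetical helper, assuming the series/ingestionId/measurements message shape used elsewhere in this documentation) splits a series array into conforming batches; the actual endpoint and authentication are described in the API documentation:

```javascript
// Split measurements into HTTP batches of at most 500 measurements each,
// regrouping them under their ingestion IDs within each batch.
const MAX_BATCH_SIZE = 500;

function splitIntoBatches(seriesList) {
  const batches = [];
  let current = [];
  let count = 0;
  for (const s of seriesList) {
    for (const m of s.measurements) {
      if (count === MAX_BATCH_SIZE) {
        batches.push({ series: current }); // flush a full batch
        current = [];
        count = 0;
      }
      let target = current.find(t => t.ingestionId === s.ingestionId);
      if (!target) {
        target = { ingestionId: s.ingestionId, measurements: [] };
        current.push(target);
      }
      target.measurements.push(m);
      count += 1;
    }
  }
  if (count > 0) batches.push({ series: current });
  return batches;
}
```

Each element of the returned array is one request body.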


MQTT

MQTT is a reliable, lightweight protocol with a small code and network bandwidth footprint. The platform provides a native integration to publish measurements over MQTT. To connect an MQTT client the following configuration can be used:

Broker host:
Port number: 8883 (default secure MQTT port)
Protocol: Only TLS version 1.2 connections are accepted.
Username: The public ID of your access token.
Password: The secret of your access token.
Client ID: Must be prefixed with the public ID of your access token. Only one connection is allowed per client ID; when a connection is attempted while another connection is open, it is refused until the open connection is properly closed or times out after five seconds.
Topic: v1/projects/<YOUR_PROJECT_ID>/measurements. Note: your project ID can be found in the URL when you open your project in the web app.
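As an illustration, a publish with the mosquitto_pub CLI client could look as follows. The broker host, token values, client ID suffix, and CA certificate path are placeholders; check your system's CA bundle location:

```shell
# Illustration only: <BROKER_HOST>, token values and the CA file path are placeholders.
mosquitto_pub -h <BROKER_HOST> -p 8883 \
  --cafile /etc/ssl/certs/ca-certificates.crt --tls-version tlsv1.2 \
  -u <ACCESS_TOKEN_PUBLIC_ID> -P <ACCESS_TOKEN_SECRET> \
  -i "<ACCESS_TOKEN_PUBLIC_ID>-device1" \
  -t "v1/projects/<YOUR_PROJECT_ID>/measurements" \
  -f ./measurements.json
```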

Messages need to be a UTF-8 encoded JSON string and need to contain the ingestion ID, either derived from the subject’s external ID and metric’s external ID (e.g. subjectExternalId$metricExternalId) or a custom one that has been defined. Additionally, a message needs to contain one or more measurements with a date (as epoch timestamp in milliseconds or as ISO 8601 date string) and a number or location value. Providing a date is optional; the maximum supported date is 9999-12-31 23:59:59.999 UTC. If no date is provided, the receive time of the server is used. This is an example of a valid message:

  {
    "series": [
      {
        "ingestionId": "MyHouse$LivingRoomTemperature",
        "measurements": [
          {
            "date": "2019-04-09T12:00:00.000+00:00",
            "number": 20.3
          }
        ]
      },
      {
        "ingestionId": "MyCar$Location",
        "measurements": [
          {
            "date": 1563049273812,
            "location": {
              "lat": 51.9250768,
              "lon": 4.4735718,
              "alt": 12.1
            }
          }
        ]
      }
    ]
  }
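Before publishing, a client could validate a message against the documented constraints. This is a hypothetical sketch, not a platform API: it checks that each series has an ingestion ID and at least one measurement, that each measurement carries exactly one of a number or a location value, and that any date stays within the supported maximum:

```javascript
// Maximum supported date: 9999-12-31 23:59:59.999 UTC, in epoch milliseconds.
const MAX_DATE_MS = Date.UTC(9999, 11, 31, 23, 59, 59, 999);

function isValidMessage(message) {
  if (!Array.isArray(message.series) || message.series.length === 0) return false;
  return message.series.every(s =>
    typeof s.ingestionId === 'string' &&
    Array.isArray(s.measurements) && s.measurements.length > 0 &&
    s.measurements.every(m => {
      const hasNumber = typeof m.number === 'number';
      const hasLocation = typeof m.location === 'object' && m.location !== null;
      if (hasNumber === hasLocation) return false; // exactly one value type
      if (m.date === undefined) return true;       // date is optional
      const ms = typeof m.date === 'number' ? m.date : Date.parse(m.date);
      return Number.isFinite(ms) && ms <= MAX_DATE_MS;
    })
  );
}
```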


CoAP

The Constrained Application Protocol (CoAP) integration is in Beta. CoAP is a simple protocol with low overhead, designed for devices with constrained resources such as wireless sensors operating on a battery. The Blockbax Platform provides a native integration to publish measurements over CoAP. To connect your CoAP device the following configuration can be used:

CoAP server URL: coaps://
Port number: 5684 (default secure CoAP port)
Protocol: Only DTLS version 1.2 connections are accepted. Note: DTLS is based on TLS 1.2 but uses UDP instead of TCP. No certificates or pre-shared keys need to be configured for the client; we use one-way authentication using X.509 certificates.
URL path: /v0/projects/<YOUR_PROJECT_ID>/measurements. Note: your project ID can be found in the URL when you open your project in the web app.
Request method: POST
Content type: application/json
Query parameters: apiKey=<ACCESS_TOKEN_SECRET>. Note: <ACCESS_TOKEN_SECRET> is the secret of your access token.

The body of the POST request is identical to that of the HTTP API. An example can be found in the HTTP API documentation.

An example of a valid request using the libcoap CLI client looks as follows:

coap-client -m post -t "application/json" -f ./measurements.json "coaps://"

The Things Network

The Things Network provides an open-source network where LoRaWAN devices can be connected. The Blockbax Platform provides a native integration with The Things Network, such that your devices can publish measurements to your Blockbax project. To connect a device on The Things Network take the following steps:

  1. Register your device. Note: Make sure your Blockbax project contains subjects with external IDs matching the device IDs.

  2. In the top menu of your application select ‘Payload Formats’

  3. From the dropdown menu ‘Payload Format’ select ‘Custom’

  4. Add a ‘decoder’ to unravel your device’s byte payload and return a JavaScript object containing the numeric properties you want to send

  5. Add a ‘converter’ according to the template below. Note: Make sure your Blockbax subject type contains metrics with matching external IDs. The server received time is used as measurement timestamp (the field metadata.time from the Uplink JSON).

    function Converter(decoded, port) {
      return {
        // For numeric metrics
        <BLOCKBAX_EXTERNAL_METRIC_ID>: decoded.exampleNumber,
        // For location metrics
        <BLOCKBAX_EXTERNAL_METRIC_ID>: {
          lat: decoded.exampleLatitude,
          lon: decoded.exampleLongitude
        }
      };
    }
  6. Now we will set up the HTTP integration. In the top menu select ‘Integrations’.

  7. Click on ‘add integration’

  8. Select the HTTP integration and fill in the following fields:

    Process ID: Your own identifier
    Access Key: Your The Things Network access key used for downlink
    URL: <YOUR_PROJECT_ID>/integrations/53c06128-ce26-4dcc-8d74-5c45b302b525/measurements. Note: your project ID can be found in the URL when you open your project in the web app.
    Method: POST
    Authorization: ApiKey <ACCESS_TOKEN_SECRET>. Note: <ACCESS_TOKEN_SECRET> is the secret of your access token.

Once you’ve added the integration, check your Blockbax project to see the measurements coming in.
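The decoder in step 4 depends entirely on your device's byte payload, so no single example applies to every device. Purely as an illustration (the byte layout and the `exampleNumber` property are assumptions, matching the converter template above), a decoder for a payload whose first two bytes are a signed big-endian temperature in tenths of a degree could look like this:

```javascript
// Illustrative decoder only: the real byte layout depends on your device.
// bytes[0..1] are assumed to be a signed big-endian temperature in 0.1 °C.
function Decoder(bytes, port) {
  let raw = (bytes[0] << 8) | bytes[1];
  if (raw & 0x8000) raw -= 0x10000; // sign-extend negative values
  return {
    exampleNumber: raw / 10
  };
}
```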

Edge processing

Our APIs are architected using open standards and protocols which can be used with any type of hardware and software. To help you connect, configure and control your edge devices, a variety of tools is available. A widely used low-code open-source tool is Node-RED, for which there is a default Blockbax integration.

Node-RED integration Beta

Node-RED is a browser-based editor that makes it easy to wire together flows using the wide range of nodes in the palette, which can be deployed to its runtime with a single click. You can find more information about using Node-RED here. Here is an example of how Node-RED can be used to set up an OPC UA client and connect incoming data to Blockbax.

setting up OPC UA

After you have installed Node-RED, the Blockbax nodes can easily be installed using the palette manager. More information, e.g. on how to install them manually, can be found in the library documentation. Here is how to install the nodes:

Installing the Blockbax nodes with the palette manager

There are a number of different nodes available. The first node can be used to parse measurement data to a common format which can be used by the other nodes. Once this is done, you can use nodes to aggregate the resulting measurements and send this data to our platform.


Parse node

The Parse node can take a msg from another node and create a new object that can be used by the other Blockbax nodes. You can configure which message properties represent the date, the value and the ingestion ID. There is also an option to generate a timestamp on input if the message has no timestamp property. The Parse node adds a series property to the message and passes it on.

Here is how to set up a Parse node:

setting up parse node

The Parse node has three configurable properties, excluding the Name property.

  • Ingestion ID: Can either be a static Ingestion ID or a dynamic Ingestion ID by changing the type to msg.
  • Date: Can either be a message property or it can be set to use the current date.
  • Value: Measurement value. Usually values are stored in the msg.payload property.

Aggregate node

The Aggregate node applies an aggregate function over a specified time period.

Here is how to set up an Aggregate node.

setting up aggregate node

The Aggregate node has two configurable properties, excluding the Name property.

  • Function: Aggregation function to apply.
  • Period: Time period to aggregate over.

Send node

The Send node sends batches of measurements at a specified time interval. Data can be sent at most once per second. The available send options are:

  • HTTP: Recommended if irregular batches of data are being collected.
  • MQTT: Recommended if the incoming data is a frequent and regular stream of data points.

Here is how to set up a Send node.

setting up send node

The Send node has three configurable properties, excluding the Name property.

  • Project: Configuration of your project ID and access token information. You can also check the ‘Use environment variables’ checkbox to use the environment variables PROJECT_ID, ACCESS_TOKEN_PUBLIC_ID and ACCESS_TOKEN_SECRET instead.
  • Type: Type of connection used.
  • Batch Period: Period for which measurements are batched before they are sent.

Connecting your apps

You may have other systems, such as a work order management system, that you want to integrate. To achieve this you can make use of our HTTP API. Additionally, webhooks can be used to receive events in real time.


HTTP API

The main Blockbax API is an HTTP API which provides programmatic access to all of the platform’s operations. It is described in a separate section and requires knowledge of HTTP API concepts. In the case of our API this means that URLs are resource-oriented and HTTP status codes are used to indicate success or failure. Data returned from our API is in JSON format for all requests.

More info can be found in the separate API documentation.


Webhooks

Webhooks are developed to send events to your application(s) in real time. You can see them as a reversed API: you provide your endpoint and the platform makes sure you get the events you are interested in once they occur. The configuration possibilities via the web client can be found in the project settings section of this documentation. Another option is to configure webhooks programmatically via our HTTP API. The message coming from the webhook looks like this:

  {
    "sequenceNumber": 25618,
    "deliveryAttempt": 1,
    "eventId": "06811617-4836-48d1-a09b-42e624db9ekg",
    "eventTriggerId": "cb99d792-e994-4a18-bf75-5d39f09d7ekg",
    "eventTriggerVersion": 10,
    "eventLevel": "WARNING",
    "subjectId": "ad55bc6a-094c-4f6b-9cfe-871168cfeekg",
    "date": "2019-09-17T09:33:15.075+0000"
  }

Retry logic

In case a call fails (due to a non-2xx response code or exceeding a timeout of 5 seconds), we will try 4 more times: after 5 seconds, 1 minute, 5 minutes and 15 minutes. If all of those attempts also fail, it is counted as a non-successful delivery. To make this transparent to the receiving system, all messages contain the field deliveryAttempt, which can be used to monitor at which delivery attempt the call was received. If no retries were needed, deliveryAttempt is equal to 1. Calls that failed can be retrieved programmatically via our HTTP API, for example to reprocess them after a long period of downtime.
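Because a retry re-delivers the same event, a receiving system is typically made idempotent. A hypothetical receiver-side sketch (not part of the platform) deduplicates on the eventId field:

```javascript
// Each retried call carries the same eventId with a higher deliveryAttempt,
// so tracking seen eventIds ensures every event is processed exactly once.
const seenEventIds = new Set();

function handleWebhookCall(message, process) {
  if (seenEventIds.has(message.eventId)) {
    return false; // duplicate delivery caused by a retry
  }
  seenEventIds.add(message.eventId);
  process(message);
  return true;
}
```

In a real deployment the set of seen IDs would live in persistent storage rather than memory.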

Ordering guarantees

Date ordering is guaranteed for all events related to a specific event trigger and subject. We ensure that calls are made in the right order; however, the order in which they are received can differ due to network problems and the asynchronous nature of the platform. To make this transparent to the receiving system, all messages contain an incrementing sequenceNumber which can be used to derive the order in which messages were sent.
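A receiver that needs to process messages in send order can use the sequenceNumber to buffer messages that arrive ahead of a gap. This is a hypothetical sketch, assuming the receiver knows the first sequence number to expect:

```javascript
// In-order dispatcher: holds back messages whose predecessors (by
// sequenceNumber) have not arrived yet, and releases them once they do.
function makeInOrderDispatcher(firstSequenceNumber, process) {
  let next = firstSequenceNumber;
  const pending = new Map();
  return function receive(message) {
    pending.set(message.sequenceNumber, message);
    while (pending.has(next)) {
      process(pending.get(next));
      pending.delete(next);
      next += 1;
    }
  };
}
```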