Introducing Warp 10 3.0!

We have just released the first Alpha version of Warp 10 3.0. This article explains what has changed and how you can easily test it.

When we released Warp 10 2.11 last August, we announced that we would be focusing on the next major release. Some seven months after this announcement, we are very happy to have released the first Alpha version of Warp 10 3.0. This first alpha release is available on GitHub, and this article will give you all the information you need to test-drive it.

Introduction

Keep in mind this is an alpha release and as such is not meant for production use.

Our intent is to release new alpha versions weekly until we reach stability, with the hope of releasing the first beta around mid-May. We will then take around six weeks to stabilize the beta and aim for a GA (General Availability) release of Warp 10 3.0 at the end of June or the beginning of July. The goal is to have a first release in time for the summer so you can use that period of slower activity to migrate your systems.

We count on your active testing of these alpha and beta releases so we can track down any remaining issues.

What's new

If you attended the Ask Me Anything session we held at the beginning of March, you already know what we packed into release 3.0, but let me recap the main changes.

Warp 10 is now compatible with Java 8 and above, so if your policy is to use the most recent LTS (Java 17 at the time of writing), you can now do so. If you want to continue using JDK 8, that is also perfectly fine. We will make sure the next LTS (JDK 21) is supported shortly after it comes out next September.

On the security front, we replaced the use of secrets with capabilities, and we encourage every extension author to do the same for improved security.

WarpScript underwent a global clean-up with some redundant and deprecated functions being removed, and a lot of improvements which should lead to better performance, especially when working with large series. We also integrated some code which was previously only available with a commercial license.

The Datalog replication mechanism also went through a major overhaul, improving performance and operations of the replication process. The changes are so numerous that a dedicated article will cover them in detail.

Lastly, the distributed version of Warp 10 has seen some changes. The first one is the bump in the version of Kafka that Warp 10 supports: the 1.0+ API is now used, meaning that any version of Kafka compatible with it can be used, so probably 0.10.0 onwards. Warp 10 itself is built using Kafka 3.4.0.

The second, and probably most notable, change in the distributed version of Warp 10 is that we stopped using HBase and switched to FoundationDB. This change will require some work to embrace, but dropping the dependency on anything Hadoop was a recurring request from our users.

The ecosystem around FoundationDB has proven to be a very solid one, with some major players using it for their services, including Apple, Snowflake, and Wavefront (now part of VMware). Our experience so far with FoundationDB in Warp 10 is that it has greatly simplified operations, so this should be a very much appreciated change.

The various deployment options of Warp 10 3.0

Warp 10 3.0 offers multiple deployment options.

The standalone version is the most popular one: an instance of Warp 10 deployed on a single machine, or on a set of machines with replication among them. This version is backed by LevelDB and can be deployed on hardware with very limited resources while still providing very good performance. It is the deployment of choice for edge systems when you need data collection and analytics capabilities on the IoT devices themselves. The standalone version of Warp 10 is most commonly used when your number of series is below 100 million and your data is below 500 billion data points.

At the opposite end of the spectrum, the distributed version of Warp 10 is meant for deployment needs in the billions of series and trillions of data points, with the additional benefit that all components can be scaled horizontally to accompany your growth. This version uses Kafka and FoundationDB, and is therefore more demanding in terms of operations.

Historically, moving from the standalone to the distributed version required dumping the data and re-importing it into the new deployment. This proved complicated in setups where production could not be interrupted.

That is why Warp 10 3.0 brings a new deployment option that we named standalone+. It is a middle ground between the standalone and distributed versions, basically a standalone version but with storage managed by FoundationDB instead of LevelDB. This has two benefits. It can be scaled further than a standalone deployment when it comes to the number of data points. And when a single server can no longer cope with your load, that standalone-like deployment can be morphed into a distributed one by simply adding Kafka and replacing the single server with multiple servers handling the various roles, all without having to migrate data since it already resides in FoundationDB. A big win for growing organizations!

Test driving it

The following sections provide minimal instructions for deploying the various versions of Warp 10 3.0 so that they can be used for testing. These instructions are not meant to set up production-ready instances of Warp 10; you will need to fine-tune the configuration to adapt it to your specific needs.

Downloading and installing

The first step is to download the version you want to test. At the time of writing, only 3.0.0-alpha0 is available, but if you are reading this later, you should consider a more recent release, whether alpha, beta, or GA.

Download the .tar.gz of the release you selected and untar it in a directory.

Warp 10 3.0 can be launched by the user of your choice; just make sure the ownership of the expanded archive content is correct.

cd /var/tmp
curl -L -O 'https://github.com/senx/warp10-platform/releases/download/3.0.0-alpha0/warp10-3.0.0-alpha0.tar.gz'
tar zxpvf warp10-3.0.0-alpha0.tar.gz

Initial setup

Now that you have extracted the release from its archive, you can initialize your deployment, that is, instruct Warp 10 to create the configuration files for your selected flavor of Warp 10.

Position yourself in the directory created by the extraction and, as the user who performed the extraction, execute the following command:

./bin/warp10.sh init

The list of possible options to the init command will be displayed, inviting you to specify the type of deployment you want. We will document the standalone, standalone+, and distributed versions in the rest of this article. The in-memory version is identical to the standalone one, except that it stores the data in non-persistent memory.

Standalone

To set up a standalone instance of Warp 10, simply run

./bin/warp10.sh init standalone

This will create the configuration files in a state which is sufficient to run Warp 10, so no other step is required.

Standalone+

To create the configuration files for a standalone+ deployment, simply call

./bin/warp10.sh init standalone+

You then need to ensure that you have a FoundationDB cluster correctly set up and that it is accessible from the Warp 10 instance.

Warp 10 has been tested with versions 7.1.x of FoundationDB, so you need to have a cluster available with at least that version.

The creation of the database for Warp 10 is done via fdbcli with the following command:

configure new single ssd-redwood-1-experimental

Use of the redwood storage engine is highly recommended since it has prefix compression enabled and will therefore lead to a smaller footprint on disk.
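
If you prefer not to open an interactive fdbcli session, the same configuration can be applied from the shell. A minimal sketch, assuming the default location of the cluster file:

fdbcli -C /etc/foundationdb/fdb.cluster --exec 'configure new single ssd-redwood-1-experimental'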

Note that a detailed FoundationDB setup is out of the scope of this article; data replication, for example, is set to single in the command above because this is a test instance.

You need to copy the FoundationDB cluster file, typically /etc/foundationdb/fdb.cluster, to the etc directory of your Warp 10 deployment and ensure the file etc/conf.d/99-init.conf contains the line:

fdb.clusterfile = ${warp10.home}/etc/fdb.cluster

The fdb.cluster file must be writable by the user running the Warp 10 instance.
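
In practice this boils down to something like the following sketch, where /opt/warp10 as the Warp 10 home and warp10 as the user running the instance are assumptions to adapt to your setup:

cp /etc/foundationdb/fdb.cluster /opt/warp10/etc/fdb.cluster
chown warp10:warp10 /opt/warp10/etc/fdb.cluster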

Lastly, you need to ensure that the FoundationDB client library is installed on the machine running Warp 10. Install the foundationdb-clients package for your system. The released packages can be found on GitHub.
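
On a Debian-based system, for example, the installation could look like the sketch below, the 7.1.33 version number being a placeholder to replace with the release you actually retrieved:

curl -L -O 'https://github.com/apple/foundationdb/releases/download/7.1.33/foundationdb-clients_7.1.33-1_amd64.deb'
sudo dpkg -i foundationdb-clients_7.1.33-1_amd64.deb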

Distributed

To generate the configuration files for a distributed deployment execute the following:

./bin/warp10.sh init distributed

Proceed as in the standalone+ case to deploy your FoundationDB instance, copy its cluster file into etc, and ensure fdb.clusterfile is correctly defined. Also, make sure the foundationdb-clients package is installed on the machines running Warp 10.

You need to set the configuration key warp10.instance to a short name of your choosing. This will be used in the znode path where Warp 10 will create the znodes it needs.
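
For instance, with sandbox as a purely illustrative name, etc/conf.d/99-init.conf would contain:

warp10.instance = sandbox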

Once you have set this name, create the following znode in your ZooKeeper ensemble:

/zk/warp/<warp10.instance>

where <warp10.instance> is the value you assigned to warp10.instance in the configuration file 99-init.conf.
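
With the illustrative name used above, the znode can be created using the ZooKeeper CLI, assuming an ensemble reachable on 127.0.0.1:2181 and that the parent znodes do not exist yet:

zkCli.sh -server 127.0.0.1:2181 create /zk ''
zkCli.sh -server 127.0.0.1:2181 create /zk/warp ''
zkCli.sh -server 127.0.0.1:2181 create /zk/warp/sandbox ''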

Next, you need to create Kafka topics for Warp 10. The following topics need to be created:

data
metadata
runner
throttling

For a test setup, we will not worry about the number of partitions.
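
Assuming you keep the names listed above, the topics can be created with the tools shipped with Kafka, the broker address being a placeholder and a single partition being sufficient for a test:

for t in data metadata runner throttling; do
  kafka-topics.sh --bootstrap-server 127.0.0.1:9092 --create --topic "$t" --partitions 1 --replication-factor 1
done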

Lastly, you need to point your Warp 10 instance to your ZooKeeper and Kafka deployments by setting the key zk.quorum to the IP:PORT of your ZooKeeper ensemble and kafka.bootstrap.servers to the comma-separated list of IP:PORT pairs of your Kafka cluster.
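
In your configuration, for example in etc/conf.d/99-init.conf, this could look like the following, the addresses being placeholders:

zk.quorum = 127.0.0.1:2181
kafka.bootstrap.servers = 127.0.0.1:9092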

Starting Warp 10

Once the configuration of your instance is complete, you can start it by issuing the following command:

./bin/warp10.sh start

Generating tokens

Now that your Warp 10 instance is running, the last step is to generate tokens to interact with it.

This is again done simply using warp10.sh and what is called a token envelope. A demo envelope is stored in tokens/demo-tokengen.mc2; it will generate a set of test tokens valid for 14 days.

Launch the following command:

./bin/warp10.sh tokengen tokens/demo-tokengen.mc2

The output will look like this:

[{
  "ident" : "5693fa0214254580",
  "id" : "DemoWriteToken",
  "token" : "J1rGKHVV4qrRaD_sw74eKkScCKxTdkjAb1Xoa9aRMP6wJ.2cNdM4HdJd6NUrA53iJCofTQyajRR1r2JPKBt8HCngbi63YT.yAV_qCym9nzka2ALGmyU.rk"
},{
  "ident" : "c7fd9683169b05f5",
  "id" : "DemoReadToken",
  "token" : "0x1gjLxVrwJo7Gp8Pr11J79kt2bFkINam9.0mDduKlTPFNub4mViFmIZKXJocMU6Nzek11CRBoi6KOU.NhN33LwEG4Kz.EAU2LhiiWigTn4A4nXqm6yi2IClWgbv20s8MOOgG2n6ePDANAh38Q47r.0MNhZzCY5EVRqVP5zpJDJ"
}]

You can extract the demo read and write tokens and start experimenting with Warp 10 3.0.
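
As a quick sanity check, you can push a data point with the write token and read it back via WarpScript. The sketch below assumes a standalone instance listening on the default 127.0.0.1:8080 and uses an empty timestamp (//) so the platform assigns the ingestion time; replace WRITE_TOKEN and READ_TOKEN with the values extracted from the tokengen output:

curl -H 'X-Warp10-Token: WRITE_TOKEN' --data-binary '// test{} 42' 'http://127.0.0.1:8080/api/v0/update'
curl --data-binary "[ 'READ_TOKEN' 'test' {} NOW -1 ] FETCH" 'http://127.0.0.1:8080/api/v0/exec'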

Going further

In the coming weeks, we will publish articles about some features of Warp 10 3.0, so stay tuned for more information coming your way.

During your testing of Warp 10 3.0, we encourage you to join and interact with the community on the Warp 10 Lounge.