Helidon 4 SE

Helidon 4 is a micro-service framework that purports to make our lives slightly better: fast to develop with, fast at runtime, and light on the mind. Is it any good?

Ο χρόνος είναι ακριβός - time is valuable

New Year, New Framework?

I am quite tired of large Java frameworks that purport to simplify the average backend-writing life but instead add a massive mental model - basically the software equivalent of a very cool and polished ball of mud.

In small micro-services we are rarely doing anything more advanced than firing off a query, lightly polishing the result set, mapping it to JSON or some other web-friendly format, and spitting it out over some wire protocol (HTTP/queues/events). Helidon 4 promises to make the trickier things simple in Java while still being highly performant and flexible.

Recall Wirth’s Law:

software is getting slower more rapidly than hardware is becoming faster

It is exciting to envision things becoming simpler even in the Java ecosystem. Let’s explore!

A wild micro-services framework appears!

As part of my dislike for oversize frameworks, I’m always on the lookout for lighter ones. Helidon 4 aims to be quite close in usage to how it feels to use the regular Java SDK. Further, it has recently been updated to be Loom-friendly (virtual threads everywhere!), which puts it on the “Ooh, shiny” radar.

It comes in two flavors: Helidon SE and Helidon MP. The first is plain Java; the latter is an Eclipse MicroProfile-friendly edition built on annotations, heavy use of dependency injection and the usual spirit-binding magic - the bag of tricks I think we can do without more often than we dare. This blog post will focus exclusively on SE.

The project FAQ states:

Helidon SE is based on our own reactive APIs. It uses a functional style of programming with almost no annotations. You’re just calling methods on plain old Java objects (POJOs). No magic!

Great. Despite officially being titled a consulting software wizard, I must confess I vastly prefer the “No magic!” approach to software engineering over declarative XML incantations that spirit together a CRUD service from unholy XSDs and class hierarchies at an undefined compile-runtime boundary, which tends to explode when the behavior of some @BindAnnotationIncantation changes in a minor version.

Reading the white paper, I get the impression that Helidon1 started out as an internally dogfooded Oracle project that was blessed with being open sourced. Cool cool.

Hopefully, it’s fast like its unladen namesake - Nomen est omen2. The Helidon 4 Web server claims to be the world’s first web server written from scratch to exploit virtual threads. This plus a large degree of GraalVM native-image compatibility (ELF time!) makes it seem quite refreshing.

I’ll go through the framework as I build something, with two questions in mind: will these things make a (noticeable) difference in lightness? Will it feel more like a joy to develop with?

Getting started

The Helidon documentation seems at first glance to be a classic huge reference website with a deluge of guides; however, to my great delight, it’s actually lots of small, well-written, light articles. Smooth. We continue our exploration.

The quick-start guide is refreshingly short, and the steps to get running are just as simple:

  1. invoke an mvn archetype:generate command (roughly as sketched below) and out pops a project.
  2. cd into it and run mvn package.
  3. java -jar target/<artifactId>.jar
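
For the record, the archetype invocation looks roughly like this - these are the coordinates of the SE quickstart archetype; the version and the groupId/artifactId/package values are placeholders you’d swap for your own:

$ mvn -U archetype:generate -DinteractiveMode=false \
    -DarchetypeGroupId=io.helidon.archetypes \
    -DarchetypeArtifactId=helidon-quickstart-se \
    -DarchetypeVersion=4.0.2 \
    -DgroupId=com.example \
    -DartifactId=yinzhen \
    -Dpackage=com.example.yinzhen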

Well, that’s nice. No strange build-tool invocations, no weird Maven plugins needed to run things, just a plain java -jar. Alright. I’ve created a small project named after my current tea choice: baihao yinzhen3.

Checking the generated pom.xml, there is a non-negligible number of direct dependencies, but all seem, in theory, to be quite reasonable defaults for a starter micro-service: a web server, config-yaml, a web client, the Jakarta JSON API, http-media-jsonp, and some metrics and health-check libraries. Plus some <scope>test</scope> libraries like JUnit.

Well, let’s run it!

$ java -jar target/yinzhen.jar
2024.01.05 00:09:52.523 Helidon SE 4.0.2 features: [Config, Encoding, Health, Media, Metrics, Observe, WebServer]
2024.01.05 00:09:52.528 [0x284e3cba] http://0.0.0.0:8080 bound for socket '@default'
2024.01.05 00:09:52.555 Started all channels in 35 milliseconds. 545 milliseconds since JVM startup. Java 21.0.1+12-LTS
WEB server is up! http://localhost:8080/simple-greet

Hot damn. That was fast. A GET request works fine. We get hello world back. Let’s see with Graal.

Switching to GraalVM is as simple as running sdk use java 21.0.1-graalce if you’re using SDKMAN!. Now to build our native image, we should only have to run mvn package -Pnative-image. Let’s go!

$ mvn package -Pnative-image
[...]
Produced artifacts:
 /home/billy/projects/yinzhen/target/yinzhen (executable)
========================================================================================================================
Finished generating 'yinzhen' in 1m 20s.
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  01:28 min
[INFO] Finished at: 2024-01-05T00:17:52+01:00
[INFO] ------------------------------------------------------------------------

Hm, it seems GraalVM native image has gotten much faster than I remember, in addition to no longer being packaged separately. Lovely - and all of this on my four-year-old ThinkPad, whose fans are starting to sound slightly strained.

$ ./target/yinzhen
2024.01.05 00:19:35.633 Logging at runtime configured using classpath: /logging.properties
2024.01.05 00:19:35.653 Helidon SE 4.0.2 features: [Config, Encoding, Health, Media, Metrics, Observe, WebServer]
2024.01.05 00:19:35.653 [0x5e24dfe8] http://0.0.0.0:8080 bound for socket '@default'
2024.01.05 00:19:35.653 Started all channels in 0 milliseconds. 22 milliseconds since JVM startup. Java 21.0.1+12-jvmci-23.1-b19
WEB server is up! http://localhost:8080/simple-greet

An excellent speed-up. You’d be forgiven for thinking this was written in Crystal or Go.

Health checks

Helidon comes with built-in health checks for deadlocks, disk space and available heap memory. Alright, that seems fair. A curl call to the endpoint nets us:
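
For reference, the call itself - the -v flag is what makes curl print the request and response headers shown below:

$ curl -v http://localhost:8080/observe/health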

> GET /observe/health HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/8.5.0
> Accept: */*
>
< HTTP/1.1 204 No Content
< Date: Fri, 5 Jan 2024 00:42:58 +0100
< Connection: keep-alive
< Content-Length: 0
<
* Connection #0 to host localhost left intact

Per the docs, that means everything is UP! and good. Well, what if we want details? We need to change the default server configuration to expose them. In src/main/resources/application.yaml we add the features config node.

server:
  port: 8080
  host: 0.0.0.0
  features:
    observe:
      observers:
        health:
          details: true
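
For the curious, the same thing can be expressed without YAML when assembling the server in Java. A rough sketch, assuming the ObserveFeature and HealthObserver builders from the io.helidon.webserver.observe packages - names quoted from memory, so check the Javadoc before copy-pasting:

import io.helidon.webserver.WebServer;
import io.helidon.webserver.observe.ObserveFeature;
import io.helidon.webserver.observe.health.HealthObserver;

// Programmatic equivalent of the YAML above (a sketch, not verified against 4.0.2).
var observe = ObserveFeature.builder()
        .addObserver(HealthObserver.builder()
                .details(true)          // expose per-check details, as in the YAML
                .build())
        .build();

WebServer.builder()
        .port(8080)
        .addFeature(observe)
        .routing(Main::routing)
        .build()
        .start();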

We recompile and re-run. Thanks to the GraalVM build only taking 1m 20s, I’ll do it with a native-image generation again. Oh no! Enabling details in our health checks has added 1 ms to startup!

$  ./target/yinzhen
2024.01.05 00:48:00.317 Logging at runtime configured using classpath: /logging.properties
2024.01.05 00:48:00.338 Helidon SE 4.0.2 features: [Config, Encoding, Health, Media, Metrics, Observe, WebServer]
2024.01.05 00:48:00.339 [0x449d9103] http://0.0.0.0:8080 bound for socket '@default'
2024.01.05 00:48:00.339 Started all channels in 1 milliseconds. 23 milliseconds since JVM startup. Java 21.0.1+12-jvmci-23.1-b19
WEB server is up! http://localhost:8080/simple-greet

Let’s see what we get now using curl http://localhost:8080/observe/health.

{
  "status": "UP",
  "checks": [
    {
      "name": "diskSpace",
      "status": "UP",
      "data": {
        "total": "800.03 GB",
        "percentFree": "31.09%",
        "totalBytes": 859020607488,
        "free": "248.76 GB",
        "freeBytes": 267099226112
      }
    },
    {
      "name": "heapMemory",
      "status": "UP",
      "data": {
        "total": "50.06 GB",
        "percentFree": "99.96%",
        "max": "50.06 GB",
        "totalBytes": 53750005760,
        "maxBytes": 53750005760,
        "free": "50.04 GB",
        "freeBytes": 53726412800
      }
    }
  ]
}

A quick reply shows my SSD is getting low on space but heap memory is fine (what a small 50 GB micro-service).

Let’s build a small service

I want to shove image data at it and have things returned. Not sure what exactly, but we’ll figure something out.

Reading the web-server docs, I’m presented with two choices: implement a handler method, or write a class implementing the io.helidon.webserver.http.HttpService interface with multiple handlers. A handler function is nice and easy in many cases, but let’s implement a service to see how messy, if at all, we have to get our hands.
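
The handler-method flavor is just a lambda on the routing builder - the same shape as the simple-greet route the archetype generates (the /hello path below is made up):

// A complete route in one line: no class, no annotations.
routing.get("/hello", (req, res) -> res.send("hello, tea drinkers"));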

I write up a PictureService that calls on a PictureController class to save things to disk.

package com.redpilllinpro.baihao.yinzhen;

import io.helidon.http.Status;
import io.helidon.webserver.http.HttpRules;
import io.helidon.webserver.http.HttpService;
import io.helidon.webserver.http.ServerRequest;
import io.helidon.webserver.http.ServerResponse;
import java.io.*;

/**
 * We'll do something with pictures
 */

class PictureService implements HttpService {
  private PictureController pics = new PictureController();

  @Override
  public void routing(HttpRules rules) {
    rules.post("/", this::postHandler);
  }

  private void postHandler(ServerRequest req, ServerResponse res) {
    if (!req.content().hasEntity()) {
      res.status(Status.BAD_REQUEST_400).send("You have to upload something!");
      return; // don't fall through and try to save a missing entity
    }
    try {
      var uuid = pics.saveImage(req.content().inputStream());
      res.header("Location", "/pictures/" + uuid);
      res.status(Status.CREATED_201)
          .send("Thanks!");
    } catch (IOException e) {
      responseServerError(e, res);
    }
  }
  private void responseServerError(Object meta, ServerResponse res) {
    System.err.println(meta);
    res.status(Status.INTERNAL_SERVER_ERROR_500).send("Oops!");
  }
}

Well, that wasn’t much. The only thing I’m really required to implement is a routing function.

Now I’ll create a PictureController to hold our save-image method, and then write some very goofy I/O code that ought to be very blocking. We’re creating a file, writing out the stream from the request, doing a peek-read using MediaTypes.detectType(savePath), then serializing a Java record (!) to disk as well. Why? Why not!

package com.redpilllinpro.baihao.yinzhen;

import io.helidon.common.media.type.MediaTypes;

import java.io.*;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.Optional;
import java.util.UUID;

public class PictureController {
    String saveImage(InputStream rawImageStream) throws IOException {
        var uuid = UUID.randomUUID().toString();
        var savePath = Path.of("./", "localstorage", uuid);
        Files.createFile(savePath);
        Files.copy(rawImageStream, savePath, StandardCopyOption.REPLACE_EXISTING);
        var type = MediaTypes.detectType(savePath).orElse(MediaTypes.APPLICATION_OCTET_STREAM);
        var size = Files.size(savePath);
        saveMetadata(new imageMetadata(uuid, type.text(), size));
        return uuid;
    }
    void saveMetadata(imageMetadata meta) {
        var path = Path.of("./", "metastorage", meta.identity + ".meta");
        try(var outMeta = new ObjectOutputStream(new FileOutputStream(path.toFile()))) {
            outMeta.writeObject(meta);
            System.out.println("Saved [" + meta + "] to " + path);
        } catch (IOException e) {
            System.err.println("Couldn't save metadata for " + meta.identity);
        }
    }
    record imageMetadata(String identity, String mediaType, long size) implements Serializable {}
}

We have to register our new service with the Main routing method. We modify Main.java a bit.4

(...)
    /**
     * Updates HTTP Routing.
     */
    static void routing(HttpRouting.Builder routing) {
        routing.register("/pictures", new PictureService())
               .register("/greet", new GreetService())
               .get("/simple-greet", (req, res) -> res.send("Hello World!"));
    }

Let’s see if this works! I compile this with GraalVM to a native image and POST a cat-meme.png I had lying around: curl http://localhost:8080/pictures --data-binary @cat-meme.png.

$ ./target/yinzhen
2024.01.08 23:25:54.008 Logging at runtime configured using classpath: /logging.properties
2024.01.08 23:25:54.052 Helidon SE 4.0.2 features: [Config, Encoding, Health, Media, Metrics, Observe, WebServer]
2024.01.08 23:25:54.053 [0x3b01a0d0] http://0.0.0.0:8080 bound for socket '@default'
2024.01.08 23:25:54.053 Started all channels in 1 milliseconds. 72 milliseconds since JVM startup. Java 21.0.1+12-jvmci-23.1-b19
WEB server is up! http://localhost:8080/pictures
Saved [imageMetadata[identity=599da94d-b41d-4ca1-9bc6-9d3804b4bcac, mediaType=image/png, size=1055902]] to ./metastorage/599da94d-b41d-4ca1-9bc6-9d3804b4bcac.meta

Well, that’s nice. I’ll add a handler to do the same for GET, so we modify the PictureService class again, updating the routing method to handle a GET.

@Override
public void routing(HttpRules rules) {
  rules
      .post("/", this::postHandler)
      .get("/{uuid}", this::getHandler);
}

We can now pull the uuid parameter directly from an inbound ServerRequest object by invoking path().pathParameters().get("uuid").5 Note that this all sits under the /pictures path as we wired it in Main::routing, so our new GET endpoint has the full path /pictures/{uuid}.

After a bit of writing, I have now extended both the service and the I/O class to handle read operations as well - the inverse of our write, and just as dumb and expensive.
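
For illustration, the GET side could look something like the sketch below. loadMetadata and imagePath are hypothetical helpers on PictureController that mirror saveImage/saveMetadata, the java.nio.file.Files and MediaTypes imports are assumed to be in place, and the exact response-header calls may differ slightly from the real API:

private void getHandler(ServerRequest req, ServerResponse res) {
  var uuid = req.path().pathParameters().get("uuid");
  try {
    // Read the serialized metadata back and stream the raw file out again -
    // the inverse of the write path, and just as wasteful.
    var meta = pics.loadMetadata(uuid);
    res.headers().contentType(MediaTypes.create(meta.mediaType()));
    res.status(Status.OK_200).send(Files.readAllBytes(pics.imagePath(uuid)));
  } catch (IOException e) {
    responseServerError(e, res);
  }
}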

Application-specific metrics

Let’s enable some metrics so we can see what happens when we start hammering the app with requests. Following the metrics guide, we modify the application.yaml configuration to track long-running, in-flight and deferred requests. We don’t have to be YAML wizards; this could also be written in Java with a fluent API.

server:
  features:
    observe:
      observers:
        metrics:
          key-performance-indicators:
            extended: true
            long-running:
              threshold-ms: 28

Re-compile, run a few requests, and presto:

$ curl -H "Accept: application/json"  http://localhost:8080/observe/metrics | jq ".vendor"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   580  100   580    0     0   288k      0 --:--:-- --:--:-- --:--:--  566k
{
  "requests.count": 6,
  "requests.load": 6,
  "requests.longRunning": 0,
  "requests.deferred": 0.0,
  "requests.inFlight": 1.0
}

Hammer time

Let’s run wrk2 to do 1000 r/s fetching data. Remember, there’s no caching, and I’ve written no concurrent code to handle dispatching the I/O work. Both wrk2 and the native image are running on the same old ThinkPad, which is not an ideal setup, but we can at least sanity-check whether our code is slow as molasses.

$ wrk -c600 -d30s -t16 -R1000 -L http://localhost:8080/pictures/3265a234-8968-42e7-bec0-79c55e1c2abe
Running 30s test @ http://localhost:8080/pictures/3265a234-8968-42e7-bec0-79c55e1c2abe
16 threads and 600 connections
(...)
  Latency Distribution (HdrHistogram - Recorded Latency)
 50.000%   61.60ms
 75.000%   81.54ms
 90.000%  101.31ms
 99.000%  142.98ms
 99.900%  181.50ms
 99.990%  200.19ms
 99.999%  216.70ms
100.000%  216.70ms
(...)
30076 requests in 30.22s, 29.61GB read
Requests/sec:    995.08
Transfer/sec:      0.98GB

I cut out the long histogram tables and thread-calibration bits, but as we can see, we get quite good performance for a service written as incredibly wasteful, single-threaded, blocking Java, with a terrible test reading the same file every time for every request! Our throughput stays where we specified it, with a latency of under 150 ms for 99% of requests. I ran calls against the /observe/metrics endpoint during testing and saw at most 4 in-flight requests. Smooth.

Downsides

Helidon 4 SE doesn’t seem to have as many guides or as much documentation available as many other frameworks, which means that if you get stuck in a pickle or are confused about something (it happens to us all6), you will have to put on your debug goggles, read the Javadocs, maybe even skim the source code - just like writing plain Java! However, the GitHub project contains a FAQ and lets you ask questions directly to the maintainers. Very friendly.

Conclusion (tl;dr)

This was fun. Writing small Java methods and single-threaded-style code, avoiding annotations, just building plain Java objects and implementing interfaces, all without noticeably sacrificing performance, and still getting a web service out the other side of javac? This makes Java CRUD development look more like Ruby’s Sinatra, without sprinkling @RestController, @Factory, @SingletonCupOfJava, @Method("GET") or weird runtime reflection everywhere. I’m happy!

What more?

There is a virtual-thread-friendly database client that allows quite nice parametric embedding of SQL, a library to interface with the OpenAPI world, a few more observability libraries, and Kafka, JMS and similar sinks. A security layer exists (OIDC, Basic, Digest, signatures, ABAC, header assertions), plus a web client, tracing, CORS and gRPC. You name it!

All in all, this does have a lot of batteries available - if you want them. If you want to write your own batteries, all you need to do is implement an interface or two and presto, off you go.

  1. Greek for the barn swallow (H. rustica).

  2. Their body shapes allow for very efficient flight; the metabolic rate of swallows in flight is 49–72% lower than equivalent passerines of the same size. 

  3. Delicious tea, see en:wp:Baihao Yinzhen 

  4. If you remove GreetService.java and simple-greet, the default tests will fail - for expediency I’ve left them in, but either rewrite the tests or use -DskipTests=true when running mvn commands. 

  5. If you prefer an OptionalValue-wrapped result, use the first("uuid") method. 

  6. Especially me. 

Billy J. Beltran

Consultant at Redpill Linpro

Billy writes APIs, wrangles Apache Camels, massages data and evangelizes about using the right tool for the right problem (Clojure). M-x butterfly C-M-c user.
