After the announcement of Project Loom and the release of Java 21, many Microservices frameworks declared support for Virtual threads.
Helidon 4 is actually based on them! So, what is the difference between supporting Virtual threads and being based on them? That is what this post is about.
Threads in Java are just wrappers over OS threads, which makes them quite an expensive resource: you cannot create too many of them, starting a thread takes significant time, and switching between them is costly. That is why threads are reused in Thread Pool Executors.
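To illustrate the reuse described above, here is a minimal sketch (the class name and counts are mine, not from the post): a fixed pool of a few expensive OS threads handles many tasks, so the tasks share the same small set of thread names.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolReuse {
    public static void main(String[] args) {
        Set<String> workers = ConcurrentHashMap.newKeySet();
        // 4 OS threads are created once and reused for all 100 tasks.
        try (ExecutorService pool = Executors.newFixedThreadPool(4)) {
            for (int i = 0; i < 100; i++) {
                pool.submit(() -> workers.add(Thread.currentThread().getName()));
            }
        } // close() waits for all submitted tasks to finish
        System.out.println("distinct threads for 100 tasks: " + workers.size());
    }
}
```

Running this prints a count of at most 4: a hundred tasks, but only four expensive OS threads ever created.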
The typical tasks that serve Microservices requests are numerous, run in parallel, and finish quickly, so they do not hold threads for long. Usually, there are more concurrent requests than available threads. Running each request on its own dedicated thread was not optimal, so the industry moved to the reactive paradigm and asynchronous processing.
Most frameworks rely on Netty, which manages connections asynchronously and sends data over them, thus providing good performance. To benefit from it, developers often have to write asynchronous code themselves, and such code is hard to write, debug, and maintain.
When Project Loom emerged, many Microservices frameworks started experimenting with it by replacing the regular Thread Pool Executors with Virtual Thread Pool Executors, or by creating a hybrid scheme that offloads some tasks to Virtual threads based on markers (annotations, for example).
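The executor swap those frameworks experimented with can be sketched in a few lines (a minimal illustration, not any framework's actual code; the class name and task counts are mine):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class ExecutorSwap {
    public static void main(String[] args) {
        AtomicInteger completed = new AtomicInteger();

        // Before: a classic pool of platform (OS) threads, e.g.
        //   Executors.newFixedThreadPool(200)
        // After: one cheap virtual thread per task (Java 21+).
        try (ExecutorService pool = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                pool.submit(() -> {
                    try {
                        Thread.sleep(10); // blocking call releases the carrier thread
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    completed.incrementAndGet();
                });
            }
        } // close() waits for all tasks
        System.out.println(completed.get());
    }
}
```

The only change is the factory method, which is exactly why this route looked attractive: 10,000 blocking tasks complete almost concurrently with no pool sizing at all.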
This approach demonstrated some performance gain, since many more (virtual) threads were available in the executor. But all the reactive/asynchronous overhead used for task management was still there.
In Helidon, we also experimented with this approach: early versions of Helidon 2 and 3 had an option to enable a Loom thread pool executor on Netty. But the performance impact was not that big, and the heavy, hard-to-maintain reactive programming model remained.
So we decided to remove it and create a new Web Server from scratch, one that simply creates a new virtual thread for each request and lets the JVM manage those virtual threads. Everything is rewritten in the blocking paradigm!
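The thread-per-request idea can be sketched with plain Java (this is an illustration of the concept, not Helidon's actual server code; class and method names are mine): an accept loop that spawns one virtual thread per connection and handles it with straightforward blocking I/O.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class ThreadPerRequest {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) { // ephemeral port
            // Accept loop: one virtual thread per connection, plain blocking I/O.
            Thread.ofVirtual().start(() -> {
                while (true) {
                    try {
                        Socket client = server.accept();            // blocks
                        Thread.ofVirtual().start(() -> handle(client));
                    } catch (Exception e) {
                        return; // server socket closed, stop accepting
                    }
                }
            });

            // A quick self-test client.
            try (Socket s = new Socket("localhost", server.getLocalPort());
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(s.getInputStream(), StandardCharsets.UTF_8))) {
                System.out.println(in.readLine());
            }
        }
    }

    static void handle(Socket client) {
        try (client; OutputStream out = client.getOutputStream()) {
            out.write("hello from a virtual thread\n".getBytes(StandardCharsets.UTF_8));
        } catch (Exception ignored) {
        }
    }
}
```

Note there are no callbacks and no futures: the handler just reads and writes, and the JVM takes care of scheduling.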
In this model, the JVM schedules each virtual thread carrying a request onto an available carrier thread.
Whenever a blocking operation occurs, for example an I/O or a network call, the JVM scheduler unmounts the blocked virtual thread, finds another runnable virtual thread, and mounts it on the freed carrier thread, so the carrier's resources are reused.
Once the blocking operation completes, the virtual thread is scheduled to continue execution on the first available carrier thread. And this is the major difference between "Virtual threads support" and being "based on Virtual threads"!
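The mounting and unmounting described above is easy to observe (a small demo of my own, not from the post): start far more virtual threads than there are carrier threads (carriers default to the CPU count), have each block in a sleep, and measure the wall-clock time. If each sleep pinned a carrier, thousands of 200 ms sleeps would take minutes; with unmounting, they all overlap.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;

public class UnmountDemo {
    public static void main(String[] args) throws InterruptedException {
        Instant start = Instant.now();
        List<Thread> threads = new ArrayList<>();
        // Far more virtual threads than carrier threads.
        for (int i = 0; i < 5_000; i++) {
            threads.add(Thread.ofVirtual().start(() -> {
                try {
                    Thread.sleep(200); // blocks: the virtual thread unmounts,
                                       // the carrier picks up another one
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }));
        }
        for (Thread t : threads) t.join();
        long millis = Duration.between(start, Instant.now()).toMillis();
        System.out.println("done in ~" + millis + " ms, concurrent = " + (millis < 5_000));
    }
}
```

On a typical machine this finishes in well under a second, even though 5,000 × 200 ms of "blocking" happened.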
As a "side effect", MicroProfile applications created with Helidon MP should run significantly faster, since MicroProfile APIs are mostly blocking.
To summarise: with Helidon 4, you can enjoy creating high-performance applications in the blocking paradigm, writing code that is easy to maintain and debug! Check http://helidon.io, our blog https://medium.com/helidon and https://github.com/helidon-io/helidon/ for more details!