After the announcement of Project Loom and the release of Java 21, many Microservices frameworks declared support for Virtual threads.
Helidon 4 is actually based on them! So, what is the difference between supporting Virtual threads and being based on them? We’ll discuss this in this post.
Threads in Java are just wrappers over OS threads, which makes them quite an expensive resource. You can’t create too many of them, and it takes significant time to start a thread and to switch between threads. So, we reuse threads in different thread pool executors.
The typical tasks serving microservices are numerous, run in parallel, and are short-lived, so they should not hold on to threads for long. Usually there are more requests than available threads. Running each request on its own thread was not optimal, so the industry moved to a reactive paradigm and asynchronous processing.
Most frameworks rely on Netty, which manages connections asynchronously and sends data over them, thus providing good performance. Developers often have to write asynchronous code to get that performance, and such code is hard to write, debug and maintain.
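Just to illustrate the point, here is a minimal sketch of that asynchronous style in plain Java (fetchUser and fetchOrders are made-up stand-ins for non-blocking calls, not any framework’s real API): no thread ever blocks, but the logic is chopped into callbacks, which is exactly what makes such code hard to follow.

import java.util.concurrent.CompletableFuture;

public class AsyncStyle {

    public static void main(String[] args) {
        // The business flow is expressed as a chain of callbacks, not as plain statements.
        fetchUser("42")
                .thenCompose(AsyncStyle::fetchOrders)
                .thenApply(orders -> "rendered: " + orders)
                .exceptionally(error -> "fallback: " + error.getMessage())
                .thenAccept(System.out::println)
                .join();
    }

    // Stand-ins for non-blocking IO calls that complete on some other thread.
    static CompletableFuture<String> fetchUser(String id) {
        return CompletableFuture.supplyAsync(() -> "user-" + id);
    }

    static CompletableFuture<String> fetchOrders(String user) {
        return CompletableFuture.supplyAsync(() -> "orders-of-" + user);
    }
}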
When Project Loom emerged, many microservices frameworks started experimenting with it by replacing the regular thread pool executors with virtual thread executors.
Or by creating a hybrid scheme that offloads some tasks to virtual threads based on certain markers (annotations, for example).
This approach demonstrated some performance gain, since there were more (virtual) threads available in the executor. But all the reactive/asynchronous overhead used for task management was still there.
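A minimal sketch of what that “support” usually boils down to, assuming nothing more than the standard JDK 21 executors API (the handler code and the surrounding reactive plumbing stay exactly as they were):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ExecutorSwap {

    public static void main(String[] args) {
        // Before: a classic bounded pool of platform threads, sized by hand.
        ExecutorService platformPool = Executors.newFixedThreadPool(200);

        // After: every submitted task gets its own cheap virtual thread.
        ExecutorService virtualPool = Executors.newVirtualThreadPerTaskExecutor();

        // The task-submission code stays the same; only the executor changes.
        virtualPool.submit(() -> System.out.println("handled on " + Thread.currentThread()));

        platformPool.shutdown();
        virtualPool.shutdown();
    }
}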
In Helidon we also experimented with this approach. From the early versions of Helidon 2 and 3 there was an option to enable a Loom-based thread pool executor on Netty. But the performance impact was not that big, and the heavy, hard-to-maintain reactive programming model was preserved.
So we decided to remove it and create a new web server from scratch, which simply creates a new virtual thread for each request and lets the JVM manage those virtual threads. Everything is rewritten in the blocking paradigm!
This means that the JVM schedules each virtual thread carrying a request onto an available carrier thread.
Whenever a blocking operation occurs, for example IO or a network call, the JVM scheduler unmounts the blocked virtual thread, finds another virtual thread and mounts it onto the freed carrier thread, so its resources are reused.
Once the blocking operation is over, the virtual thread is scheduled to continue execution on the first available carrier thread. And this is the major difference between “virtual threads support” and being “based on virtual threads”!
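To make the model concrete, here is a minimal sketch of the “virtual thread per request” idea using only the JDK (this is not Helidon’s actual code): every connection is handled with plain blocking calls on its own virtual thread, and the JVM mounts and unmounts those threads on carrier threads whenever they block.

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BlockingServerSketch {

    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(8080);
             ExecutorService perRequest = Executors.newVirtualThreadPerTaskExecutor()) {
            while (true) {
                Socket socket = server.accept();          // blocking accept
                perRequest.submit(() -> handle(socket));  // one new virtual thread per request
            }
        }
    }

    static void handle(Socket socket) {
        try (socket) {
            socket.getInputStream().read();               // blocking read: the virtual thread unmounts here
            byte[] response = "HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nOK"
                    .getBytes(StandardCharsets.US_ASCII);
            socket.getOutputStream().write(response);     // blocking write
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}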
As a “side effect”, MicroProfile applications created with Helidon MP should run significantly faster, since MicroProfile APIs are mostly blocking.
Microservices… yes, nowadays they are everywhere! In my previous post I shared my thoughts that sometimes a plain monolith can do the required task without being cut into microservices. But it is wonderful when microservices are the most suitable solution! This is definitely lovely!
First, we get a lot of benefits from them; second, we have great technology support to implement them – there are plenty of frameworks and platforms helping us develop really good microservices.
Most of the conference talks I have seen recently are basically “Hello world” talks. The main message is “Hey, look how easy it is to create a microservice!”, “Just a few annotations, and you are up and running!”. How great is that!
But when it comes to real life, there is usually at least one sprint between the start of the project and the first really useful microservice. More often three sprints. You’ll say – “That is ok! We are serious people doing serious enterprise!”. I still ask myself – “What can be so huge in one microservice that it takes several weeks to create?”. “Why then is this service called MICRO?”
I’m not sure there is an official definition of a microservice, but the well-established idea is that it is a piece of software designed to perform only one single function, but in the best possible way. It should communicate with the rest of the world over the most lightweight protocol possible, like REST. So, if we have login functionality, a microservice should be able to log a user in, in the fastest, easiest, most reliable and secure way. Only login and nothing else. For logout there should be another service. The size and complexity of such a service are not in the scope of “micro”; they can be of any scale. Technically, there should be only one function exposed, but what happens behind the curtain is just implementation detail.
This is why most of such services become really complicated, often even overengineered. From my experience, at the end of the day, each microservice in an enterprise web app becomes a usual three-tier application. It has its “UI” simplified to REST endpoints, it has its business layer with several service interfaces and their implementations, and an infrastructure layer to talk to the DB and other services. The interlayer communication is done via value objects. As a result, a usual call to a function of this microservice ends up copying data several times from one value object to another (in one of such microservices I’ve seen up to 14 of such hops). Why would you do this? The typical answer is: if some part of the service changes, we will change only that part of it. Fair enough. But if in return I ask how many times in your life you have ever changed one of the tiers without affecting the others, the answer is usually – never.
Once again, nothing wrong here. The complexity of the implementation of a microservice is orthogonal to the interface it provides.
Another question: how many times have you ever received a fully functional “frozen” interface definition? A spec that never changes? A guaranteed-never-to-change document? In my case – never!
The changes keep coming at a constant or even increasing pace, even within one API version. You may say: you are doing it wrong! You should plan more carefully, you should rethink your spec versioning policy, etc. And you would be completely right!
But… there is something called real life! There are business needs that require quick reaction, quick change, quick time to market!
It looks like this constant change is just inevitable. In my 13 years in software development there have been practically no projects with really smooth, stable development. At least every sprint there were some “tiny” spec changes causing the whole codebase to suddenly go red. I’m sure I’m not the only “lucky one” experiencing this.
How should we solve this kind of problem in the microservices world? It is quite time-consuming to create beautiful, complex, nicely architected, (at least) three-tier microservices without a frozen spec. As I already said, those spec changes usually require changes in all of the layers. This is hard! Most of us thought that’s just the way it is. Software development is hard.
But this situation kept bothering me. Maybe there is a way to react to these changes faster? Maybe a microservice should be micro in all aspects? I mean really tiny? With as few internal abstractions as possible? Really tightly coupled inside? While I’m writing this, I have a feeling I’m breaking the law. It is like I’m cancelling everything I learned in the CS courses at university. Why don’t we throw away all of the… tiers? Make it a really tiny one-tier micro monolith? A microlith? So that the object, or even the raw data we receive from the infrastructure, is transformed into a JSON object in one single class? With all the business logic also included in this very same class? You are absolutely entitled to say – “are you nuts?” What if the infrastructure changes? What if the REST API changes? What if the business logic changes? You have to rewrite everything!
And then I suddenly say – “Yes! We will rewrite this service from scratch!”. Luckily, a lot of code in the service can simply be generated. So, yes. We will rewrite it from scratch! We copy/paste. We commit all the possible anti-patterns. Only to make it work according to the spec and pass the tests! “But isn’t it a lot of effort?”. My answer is – “Pretty much the same as changing all of the classes in all of the three (or more) tiers!”. And people usually say – “Hmm…”.
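Just to make the idea tangible, here is what such a “microlith” might look like – a purely hypothetical sketch (JAX-RS plus plain JDBC; the class, table and endpoint names are all made up): the endpoint, the business rule and the persistence all live in one class, and the data goes straight from the result set into the JSON payload.

import javax.inject.Inject;
import javax.sql.DataSource;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

@Path("/login")
public class LoginMicrolith {

    @Inject
    DataSource dataSource; // container-provided connection pool

    @GET
    @Path("/{user}")
    @Produces(MediaType.APPLICATION_JSON)
    public String login(@PathParam("user") String user) throws Exception {
        try (Connection c = dataSource.getConnection();
             PreparedStatement ps = c.prepareStatement(
                     "select status from accounts where login = ?")) {
            ps.setString(1, user);
            try (ResultSet rs = ps.executeQuery()) {
                boolean active = rs.next() && "ACTIVE".equals(rs.getString("status"));
                // Straight from the result set to the JSON payload, no intermediate value objects.
                return "{\"user\":\"" + user + "\",\"loggedIn\":" + active + "}";
            }
        }
    }
}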
This may sound really strange, but over the last several projects, writing some really ugly, small, one-tier microservices with almost no architecture and no refactoring, and then rewriting them from scratch, has saved me a lot of effort. I have even coined a name for this kind of software – “Throwawayware!”. Try to say it quickly!
The next question that usually comes is – “Do you put this ugly thing in production??”. My answer is – “Ok, you got me! No, this does not go to production! Well, maybe sometimes…”.
What actually goes there?
Putting this ugly little thing in production would be just catastrophic. That’s why I’ve tried a mixed approach.
First of all, we usually develop the tests to follow the contract. Yep, although I’m not a big fan of TDD, I think this is a good place to use it. Adjusting the tests to follow the spec and the contract is always the first priority for us!
Then we usually have 4 to 10 iterations on each microservice in this ridiculous “throw away” way. The funny thing is that up to 50% of the services do not even survive those iterations! At some point a service may become obsolete even before production. This means that we usually save a lot of effort by not building something complex that will never be used (“Bingo!”).
When a service gets through those several iterations, we consider it a “survivor”. By this time, the spec has usually stabilized as well. And we do the same for each API version.
Just like the garbage collector in the JVM… Haha! Yep, like the GC! Lovely!
For the survivors we usually do some really thorough refactoring, or rewrite them completely. We try to make their code really readable, extensible and maintainable. Quite often, from the usage of those “throw away” implementations, we discover previously unforeseen requirements, such as the load the service should handle, security, or fault tolerance. The result may end up being a three-tier architecture, but with really nicely designed layers.
It looks like this “two-stage approach” works really well! The funny thing is that it fits perfectly into our SCRUM cycle. Maybe I’m wrong, but this approach has saved us a lot of effort and helped us establish good quality implementations in a shorter time. In three projects so far… We’ll see how it goes!
Right after Joker 2018 I was approached by one of the speakers, Ivan Ugliansky, with a very interesting proposal – “Hey, you know I’m from Novosibirsk; we have quite a big conference named CodeFest, usually happening at the end of March. Would you like to come to Siberia?”. I answered – “It will be an honor for me!”.
It took me about 12 hours to reach Novosibirsk from Sofia: first 3h30m from Sofia to Moscow, then my shortest night this year – we took off at 21:30 MSK and landed at 5:30 NSK time. The time passed quickly; the Aeroflot flight attendants were so beautiful that most of my attention was focused on them.
When we landed in Novosibirsk, I just couldn’t stop staring at my GPS position. Wow, I’m almost in the middle of Eurasia!
The first day was quite intense! My great friend Ivan gave me a wonderful excursion to a place named “Akademgorod”. In English this roughly means “academics’ village”. It was created from scratch back in the 60s to accommodate Soviet workers of science in a very peaceful, close-to-nature environment. As a result it has transformed into a huge scientific cluster. A lot of institutes and research laboratories are now located there. There is also a huge university.
Students and specialists from all over Siberia and even all of Russia come to this place to study and work on cool projects!
No surprise, there is a big IT presence here. A lot of Russian (and not only Russian) companies have their R&D here, and no surprise the CodeFest conference gathers more than 3k attendees.
I had the privilege to be invited to the offices of JetBrains and ExcelsiorJET. I had some truly wonderful time there! We had some pizza and a great chat! Huge thanks to Ivan for being such a great host.
The end of the day was dedicated to a special event – the local JUG meetup. Together with Paul Finkelstein I had the honor to give my talk about Java and GPUs in front of the local community. The event took place on the 22nd floor of the office of the famous Russian company 2GIS. The location was really awesome!
More than 150 people came to the event! It was fully packed, with people even standing!
To be honest, I was a bit scared to be in front of these people. Siberian developers are famous for being the most hardcore developers in the world, mostly the strongest compiler writers. Wasn’t my talk going to be too soft for them? At the end of it I was happy to find out that it was really ok! I had a lot of questions afterwards!
It was a sincere pleasure to talk to this community!
The next day was the first day of CodeFest. Luckily I had my talk scheduled for day two. That was really good news for me, since I still had some jet lag.
So.. the conference is huge! It has 7 tracks and it is not focused only on Java.
Siberia is here!!
Actually, the variety of technologies and CS fields was a big plus for me. I learned a lot of new stuff. I think I mostly stuck to the Data Science track. I’ve been playing a lot with TensorFlow lately, so it was a natural choice for me.
I heard some nice project management and team lead oriented talks as well.
I was even interviewed on the conference radio. I had a lovely chat with Vladimir Plizga from CFT.
The conf is huge!
Day one ended in the local pub with a concert by… the speakers! Singing about… coding!! Awesome experience!
My talk was scheduled to be the first on day two… I’ll be honest, I had expected no more than three people to show up. Three listeners would have been a victory for me!
Surprisingly… it was almost packed for my MicroProfile.io talk! (Based on the MicroProfile.io tutorial created together with Ivan St. Ivanov.)
Everything went smoothly! All of my demos worked! What a relief!
At CodeFest it’s usually not just giving your talk and you are free. After the talk there is an expert zone, where the speaker can have some live discussions and answer audience questions. I spent almost an hour there! I’m really happy that MicroProfile.io enjoys so much interest! I hope it was useful for the audience!
I decided to spend the second part of the day exploring the city.
The famous Transsib railway crosses the big Siberian river Ob here.
The old and new come together!
The cultural life is very intense!
The famous Opera and Ballet theatre is really huge!
Siberia and Novosibirsk are definitely wonderful places to visit! If you have the chance to go there, you definitely should!
I’ve met really wonderful people and had some amazing time!
The next day I was at the airport having something I had almost missed – Siberian pelmeni!
Ok, I’m going home! Siberia, you are endless and beautiful!
Yet another 12000 km up in the air! (SOF-SVO-OVB-SVO-SOF)
Huge thanks to Ivan Ugliansky for making this possible.
Yep, Quarkus has definitely exploded just a few weeks ago. As one of my friends said, it was marketing lvl8080! Just listen to the slogan – “Supersonic Subatomic Java!”. “A Kubernetes Native Java stack tailored for GraalVM & OpenJDK HotSpot, crafted from the best of breed Java libraries and standards”. Damn, so many buzzwords in one place! Twitter literally exploded as well. Literally everybody tweeted about Quarkus.
I said – nah! Yet another microservices framework. There are many available now; the choice is really big. So, I almost ignored it. I’m honest here. Still, last Sunday I downloaded it and played with some examples. My first experience was a failed build. I submitted an issue on GitHub, and it turned out to be a local Maven issue; the fix was just to upgrade it. But as a result this issue opened a discussion, and that very same Sunday a Maven version enforcer was added. The power of OSS! There are no weekends in OSS! Kudos to the team!
The tweet flow nevertheless kept appearing in my feed. This somehow forced me to open the Quarkus website and explore more. There was one logo that grabbed my attention:
I said – mmm, MicroProfile. Lovin’ it! So it looks like I may try to pack a microservice (which is not Spring) into a GraalVM native image.
Ok, I need to try! This thought came to my mind about 3 a.m. this Thursday. Well, technically it was Friday already…
Luckily enough, about a month ago we had deployed a CSV aggregation service. Yet another stage of some fancy bloody enterprise infinite process to generate some invoices. We had made it the most modern way, using only MicroProfile specs, on a WildFly server, running in some OpenShift pods (yes, we have a lot of RedHat licences). Mmm, a lot of cool buzzwords as well!
The service itself was completely stateless. With three REST clients we gather some data from three other services, mash it up into a CSV and serve it as plain text on a REST request! Technically it is insanely stupid! … making it the perfect candidate to be tested with Quarkus. By the way, the whole pipeline is still under development, so we’re not going to ruin anyone’s bill or invoice.
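For the curious, the shape of the service is roughly the following – a simplified, hypothetical sketch, not the actual corporate code (names like CustomerClient and the paths are made up): one MicroProfile REST client interface per upstream service and a JAX-RS resource that glues the answers into a CSV line.

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import org.eclipse.microprofile.rest.client.inject.RegisterRestClient;
import org.eclipse.microprofile.rest.client.inject.RestClient;

// One of the three upstream clients; the other two look the same.
@Path("/customers")
@RegisterRestClient
interface CustomerClient {

    @GET
    @Path("/{contractId}")
    String customerFor(@PathParam("contractId") String contractId);
}

@ApplicationScoped
@Path("/api/v1/invoicingRXT")
public class InvoicingResource {

    @Inject
    @RestClient
    CustomerClient customers;

    @GET
    @Path("/{contractId}")
    @Produces(MediaType.TEXT_PLAIN)
    public String csv(@PathParam("contractId") String contractId) {
        // Mash the upstream data into a CSV line and serve it as plain text.
        return String.join(",", contractId, customers.customerFor(contractId));
    }
}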
Still, the service has to work under some pressure, so OpenShift scales it by adding pods. Since the service is stateless, we are totally ok with that! We also don’t have to care about the REST providers on the other side. It’s their responsibility to scale 🙂
So, now comes the fun part!!! I started a new project by copying one of the Quarkus quickstarters. Yes, I could generate a pom from an archetype… but I’m lazy.
I then just copy-pasted the code from my old project… and it shined red. Especially the MicroProfile annotations… ah. What’s wrong? Three minutes of googling told me that for the MicroProfile stuff I need to add some SmallRye Quarkus extensions. Ok, copy-pasted into the pom.xml… Yay, the red is gone.
mvn clean package
… and BUILD SUCCESSFUL!
Ok, you made me sweat! Now let us:
mvn quarkus:dev
.. and .. http://localhost:8080/api/v1/invoicingRXT/1253211 (some contract)
AND I HAVE THE RESULT! That’s a WOW!!! This was one of the fastest migrations I’ve ever done! I LOVE STANDARDS! Long live MicroProfile!!!
Ok, I have earned my coffee. It took me… 17 minutes. Yes, this was mostly copy-paste, but look how cool that is!
… But that’s not over! The Quarkus documentation says that if I add -Pnative to the Maven build command, it’ll produce a native image! Let’s do it! Luckily I had already set up the GraalVM installation.
mvn clean package -Pnative
Waiting… 1 minute… 2 minutes… BUILD SUCCESSFUL! Lovely! Yes, it takes some time to build an image! I should admit that the business logic code was written in quite a straightforward way, without any fancy constructions, just some POJOs. There were no GraalVM-specific issues.
Now let us just run the executable:
./invoicingRTXApp-runner
and the service is up and running in about a second! Although the console says the startup time is 0,212 sec, technically from issuing the run command to a running service it is about 2 seconds. THAT DOESN’T COMPARE to the ~49-second startup time of the WildFly server…
Ok, now let’s go to http://localhost:8080/api/v1/invoicingRXT/1253211 and what do I see:
“”,””,””,””,””
Something went wrong!!! Why does the example from the quickstarters work? Hmm… It looks like I’m missing one annotation on some classes – @RegisterForReflection. Reflection works a little differently in GraalVM. Building it once again. Waiting another two minutes. Oh, how can two minutes last so long…
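While the build runs, for reference, the fix is as small as it sounds – a hedged sketch (InvoiceLine is an illustrative name, not my real class) of the annotation that tells the native image build to keep reflective access to the value objects used in the serialization:

import io.quarkus.runtime.annotations.RegisterForReflection;

// Without this, GraalVM's closed-world analysis strips the reflective metadata,
// and the serialized fields come out empty – exactly the "","","" output above.
@RegisterForReflection
public class InvoiceLine {
    public String contractId;
    public String customer;
    public String amount;
}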
Good! BUILD SUCCESSFUL! Now let’s go to http://localhost:8080/api/v1/invoicingRXT/1253211 and what do I see:
data1,data2,data3
IT WORKED!!! (Now imagine the famous Dexter from Dexter’s lab cartoon shouting this loud). That is so damn cool!!!
Nice! It’s been 42 minutes since the beginning of the experiment (coffee break included)!
Ok, now let us go back to the OpenShift setup. It would be nice to see if it’s ok in our test environments under some pressure. After making some YAML permutations I rerouted 10% of the traffic to go to the new native image pod. After 4 hours of watching it work I see no errors! Only sometimes I receive some messages from surprised testers like “Something’s wrong, some pods start in 3 seconds only…”. And I say “Haha! This is magic.”
What can I say – Quarkus is definitely a good thing to play with. It performed extremely well in (almost) production code. Yes, I know the example is really stupid, but that’s a real-world demand. The migration took me about 40 minutes of just copy-paste. I love standards!
This coming Monday I’ll make a full native setup, and we’ll see how it goes! Maybe even in production soon…
Disclaimer: Sorry, I can’t share the code. It’s corporate..