Thoughts on microservices: Throwawayware.

Microservices.. yes, nowadays they are everywhere! In my previous post I shared my thoughts that sometimes a plain monolith can do the required task without being cut into microservices. But it is wonderful when microservices are the most suitable solution! This is definitely lovely!

First, they provide us with a lot of benefits; second, we've got great technology support to implement them – there are a lot of frameworks and platforms helping us develop some really good microservices.

Most of the conference talks I have recently seen are basically “Hello world” talks. The main message is “Hey, look how easy it is to create a microservice!”, “Just a few annotations, and you are up and running!”. How great is that!

But when it comes to real life, there is usually at least one sprint between the start of the project and the first really useful microservice. More often three sprints. You’ll say – “That is ok! We are serious people doing serious enterprise!”. I still ask myself – “What can be so huge in one microservice that it takes several weeks to create?”. “Why is this service called MICRO then?”

I’m not sure if there is an official definition of a microservice, but a well-established idea is that it is a piece of software designed to perform one single function, but in the best possible way. It should communicate with the rest of the world over the most lightweight protocol possible, like REST. So, if we have login functionality, a microservice should be able to log a user in in the fastest, easiest, most reliable and secure way. Only login and nothing else. For logout there should be another service. The size and complexity of such a service are not what “micro” is about – they can be of any scale. Technically, there should be only one function exposed, and whatever happens behind the curtains is just an implementation detail.

This is why most of such services become really complicated, often even overengineered. From my experience, at the end of the day each microservice in an enterprise web app becomes a usual three-tier application. It has its “UI” simplified to REST endpoints, it has its business layer with several service interfaces and their implementations, and an infrastructure layer to talk to the DB and other services. The interlayer communication is done via value objects. As a result, a usual call to a function of this microservice ends up copying data from one value object to another several times (in one of those microservices I’ve counted up to 14 such hops). Why would you do this? The typical answer is: if some part of the service changes, we will only have to change that part of it. Fair enough. But if in return I ask how many times in your life you have ever changed one of the tiers without affecting the others, the answer is usually – never.
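To picture those hops, here is a hedged sketch in JAX-RS/CDI terms of how a single call typically crosses the tiers, copying the same data into a new value object at each boundary. Every name in it is invented:

import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// "UI" tier: a REST endpoint with its own DTO
@Path("/users")
public class UserEndpoint {

    @Inject
    UserService userService;

    @GET
    @Path("/{id}")
    @Produces(MediaType.APPLICATION_JSON)
    public UserDto get(@PathParam("id") long id) {
        UserResult result = userService.findUser(id);     // business-tier value object
        return new UserDto(result.name, result.email);    // hop: copy into the REST DTO
    }
}

// Business tier: its own interface, implementation and value object
interface UserService {
    UserResult findUser(long id);
}

class UserServiceImpl implements UserService {

    @Inject
    UserRepository repository;

    @Override
    public UserResult findUser(long id) {
        UserEntity entity = repository.load(id);           // infrastructure-tier value object
        return new UserResult(entity.name, entity.email);  // hop: copy into the business VO
    }
}

// Infrastructure tier: talks to the DB and returns yet another object
interface UserRepository {
    UserEntity load(long id);
}

// Three value objects carrying exactly the same data
class UserEntity { String name; String email; }
class UserResult { String name; String email; UserResult(String n, String e) { name = n; email = e; } }
class UserDto    { public String name; public String email; UserDto(String n, String e) { name = n; email = e; } }

Two copies already, with just three tiers and two fields – and every spec change to UserDto tends to ripple through UserResult and UserEntity as well.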

Once again, nothing wrong here. The complexity of the implementation of a microservice is orthogonal to the interface it provides.

Another question: how many times have you ever received a fully functional “frozen” interface definition? A spec that never changes? A document guaranteed never to change? In my case – never!

The changes keep coming at a constant or even increasing speed. Even within one API version. You may say: you are doing it wrong! You should plan more carefully, you should rethink your spec versioning policy, etc. And you will be completely right!

But.. there is something called real life! There are business needs that require quick reaction, quick change, quick time to market!

Looks like this constant change is just inevitable. In my 13 years in software development there have been practically no projects with really smooth, stable development. At least every sprint there were some “tiny” spec changes causing the whole codebase to suddenly go red. I’m sure I’m not the only “lucky” one having this.

How should we solve this kind of problem in the microservices world? It’s quite time consuming to create beautiful, complex, nicely architected, (at least) three-tier microservices without a frozen spec. As I already said, those spec changes usually require changes in all of the layers. This is hard! Most of us thought that’s just the way it is. Software development is hard.

But this situation kept bothering me. Maybe there is a way to speed up the reaction to change? Maybe a microservice should be micro in all aspects? I mean really tiny? With as few internal abstractions as possible? Really tightly coupled inside? While I’m writing this, I have a feeling I’m breaking the law. It is like I’m cancelling everything I learned in the CS courses at university. Why don’t we throw away all of the … tiers? Make it a really tiny one-tier micro monolith? A microlith? So that the object, or even the raw line of data we receive from the infrastructure, is transformed into the JSON object in one single class? With all the business logic also included in this very same class? You are absolutely entitled to say – “are you nuts?” What if the infrastructure changes? What if the REST API changes? What if the business logic changes? You have to rewrite everything!

And then I suddenly say – “Yes! We will rewrite this service from scratch!”. Luckily, a lot of the code in the service can simply be generated. So, yes. We will rewrite it from scratch! We copy/paste. We commit all the possible anti-patterns. Only to make it work according to the spec and pass the tests! “But isn’t that a lot of effort?”. My answer is – “Pretty much the same as changing all of the classes in all of the three (or more) tiers!”. And people usually say – “Hmm…”.
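To make it concrete, the kind of one-class “microlith” I’m talking about is roughly this. A hedged sketch only: every name in it, including the datasource, is invented:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

import javax.annotation.Resource;
import javax.json.Json;
import javax.json.JsonObject;
import javax.sql.DataSource;
import javax.ws.rs.GET;
import javax.ws.rs.NotFoundException;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// One class, no tiers: the row coming from the infrastructure is turned
// into the JSON response right here, business rule included.
@Path("/users")
public class UserMicrolith {

    @Resource(lookup = "java:jboss/datasources/UsersDS")   // hypothetical datasource name
    DataSource dataSource;

    @GET
    @Path("/{id}")
    @Produces(MediaType.APPLICATION_JSON)
    public JsonObject get(@PathParam("id") long id) throws Exception {
        try (Connection connection = dataSource.getConnection();
             PreparedStatement statement = connection.prepareStatement(
                     "select name, email, active from users where id = ?")) {
            statement.setLong(1, id);
            try (ResultSet row = statement.executeQuery()) {
                if (!row.next()) {
                    throw new NotFoundException();
                }
                // "business logic" and JSON mapping, all in the same place
                boolean visible = row.getBoolean("active");
                return Json.createObjectBuilder()
                        .add("name", row.getString("name"))
                        .add("email", visible ? row.getString("email") : "hidden")
                        .build();
            }
        }
    }
}

Ugly? Sure. But when the spec changes, there is exactly one class to throw away.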

This may sound really strange, but over the last several projects, writing some really ugly, small, one-tier microservices with almost no architecture and no refactoring, then rewriting them from scratch, has saved me a lot of effort. I have even coined a name for this kind of software – “Throwawayware!”. Try to say it quick!

The next question that usually comes is – “Do you put this ugly thing in production??”.

My answer is – “Ok, you got me! No, this does not go to production! Well, maybe sometimes..”. So what actually goes there?

Putting this ugly little thing in production would be just catastrophic. That’s why I’ve tried a mixed approach.

First of all, we usually develop the tests to follow the contract. Yep, although I’m not a big fan of TDD, I think this is a good place to use it. Adjusting the tests to follow the spec and the contract is always our first priority!
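As an illustration, such a contract test can be as dumb as the sketch below, assuming a plain JAX-RS client and JUnit; the path, the expected header row and the system property are invented for the example:

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

import org.junit.Test;

public class InvoiceCsvContractTest {

    // where the test environment runs the service; invented property name
    private static final String BASE_URL =
            System.getProperty("service.url", "http://localhost:8080");

    @Test
    public void followsTheAgreedContract() {
        Client client = ClientBuilder.newClient();
        try {
            Response response = client.target(BASE_URL)
                    .path("/api/v1/invoices/42")            // path taken from the (imaginary) spec
                    .request(MediaType.TEXT_PLAIN)
                    .get();

            // the contract: HTTP 200, plain text, CSV starting with the agreed header row
            assertEquals(200, response.getStatus());
            String body = response.readEntity(String.class);
            assertTrue(body.startsWith("invoiceId,customer,total"));
        } finally {
            client.close();
        }
    }
}

As long as this stays green, whatever mess sits behind the endpoint is allowed to be thrown away and rewritten.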

Then we usually have 4 to 10 iterations on each microservice in this ridiculous “throw away” way. The funny thing is that up to 50% of the services do not even survive those iterations! At some point a service may become obsolete even before production. This means we usually save a lot of effort by not building something complex for something that will never be used (“Bingo!”).

When a service gets through those several iterations, we consider it a “survivor”. By that time, the spec has usually stabilized as well. And we do the same for each API version.

Just like the garbage collector in the JVM.. Haha! Yep, like the GC! Lovely!

For the survivors we usually do some really thorough refactoring, or rewrite them completely. We try to make their code really readable, extensible and maintainable. Quite often, from the usage of those “throw away” implementations, we discover previously unanticipated requirements, like, for example, the load the service should handle, or security, or downtime tolerance, etc. This may end up in a three-tier architecture, but with really nicely designed layers.

Looks like this “two stage approach” works really well! The funny thing is that it fits perfectly into our SCRUM cycle. Maybe I’m wrong, but this approach has saved us a lot of effort and helped us establish good quality implementations in a shorter time. In three projects so far.. We’ll see how it goes!

Any criticism is welcome though…

Quarkus: a blast from the future

Yep, Quarkus definitely exploded just a few weeks ago. As one of my friends said, it was marketing lvl8080! Just listen to the slogan – “Supersonic Subatomic Java!”. “A Kubernetes Native Java stack tailored for GraalVM & OpenJDK HotSpot, crafted from the best of breed Java libraries and standards”. Damn, so many buzzwords in one place! Twitter literally exploded as well. Everybody tweeted about Quarkus.

I said – nah! Yet another microservices framework. There are many available now, the choice is really big. So, I almost ignored it. I’m honest here. Still, last Sunday I downloaded it and played with some examples. My first experience was a failed build. I submitted an issue on GitHub, and it turned out to be a problem with my local Maven; the fix was just to upgrade it. But as a result this issue opened a discussion, and that very same Sunday a Maven version enforcer was added. The power of OSS! There are no weekends in OSS! Kudos to the team!

The tweet flow nevertheless kept appearing in my feed. This somehow forced me to open the Quarkus website and explore more. There was one logo that grabbed my attention:

I said – mmm, MicroProfile. Lovin’ it! So it looks like I may try to pack a microservice (which is not Spring) into a GraalVM native image.

Ok, I need to try! This thought came to my mind about 3 a.m. this Thursday. Well, technically it was Friday already…

Luckily enough, about a month ago we had deployed a CSV aggregation service. Yet another stage of some fancy bloody enterprise infinite process to generate some invoices. We had built it in the most modern way, using only MicroProfile specs, on a WildFly server, running in some OpenShift pods (yes, we have a lot of Red Hat licences). Mmm, a lot of cool buzzwords as well!

The service itself was completely stateless. With three REST clients we gather some data from three other services, mash it up into some CSV and serve it as plain text on a REST request! Technically it is insanely stupid! … making it the perfect candidate to be tested with Quarkus. By the way, the whole pipeline is still under development, so we’re not going to ruin someone’s bill or invoice.
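Just to give you the picture (the real code stays corporate, so every name below is invented), the shape of the service is roughly this: three MicroProfile REST client interfaces and one stateless JAX-RS resource mashing the results into CSV:

import java.util.List;

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

import org.eclipse.microprofile.rest.client.inject.RegisterRestClient;
import org.eclipse.microprofile.rest.client.inject.RestClient;

// Three MicroProfile REST clients; their base URLs come from MicroProfile Config
@RegisterRestClient
@Path("/customers")
interface CustomerClient {
    @GET @Path("/{id}") @Produces(MediaType.APPLICATION_JSON)
    Customer byId(@PathParam("id") String id);
}

@RegisterRestClient
@Path("/orders")
interface OrderClient {
    @GET @Path("/byCustomer/{id}") @Produces(MediaType.APPLICATION_JSON)
    List<Order> byCustomer(@PathParam("id") String id);
}

@RegisterRestClient
@Path("/rates")
interface RateClient {
    @GET @Path("/{currency}") @Produces(MediaType.APPLICATION_JSON)
    Rate current(@PathParam("currency") String currency);
}

// The stateless resource: gather, mash up, serve as plain text CSV
@ApplicationScoped
@Path("/api/v1/invoicingRXT")
public class InvoicingResource {

    @Inject @RestClient CustomerClient customers;
    @Inject @RestClient OrderClient orders;
    @Inject @RestClient RateClient rates;

    @GET
    @Path("/{customerId}")
    @Produces(MediaType.TEXT_PLAIN)
    public String csv(@PathParam("customerId") String customerId) {
        Customer customer = customers.byId(customerId);
        Rate rate = rates.current(customer.currency);
        StringBuilder csv = new StringBuilder("customer,order,total\n"); // invented columns
        for (Order order : orders.byCustomer(customerId)) {
            csv.append(customer.name).append(',')
               .append(order.id).append(',')
               .append(order.total * rate.value).append('\n');
        }
        return csv.toString();
    }
}

// Plain POJOs carrying the gathered data
class Customer { public String name; public String currency; }
class Order    { public String id; public double total; }
class Rate     { public double value; }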

Still, the service has to work under some pressure, so OpenShift scales it by adding pods. Since the service is stateless, we are totally ok with that! We also don’t have to care about the REST providers on the other side. It’s their responsibility to scale 🙂

So, now comes the fun part!!! I started a new project by copying one of the Quarkus quick-starters. Yes, I could generate a pom from an archetype.. but I’m lazy.

I then just copy-pasted the code from my old project.. and it shone red.. Especially the MicroProfile annotations.. ah. What’s wrong? Three minutes of googling told me that for the MicroProfile stuff I need to add some SmallRye Quarkus extensions.. Ok, copy-pasted them into the pom.xml.. Yay, the red is gone.
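(By the way, if you don’t feel like editing the pom by hand, the Quarkus Maven plugin can add extensions for you, something like the line below. The exact extension names depend on the Quarkus version and on which MicroProfile APIs you use, so check mvn quarkus:list-extensions first.)

mvn quarkus:add-extension -Dextensions="smallrye-rest-client,resteasy-jsonb"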

mvn clean package

… and BUILD SUCCESSFUL!

Ok, you made me sweat! Now let us:

mvn quarkus:dev

.. and .. http://localhost:8080/api/v1/invoicingRXT/1253211 (some contract)

AND I HAVE THE RESULT! That’s a WOW!!! This was one of the fastest migrations I’ve ever done! I LOVE STANDARDS! Long live MicroProfile!!!

Ok, I have earned my coffee. It took me.. 17 minutes. Yes, this was mostly copy-paste, but look how cool that is!

… But that’s not over! The Quarkus documentation says that if I add -Pnative to the Maven build command it’ll produce a native image! Let’s do it! Luckily, I had already set up the GraalVM installation.

mvn clean package -Pnative

Waiting.. 1 minute… 2 minutes.. BUILD SUCCESSFUL! Lovely! Yes, it takes some time to build a native image! I should admit that the business logic code was written in quite a straightforward way, without any fancy constructs, just some POJOs. There were no GraalVM-specific issues.

Now let us just run the executable:

./invoicingRTXApp-runner

and the service is up and running in about a second! Although the console says the startup time is 0.212 sec, technically from the run command to a running service it is about 2 seconds. THAT DOESN’T COMPARE to the ~49 seconds startup time of the WildFly server..

Ok, now let’s go to http://localhost:8080/api/v1/invoicingRXT/1253211 and here is what I see:

“”,””,””,””,””

Something went wrong!!! Why does the example from the quick-starters work then? Hmm… It looks like I’m missing one annotation on some of my classes – @RegisterForReflection. Reflection works a little differently in GraalVM. Building it once again. Waiting another two minutes. Oh, how can two minutes last so long..
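While the build is running, here is roughly what that fix looks like. InvoiceLine is an invented name standing in for the POJOs that get serialized reflectively and came back empty the first time:

import io.quarkus.runtime.annotations.RegisterForReflection;

// A GraalVM native image only keeps reflection metadata for classes that are
// explicitly registered; Quarkus lets you do it with a single annotation.
@RegisterForReflection
public class InvoiceLine {
    public String customer;
    public String order;
    public double total;
}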

Good! BUILD SUCCESSFUL! Now let’s go to http://localhost:8080/api/v1/invoicingRXT/1253211 again, and here is what I see:

data1,data2,data3

IT WORKED!!! (Now imagine the famous Dexter from the Dexter’s Lab cartoon shouting this out loud). That is so damn cool!!!

Nice! It’s been 42 minutes since the beginning of the experiment (coffee break included)!

Ok, now let us go back to the OpenShift setup. It would be nice to see if it’s ok in our test environments under some pressure. After some yaml permutations I rerouted 10% of the traffic to the new native image pod. After 4 hours of watching it work I see no errors! Only occasionally do I receive messages from surprised testers like “Something’s wrong, some pods start in only 3 seconds…”. And I say “Haha! This is magic.”

Now some intermediate conclusions:

– WildFly 78 MB image, Native 21 MB image;

– WildFly ~49 seconds start-up time, Native ~2 seconds start-up time;

– WildFly ~300 ms per request, Native ~270 ms per request (we are dependent on other services);

– WildFly ~359 MB RAM (serving 1 request), Native ~22 MB RAM (serving 1 request).

Lovely! Just lovely!

What can I say – Quarkus is definitely a good thing to play with. It performed extremely well on (almost) production code. Yes, I know the example is really stupid, but it’s a real world demand. The migration took me about 40 minutes of just copy-paste. I love standards!

This coming Monday I’ll make a full native setup, and we’ll see how it goes! Maybe even in production soon..

Disclaimer: Sorry, I can’t share the code. It’s corporate..