All posts by dalexandrov

Trip Report: JavaZone 2018 🇳🇴

Hmm, too few posts. I definitely have to change this. In my defence I’ll say that this summer alone I’ve visited 12 countries. But you will argue that, for example, Josh Long (my hero) did 12 countries in just this one week, and he still managed to write several blog posts in that time. And you are right, I’m too lazy, I have to post more often!

Finally I’m in the local Starbucks, my cappuccino is ready, looks like it’s a perfect time to remember one of the greatest adventures of this year so far – JavaZone 2018!

When earlier this year, in May, I received the email telling me that my talk “Java and the GPU: unlock the massive performance” had been accepted for this year’s edition of JavaZone in Oslo, Norway, I literally jumped out of my chair shouting “Yes!!!”.

Visiting this conference was definitely one of my dreams, I’ve heard so many good words about it! Now finally this dream came true. Huge thanks to the committee for accepting my talk!

Finally September arrived, I landed in Oslo and literally fell in love with Norway at first sight!

An express train brought me from the airport directly to Oslo central station in only 19 minutes. You may ask – so what? Well, the airport is about 60 km away from the city and we were cruising at more than 220 km/h. And those trains are scheduled on this route every 10 minutes. This is so cool!

For this conference the train was the best choice, as the venue – the Spectrum arena – is right next to the central station. The hotel was there as well, so we were perfectly located.

Oslo welcomed us with grey, rainy weather

But just 20 min later it was beautiful and sunny, like in every northern town

In the evening we were invited to the speakers dinner that took place in a beautifully located restaurant with some stunning views of Oslo!

Oh, that’s Chris Thalinger, also taking some photos of this wonderful sunset. By the way, it’s raining.

Definitely a wonderful place to relax before the conference. We stayed there until late.

The next day was the first day of the conference. The first thing I saw was the looooong queue of the attendees.

The good (or maybe the bad) part was that my talk was scheduled for the second day. So on the first day I had a chance to explore a little bit.

The conference organisation, or I may even say “architecture”, is totally different from all other conferences I have attended. The venue is a huge arena where big concerts take place. There is a big central area where you can find a lot of software companies’ booths.

 

Just take a look at how big it is and how many people are in!

The conference kick-off was lovely with some guitar players


The coolest part is that practically all the booths serve their own food all day long! Yes, yes… you can have food all the time, all day long… all two days! There was only one major problem for me – I couldn’t taste it all! I didn’t have enough capacity! Definitely OMG.

And not only food – there were tons of other cool activities.

One of the unique features of JavaZone is the 8 parallel tracks. Yep, 8! That’s a lot! The rooms themselves are not very big, but that makes them cozy.

That’s me attending Vlad Mihalcea’s great talk.

And like at every conference, the main reward is actually meeting and talking to people! Hey, that’s Chris (…and Doug)!

My good friends Simon, Simon, Phillip and my new friend Jennifer

Finally could meet Sharat Chander in person!

Awesome hardcore JVM company of Doug, Volker and Steve

The discussions we had were really useful to me!

At the end of the day we had a wonderful hangout with David, Vincent and Matthew

Then came the funny part. My talk was scheduled for 9:00 the next morning. I was sincerely wondering who would ever be interested in GPUs and massively parallel computation so early in the morning. I was a bit nervous as my talk was approaching.

But I was ready!

So I was expecting a maximum audience of 3 people to show up. Imagine my surprise when I found the room fully packed!

Oh My God! Absolutely unexpected!

I sincerely want to thank everybody who came. I hope the talk was useful and fun for you!

By the way, it became available just a few hours later on Vimeo! Cool organisation at JavaZone!

When I was done with my talk I took the chance to hang around the beautiful city of Oslo!

Definitely a beautiful capital!

JavaZone 2018 was a great new experience for me! I’ve seen a cool organisation, met great friends and made new ones!

I want to say huge thanks to Rafael, Rustam, Mark and all the team for having me and making this conf a blast!

At the end, as always, my IMHO best five sessions of the conference, from those I was able to see live myself:

Now I’m on my way, watching the other talks I’ve missed (and those I’m able to understand)! They are just great!!!

So good bye Oslo! Thank you for being awesome! Hope to see you again!

 

 

Trip report: Devoxx 2017 🇧🇪

I’ve watched all of the published ~180 videos from this year’s Devoxx Belgium. Not all of them to the end, but it was still pretty hard. Also, it’s really nice that the videos are published the very next day.

This is my first trip report. I’ve never done anything like this before, but I think now is a good time to start. And there is a nice reason for it! I had the chance to speak at Devoxx 2017!

Disclaimer: There will be a lot of Me, Myself and I in this post. It’s not very technical; it’s just me sharing my own emotions. …I’ll call it a “smoothy post”. If you want to get straight down to my recommended talks (my post-event playlist!) skip to the end section.

Here I’d like to encourage you to attend this type of event, and your employers to support these initiatives. These events are great fuel for personal growth!

All the conferences…

I think the best introduction for this was done in the previous post, so I’ll just copy and paste it below:

This year was quite a DE(VOXX)ED one for me! First I had a chance to speak in wonderful Romania, then beautiful Vienna, after that amazing Lugano, then the loveliest Krakow, and finally at the huge Antwerp event! In between I also gave some talks in my home towns, Sofia – Java2Days and St. Petersburg – Joker! Let’s not forget that I was in the dream team of 5 that organized and ran the coolest jPrime conf!

I’ve had quite an intense year!!! Each of these conferences deserves its own separate trip report. All of them were just wonderful!

However, I’ve focused on this event because, well, being selected as a speaker at Devoxx Belgium is quite an honor! And it’s at least twice as honorable to have two sessions!

Why is Devoxx Belgium special?

This is my third time in the city of Antwerpen, but my first time at Devoxx. I believe that every developer must go there at least once. Actually, it would be even better to go every year! Just because everybody is there, almost all of the Java influencers! Not only can you listen to their talks, you can literally talk to them directly. This is what makes this conference truly unique, especially for Europe!

(Well, I haven’t been to JavaZone yet. People say it is comparable in attendee count. I hope one day I will be able to go there as well.)

Unfortunately I was not able to be there all five days, so I missed all the Deep Dive sessions. I was only able to make the last three days, but it was still great!

The funny thing is that the easiest way for me to get to Antwerpen, Belgium, was via Eindhoven, Netherlands. It is only a one-hour drive by car from Antwerpen, and Ryanair flies there.

The event takes place in Kinepolis.

As I came to the venue, as a speaker I received a very special gift:

My first impression was that Belgium is not far enough north for this kind of hat. It looks more Russian than Belgian! But after spending some time in Antwerp I realized that this was actually the best gift for this time of the year! It kept me dry and warm. So thank you very much!

And since we’re talking about hats, they were quite trendy at this conference!

(Baruch Sadogursky, Alexey Shipilev, Oleg Shelaev)

Since I arrived on Wednesday afternoon, I could not attend any sessions that day, simply because all of my good friends were there in the lobby.

I just could not stop myself from chatting and greeting everybody!

I was so happy to meet my amazing friend Dr. Venkat Subramaniam and great Kirk Pepperdine!

My dearest friend Roberto!

Edson Yanaga and Peter Palaga!

And of course Sebastian Blanc!!!

(Yes, it’s blurry; it was taken at the end of the day when everybody was tired)

Meeting people is definitely the coolest part of every conference!

I had a very interesting discussion with Chris Thalinger about Project Sumatra and with Johan Vos about Gluon! I chatted with David Delabassée, met again with Matt Raible, and had some shawarma with Rafael Winterhalter.

These guys are awesome!

By the way, at the end of the day I had the chance to attend the speakers’ BOF by Hubert Sablonnière.

Hubert always hosts very useful discussions about how to speak and perform well at conferences. I attended the first such discussion in Morocco and I liked it a lot!

Day one ended in a very Belgian way:

Day two

Day two (or technically day four) was the day I gave both of my talks. But before that, I was honored to be interviewed by the amazing Yolande Poirier about my talk “Java and the GPU”.

It was also a great pleasure to chat with the wonderful Katharine Beaumont, whom I’ve met many times throughout the year at different Voxxed/Devoxx events.

To tell you the truth, I liked the venue! In many ways cinemas are great from a speaker’s perspective: a huge movie screen, plus very comfortable seats for the attendees!

As a disabled person in a wheelchair, it was always easy for me to find a convenient place as an attendee.

From the organisers’ perspective I know that there are some issues with cinemas, but still it was ok!

Finally it was my turn to speak! The first session was a bit strange for me. It was not exactly technical, it was more social. It was named “Disabled > work > enabled”. I’m truly happy that the organisers gave me a chance to do this quickie. In those 15 minutes I tried to share my experience and how it helped me, as a disabled person, to find my place in this industry. I also tried to convince companies to hire more remote workers, thus equalizing the chances of disabled people to get a good job.

Video can be found here:

My second talk was “Java on GPU”.

But just before that, quite accidentally, I was introduced to Brian Goetz himself by James Weaver. I was really happy! It’s a pity I had to prepare for my talk and leave the conversation. My deepest respect for both of them!

I knew from the beginning that my talk about the GPU is not a hype talk, so I didn’t expect many people to come. Still, I was quite happy that the hall was almost full!


(This is 15 minutes before the talk)

I must admit I felt so good when the demo worked. Since I was using the Amazon cloud, I wasn’t quite sure that the demo gods would support me, but it worked!

The video is available here:

By the way, we gave away some of the Bulgarian Java User Group T-shirts at this event for free!!

It was an amazing experience! I want to thank everybody who attended this session; I sincerely hope you found something new for yourselves! I had a lot of questions and discussions after the session. Although the scope of the technology is quite limited, if the task is suitable, some huge benefits can be achieved. I’ve tried to demonstrate this. And GPUs can be easily used from the Java world.

After I was done I got this wonderful feeling of relief! That was a good time to hang around!

And suddenly who do I see! Josh!! My old buddy Josh Long! OMG!! I must admit he was one of the people who inspired me to try speaking at conferences.

Not only Josh! Brother Mark Heckler is also here!

But that’s not all! I’ve met Stephen Chin! I’ve been watching his NightHacking series since the very beginning! And now I’ve met him!

At the end of the day there was a special edition of the Russian podcast Razbor Poletov.

We recorded a live episode together with Baruch Sadogursky, Viktor Gamov, Alexey Shipilev, Alex Borisov, Andrey Cheptsov and Mykyta Protsenoko.

http://razbor-poletov.com/2017/11/episode-144.html (in Russian)

It was a pleasure for me as a regular listener to be part of it this time.

Day three

Day three (or technically day five) was a short day. Unfortunately only half a day was allocated for the final sessions of the event, so it was wonderful to just attend the talks. Still, it was a tough choice – I wanted to attend all of the sessions! But it all ended with the wonderful talks by Stuart Marks, Heinz Kabutz and Brian Goetz.

It was great! Definitely a must attend event!

After the conference…

As we had some time and our return flight was scheduled for the next day, we decided to take a small trip to Gent. An absolutely lovely and very beautiful town!

The technical side

Ok, this was the “emotional” part of the event. Now some technical thoughts.

This Devoxx actually represented all of the modern trends in the Java (and around-Java) world:

  • Java 9 and modularity;
  • Java Enterprise;
  • Spring and all around Spring;
  • Kotlin;
  • Machine learning and AI;
  • Some Devops/Containers/Clouds;
  • Microservices;
  • Hardcore stuff.

As I’ve mentioned above, I’ve watched about 180 videos of the talks I could not attend. So I’ve made my own shortlist of the talks I liked the most, including those I saw there myself:

Once again!

It was a truly great experience and a lot of fun! Special thanks to Stephen and the team for having me!

And special thanks to Ivan Ivanov, Vlado Pavlov and the Bulgarian guys/gals for the full-time support!

And as a member of the Gang of Five which stands behind the jPrime conference, I’ll try to share the experience I’ve gained with the Bulgarian Java community and inside my company.

I’ll try to make some more posts.. if my talks are accepted elsewhere 🙂

Java as a technology glue, or how to use GPU from JavaScript

Ah! It’s been a while since I last posted!

But this year was quite a DE(VOXX)ED one for me! First I had a chance to speak in wonderful Romania, then beautiful Vienna, after that amazing Lugano, then the loveliest Krakow, and finally at the huge Antwerp event! In between I also gave some talks in my home towns, Sofia – Java2Days and St. Petersburg – Joker! Let’s not forget that I was in the dream team of 5 that organized and ran the coolest jPrime conf!

Quite an intense year!!!

So finally this weekend I had some time just to play. Of course play with coding!

As maybe some of you saw, I’m really interested in the Nashorn engine for running JavaScript on the JVM. I once gave some talks about it!

But that’s not the only thing that keeps me busy! For several years I’ve been interested in General Purpose computations on Video Cards (GPGPU)! I even gave some introductory talks on how to use the GPU and Java together.

On Devoxx in English:

https://www.youtube.com/watch?v=BjdYRtL6qjg

And the one from Joker, in Russian, will hopefully be available soon 🙂

But! What will happen if I actually unite these two passions of mine? How about running some code on the GPU, but submitted from… JavaScript!!!

You will say: just use WebCL! But isn’t it a little bit not ready yet? And as far as I know it is not exactly a full binding to OpenCL, but mainly browser-oriented stuff. Please correct me if I’m wrong. I’ve even played with some experimental drafts.

What if I want full control of our computations on the GPU through JS, and want to make it all very dynamic?

Yes, yes!! Nashorn is here to help us! And yes, we have the full power of the available bindings like those provided by JOCL!

So let’s do it! The usual vector-add example will be cool enough!

Here’s the JavaScript code; I just put it all in a jscl.js file. As you can see, at the beginning I’ve added some type definition shortcuts just to make the code look more readable.

To run it from the console I just launch the script with jjs, putting the JOCL jar on the classpath.

jjs is available by default if you have at least Java 8 installed.

The JOCL jar you can just get from their site.
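Since the jscl.js listing itself is not reproduced here, this is roughly what the script drives: the same static CL.* methods from the JOCL bindings that you would call from plain Java. Below is a hedged sketch of the equivalent Java vector add (kernel source and sizes are illustrative); from Nashorn you reach the very same methods via Java.type("org.jocl.CL").

import static org.jocl.CL.*;

import org.jocl.*;

public class VectorAddSketch {

    // A trivial OpenCL kernel: c[i] = a[i] + b[i]
    private static final String KERNEL_SOURCE =
        "__kernel void vecAdd(__global const float *a,\n"
        + "                     __global const float *b,\n"
        + "                     __global       float *c) {\n"
        + "    int i = get_global_id(0);\n"
        + "    c[i] = a[i] + b[i];\n"
        + "}";

    public static void main(String[] args) {
        final int n = 1_000_000;
        float[] a = new float[n];
        float[] b = new float[n];
        float[] c = new float[n];
        for (int i = 0; i < n; i++) { a[i] = i; b[i] = 2 * i; }

        CL.setExceptionsEnabled(true);

        // Pick the first platform and the first GPU device on it
        cl_platform_id[] platforms = new cl_platform_id[1];
        clGetPlatformIDs(1, platforms, null);
        cl_device_id[] devices = new cl_device_id[1];
        clGetDeviceIDs(platforms[0], CL_DEVICE_TYPE_GPU, 1, devices, null);

        cl_context_properties props = new cl_context_properties();
        props.addProperty(CL_CONTEXT_PLATFORM, platforms[0]);
        cl_context context = clCreateContext(props, 1, devices, null, null, null);
        cl_command_queue queue = clCreateCommandQueue(context, devices[0], 0, null);

        // Copy the input arrays to the device, allocate the output buffer
        cl_mem memA = clCreateBuffer(context, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                Sizeof.cl_float * n, Pointer.to(a), null);
        cl_mem memB = clCreateBuffer(context, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                Sizeof.cl_float * n, Pointer.to(b), null);
        cl_mem memC = clCreateBuffer(context, CL_MEM_WRITE_ONLY,
                Sizeof.cl_float * n, null, null);

        // Compile the kernel and set its arguments
        cl_program program = clCreateProgramWithSource(context, 1,
                new String[] { KERNEL_SOURCE }, null, null);
        clBuildProgram(program, 0, null, null, null, null);
        cl_kernel kernel = clCreateKernel(program, "vecAdd", null);
        clSetKernelArg(kernel, 0, Sizeof.cl_mem, Pointer.to(memA));
        clSetKernelArg(kernel, 1, Sizeof.cl_mem, Pointer.to(memB));
        clSetKernelArg(kernel, 2, Sizeof.cl_mem, Pointer.to(memC));

        // Run n work items and read the result back
        clEnqueueNDRangeKernel(queue, kernel, 1, null, new long[] { n }, null, 0, null, null);
        clEnqueueReadBuffer(queue, memC, CL_TRUE, 0, Sizeof.cl_float * n, Pointer.to(c),
                0, null, null);

        System.out.println("c[1] = " + c[1]); // expect 3.0

        // Clean up
        clReleaseMemObject(memA);
        clReleaseMemObject(memB);
        clReleaseMemObject(memC);
        clReleaseKernel(kernel);
        clReleaseProgram(program);
        clReleaseCommandQueue(queue);
        clReleaseContext(context);
    }
}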

And here is what we got (on my Mac):

Isn’t this lovely!!

A funny fact is that, because of the memory latency, it takes about 700 ms no matter whether I run 1k computations or 100M of them 🙂

Still, this may be a good example of how Java/JVM can be a kind of technology glue. Bearing in mind that there are more than 40 languages running on the JVM, all of them can benefit from the de facto standard libraries already available for Java.

This is so cool!! Have fun!

Thoughts on Enterprise

I’ve always considered myself a Spring guy. I mean, it’s so cool to code with Spring! Spring is quite honestly the essence of modern Java enterprise! It has everything for building an amazing, modern, scalable web application. Most of the routine is highly automated there. With just a few annotations you can convert some crappy POJOs into a real-world working webapp. And it will work, it will scale, it will be testable! At some point it’s not even required to implement some methods: just annotate a method and Spring will read the method signature and implement it for you! I believe that in the future you will just describe what you want and Spring will generate the rest.

You may laugh, but just go to http://start.spring.io/ and it’s quite near to the idea above. Btw, it would be a cool idea to create a DSL to “talk to Spring”. I always liked and still like coding with Spring!
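To make the “Spring reads the method signature and implements it for you” point concrete, here is a minimal Spring Data JPA sketch (the Customer entity and repository are my own illustration, not from any project described here):

import java.util.List;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

import org.springframework.data.jpa.repository.JpaRepository;

@Entity
class Customer {
    @Id
    @GeneratedValue
    Long id;
    String lastName;
}

// No implementation anywhere: Spring Data parses the method name and
// generates the query "select c from Customer c where c.lastName = ?1".
interface CustomerRepository extends JpaRepository<Customer, Long> {
    List<Customer> findByLastName(String lastName);
}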

And then… suddenly, a disaster happened!

I had to start to work on a pure Java EE project! Oh no!

My first reaction was like: well, at least I will be paid for that.

At that time Java EE was considered something big, outdated, sluggish… in some parts even retro, with all the negativity of that word. And my reaction was predictable, since I’d had some “nice” experience with EJB 2. Yes, yes – EJB 2.0! Remember all those remote interfaces, local interfaces, etc., etc. But after that I never touched it again. Good for me!

From several conferences I had come away with the feeling that Java EE just copies everything from Spring and renames some of the annotations. And I said to myself – actually, that’s a huge plus for me! I just need to write @Inject instead of @Autowired.

So I started the project, and what was my first surprise? Of course it all started from Maven, and the pom file was really small – essentially the Java EE API itself and not more than 3-4 other small dependencies like PrimeFaces. And that’s ok! You have everything on board. And you don’t have to care about every API you use and every implementation of that API, etc., etc. Unless you want to spoil everything or, in other words, optimize.

And all of that ugly EJB 2.0 stuff is no longer there. Everything is very nice and pure! Again, with some annotations you convert your POJOs into a beautiful enterprise webapp! Java EE is AWESOME!

So everything is standardized. And that rules everything. In theory I don’t have to think about what platform my WAR runs on; it should produce the same output every time I call something from my app. Realizing this was like: OMG! So now we can develop one app and not think much about which dependencies to include. Take the app and drop it on any server supporting the current profile of Java EE. And we can do almost the same stuff as Spring does – or, I would say, the really essential part of it, with not so much sugar. Exactly – standards are the true essence of the common functionality widely used in the industry, at its best! Damn! I’m in love! From now on I’m a definite Java EE fan! And always will be!

And I have discovered the power of the @EJB annotation!
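Just to illustrate what I mean, a minimal sketch with made-up names: one annotation turns a POJO into a pooled, transactional, container-managed component, and another injects it.

import javax.ejb.EJB;
import javax.ejb.Stateless;

// (Two separate files in a real project.)

// A plain POJO becomes a pooled, transactional, container-managed bean.
@Stateless
public class InvoiceService {
    public void createInvoice(String customer) {
        // business logic runs inside a container-managed transaction
    }
}

// ...and any other bean gets it injected by the container.
@Stateless
class OrderService {

    @EJB
    private InvoiceService invoiceService;

    public void placeOrder(String customer) {
        invoiceService.createInvoice(customer);
    }
}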

So I’ve started digging in and then I faced some real life.

The first thing that blocked me was the… you are right – configuration! Every server is configured in its own way!!! So I can’t just drop my app in and make it work; I have to configure datasources, security and many other things. Oh… so now I have to ship my app with a bunch of configs… a little bit disappointing. Still, I don’t have to ship my app with tons of custom libraries, or even with the server itself, all set up.

Then some other issues came, like one with JAAS. JBoss/WildFly and TomEE both support it, but in slightly different ways… so in my code I had to put some ifs, see which platform the app is actually running on, and call the appropriate custom method.

Then another issue came. The PrimeFaces upload worked gracefully on TomEE and didn’t even show up on WildFly. With some insane magic I found out that if I removed the upload filter from web.xml the situation would switch to the opposite – it would work on WildFly and stop working on TomEE. And that was predictable, since at that point they used different versions of Mojarra…

So… in theory I had to ship different web.xml files. But that’s so NOT beautiful! I spent one night almost in tears, unable to find a nice solution. But after some beer an ingenious solution came to my head – create a server “on start” listener and fix everything that’s missing “on the fly”. That was so brilliant that the next night I didn’t sleep either, since I was drunk in the bar. Later I couldn’t resist and put some other server-dependent config in there as well. But those were just two other small things.
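The idea of that listener, roughly, is something like this (only a sketch: the class name is made up, and the exact strings returned by getServerInfo() differ per container, so the detection logic is an assumption):

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

// Runs once at deployment time, before any request is served.
@WebListener
public class ServerSpecificSetupListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // getServerInfo() returns a container-specific string; adjust the checks per server.
        String serverInfo = sce.getServletContext().getServerInfo();
        if (serverInfo.contains("WildFly") || serverInfo.contains("JBoss")
                || serverInfo.contains("Undertow")) {
            // apply the WildFly/JBoss-specific tweaks "on the fly"
        } else if (serverInfo.contains("TomEE") || serverInfo.contains("Tomcat")) {
            // apply the TomEE-specific tweaks "on the fly"
        }
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // nothing to clean up
    }
}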

And what about performance and resource consumption? The average memory consumption of this app, written with Java EE 7, surprised me. I was expecting something like several gigs, but it was only about 250 MB of RAM on TomEE with about 100 users connected. Not bad at all!!! I even packed it as a single executable jar with TomEE and uploaded it to one of the cheap cloud providers. It performed great! Beautiful!

Btw, this app I’ve written had to do some batch activities. And guess what? I didn’t even have to put any additional dependency in my Maven build or reinvent the wheel, since JBatch was already there in that Java EE provided dependency. I just used it! That’s so cool!
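For illustration, a minimal JSR-352 sketch of how little code that takes (the job name and batchlet class are made up; the job itself would be declared in META-INF/batch-jobs/nightly-export.xml):

import java.util.Properties;

import javax.batch.api.AbstractBatchlet;
import javax.batch.runtime.BatchRuntime;
import javax.inject.Named;

// Referenced by name from META-INF/batch-jobs/nightly-export.xml.
@Named
public class NightlyExportBatchlet extends AbstractBatchlet {

    @Override
    public String process() throws Exception {
        // the actual batch work goes here
        return "COMPLETED";
    }
}

// Kicking the job off from anywhere in the app:
class BatchStarter {
    long startNightlyExport() {
        return BatchRuntime.getJobOperator().start("nightly-export", new Properties());
    }
}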

So can Java EE rule them all? In some cases, definitely yes, especially when your tasks are standard enough. And I may say that, from my decade in this field, most tasks are quite standard, with really few deviations.

And what about Spring? With Spring you can do the same and sometimes more (and I wonder if it can do less). Yes, there are some differences in the “philosophy”, but it is obvious that many of the coolest ideas now available in Java EE actually came from Spring, or were deeply influenced by Spring. If there is a request to build something with a very narrow time-to-market frame, I think I’d first think about Spring, especially Spring Boot. There I can put a piece of code into real-world production within just two hours, although this can easily be done with TomEE or Payara as well. There are so many things available out of the box: security, data, MVC, even batches! Batch is one of the examples of how a Spring solution inspired the creation of a JSR (JSR-352). Btw, now it’s the opposite – Spring Batch conforms to the standard; it already supports the JBatch job declaration.

So they both do the same? Here is one small thing, rarely discussed. Spring is a proprietary solution. It’s a full ecosystem driven by one company, with a lot of open source effort included. Still, the evolution decision process is a bit different there.

So isn’t Oracle doing the same? I personally think it has tried to, but couldn’t. Actually the Java EE standard’s evolution and support are driven by the Java Community Process. And unlike Spring, Java EE is a standard, not a product. The moment a discussion about a possible Java EE end of life emerged, the community and the EE vendors started a campaign against those predictions. The well-known Java EE Guardians were formed, with great support from the community and James Gosling himself. Four big Java EE middleware vendors have even started a common effort to create a subset of Java EE suitable for optimal microservices operation, called Microprofile.io. And it’s fully community driven and open. The guys standing behind this initiative are really amazing! Truly passionate professionals who do care!

So Spring and Java EE are both comparable and incomparable. And I believe they both serve their needs, although in many areas they are overlapping.

I prefer not even to compare them, but there is one thing we developers quite often forget – the financial part of the story. As written in Adam Bien’s post, Spring has a very clear support policy – upgrade or pay for support. This once played out badly for us. We started a project with one version of Spring, and somewhere in the middle a new version was released with some of the interfaces deprecated. This caused us some delays while we were updating our code. Still, their corporate support is really awesome.

In terms of Java EE, this question kind of moves to a hierarchically lower level – to the vendors that implement the standard, each with its own support strategy. But the good thing is that they have to conform to the standard, and the standard does not change too often. On the other side, not all issues can be solved entirely with standard, portable code. At some moment it comes to the point of “no turning back”, when it’s necessary to switch to, for example, provider-specific optimizations of the underlying API. And then the portability of the code declines drastically. But you are still not fully tied to one and only one vendor’s support, and switching is much less painful. Btw, this question about equalizing some of these parameters arose at the Microprofile.io multi-vendor session at the latest Devoxx.

So which one is better is a tricky question. The answer, though, is very easy: well, it depends! It’s not always about the technology. A clear understanding of how your app will perform and evolve can have more impact than just choosing the latest and trendiest technology.

But if we take just the technology itself, in my humble opinion Java EE and Spring create a great synergy together and move enterprise Java forward!

Fetching all the data in JPA?

I’m really not much into blogging recently, but I couldn’t not share this 🙂

ORM is cool. No, it’s really cool! In Java EE it’s the standard for persisting data. It is so cool that by using it people start to forget that there is a database underneath the wonderful and extremely beautiful object model that gracefully expresses the business needs of the app. People don’t have to think about all the chemistry happening behind the curtains. Yes, in an RDBMS the data lives in a totally different world, with absolutely different rules and absolutely different mathematics, but who cares! It is all so automatic! You don’t have to care about low-level ResultSets, mapping the data, their relations, etc., etc.… And modern ORMs are so smart that the data is loaded lazily and only when it’s needed, making the app oh-so-optimized.

And the app works perfectly while it’s being developed. Even when it’s accepted it works beautifully and as required.

But suddenly something goes wrong. Unexpectedly there is more than one user connected to the app. How could this happen? And the server treacherously crashes with OutOfMemory errors! OMG!

No problem! That can be easily fixed – just by adding some more RAM. For several weeks it works wonderfully. But then the OutOfMemory error is back! No, that can’t be possible!

That’s the bloody Java EE! It’s very heavy and bad! Should have used Spring instead! Just because it’s cool! Ah.. the app should be rewritten! And of course there is no budget for that!

But still this production issue has to be fixed. And some more RAM is added just to keep it alive. Several iterations like that lead back to the same situation. Then, to keep the app running, the cleverest decision is made: a limit on concurrent users is put on the app. And yay! The scalability problem is solved! Solved, but not exactly: after another period of time the app crashes with the same error even with one user!

A common story? I believe so. But it’s not very popular to talk about issues like that, since we are all cool programmers and never make such mistakes.

This story happened to me. Luckily, though, I was not the one who coded this app. During my freelance period I was hired as a consultant to solve this issue on a live, relatively (not so much) legacy system.

Imagine you have quite an ordinary Java EE webapp, with JSF in front, some business logic expressed with mostly stateless beans, and Hibernate underneath. Nothing special.

The user logs in and navigates to a certain page which should display a table with some information. One of the columns has links to some downloadable items.

So I logged in, clicked on the required link… and got the error. It was that easy to reproduce the bug. So the first action was to look at the heap dump with MAT… and the results were terrifying! A list of several thousand objects, each having a String field with several megs of XML inside!!! That’s a WOW!

The JSF page itself is designed in the simplest way as well: all the several thousand items are displayed on one page, and the user has to be able to download the XML directly from the object in the table. And you can see it yourself: that’s so wrong! With 10 test items during development this approach worked perfectly, and it’s hard to see where it leads. Yep, for a developer an XML is just a string, which can be a member of an object. What can go wrong…

So imagine the following approximation:


Two very easy classes, a Parent and a Child, with a bidirectional OneToMany relation between them.
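The original listings were screenshots, so here is a minimal sketch of what such a pair of entities typically looks like (field and table names are my own illustration, based on the description that follows):

import java.util.List;

import javax.persistence.CascadeType;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.JoinColumn;
import javax.persistence.Lob;
import javax.persistence.ManyToOne;
import javax.persistence.OneToMany;
import javax.persistence.Table;

// (Two separate files in a real project.)

@Entity
@Table(name = "PARENT")
public class ParentEntity {

    @Id
    @GeneratedValue
    private Long id;

    private String importantInfo;
    private String veryImportantInfo;

    @OneToMany(mappedBy = "parent", cascade = CascadeType.ALL)
    private List<ChildEntity> children;

    // getters and setters omitted
}

@Entity
@Table(name = "CHILD")
class ChildEntity {

    @Id
    @GeneratedValue
    private Long id;

    private String shownInfo;

    // the innocent-looking field that drags megabytes of XML into memory
    @Lob
    private String hugeXMLPayload;

    @ManyToOne
    @JoinColumn(name = "PARENT_ID")
    private ParentEntity parent;

    // getters and setters omitted
}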

The field names are self-explanatory. Everything that is important, very important or shown has to be visualized for the user in the JSF page. The other stuff – not on this page.

Looks ok for now… but have you spotted the problem?

Of course, it’s the hugeXMLPayload, which is always loaded although never displayed…

But, haha, this is so easy to fix! With just a few annotations, like these:
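…something along these lines – the commonly suggested fix of marking the heavy column as lazy (a sketch, continuing the illustrative ChildEntity from above):

import javax.persistence.Basic;
import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.Id;
import javax.persistence.Lob;

@Entity
public class ChildEntity {

    @Id
    private Long id;

    // ...the other fields stay as they were...

    // Ask the JPA provider to load the CLOB only when it is actually accessed.
    @Lob
    @Basic(fetch = FetchType.LAZY)
    private String hugeXMLPayload;
}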

To some extent you are right. This should work if code instrumentation/enhancement is supported and enabled in the JPA persistence provider.

But let’s rethink the way CLOBs and BLOBs are stored. Should they live directly in an object field which is then mapped to some table column? I don’t think that’s a good idea. Big data should be accessed rarely, and the access should be really targeted. You may say: but I have annotated them, and according to StackOverflow the code should be instrumented and all this data should be loaded lazily. But just annotating them the correct way will not help in Hibernate versions lower than 5, as this code instrumentation was not properly ready yet. As you can see, it is just a usual String getter on an object field; this is not even a relationship which could be expressed with a join underneath.

This didn’t work for us.

So the first quick fix that was applied was, naturally, dynamic pagination in the JSF table. Thus in memory we had only a small subset of the data. Still, loading XML CLOBs into memory every time the page is loaded is a little bit stupid.

What might be another option to effectively NOT load the entire data set? And without changing the data model… and on a live system? It was not possible to create another “Attachment” entity and make a OneToOne relationship to the Child to keep it separated.

So how to do it in the most natural and portable way?

The answer I found in the book “High-Performance Java Persistence” by Vlad Mihalcea. I believe it is an absolute must-read for every full stack developer.

So the answer to our problem was unexpectedly simple. The first idea was to create a projection and fetch only the desired fields. But the results of such fetches are arrays of objects, and that’s a bit of a raw, low-level approach. There should be something more civilized.
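For comparison, such a projection would look roughly like this, and the List<Object[]> return type shows why it feels raw (query and field names follow the sketch above):

import java.util.List;

import javax.persistence.EntityManager;

public class ParentProjections {

    // Only the needed columns are fetched, but every row comes back
    // as an untyped Object[] that has to be unpacked by hand.
    public List<Object[]> loadSummaries(EntityManager em) {
        return em.createQuery(
                "select c.id, c.shownInfo, c.parent.importantInfo from ChildEntity c",
                Object[].class)
            .getResultList();
    }
}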

And the civilized approach is called subentities. In JPA it is absolutely legal to map different objects to the same table in the DB. These objects may contain a subset of the original object’s data, and JPA/Hibernate will always fetch only the fields relevant to the object. So we created a parallel read-only structure with subentities and added just one additional service to serve our JSF.

The ParentEntity was transformed into a ParentEntitySummary, and the ChildEntity into a ChildEntitySummary.
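Again the original listings were images; a reconstruction of the idea looks roughly like this (same illustrative names as in the first sketch):

import java.util.List;

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.JoinColumn;
import javax.persistence.ManyToOne;
import javax.persistence.OneToMany;
import javax.persistence.Table;

// (Two separate files in a real project.)

// Read-only "summary" view of the PARENT table: same table, fewer columns.
@Entity
@Table(name = "PARENT")
public class ParentEntitySummary {

    @Id
    private Long id;

    private String importantInfo;
    private String veryImportantInfo;

    @OneToMany(mappedBy = "parent")
    private List<ChildEntitySummary> children;

    // getters omitted
}

// Same trick for CHILD: crucially, hugeXMLPayload is simply not mapped here,
// so it is never selected.
@Entity
@Table(name = "CHILD")
class ChildEntitySummary {

    @Id
    private Long id;

    private String shownInfo;

    @ManyToOne
    @JoinColumn(name = "PARENT_ID")
    private ParentEntitySummary parent;

    // getters omitted
}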

We have just thrown away everything unnecessary and left what’s important! So good! If the JPA provider is Hibernate, we can additionally add an @Immutable annotation to declare our objects as read-only.

The magic here is in the @Table annotation: it’s easy to spot that each summary class points to the same table in the DB as the original entity – “PARENT” for the parent and “CHILD” for the child in the sketch above.

And we can absolutely legally work with them just like with any other entities:
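For example, the “one additional service” mentioned above could be as simple as this (a sketch):

import java.util.List;

import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

// Serves the JSF table from the lightweight summaries only.
@Stateless
public class ParentSummaryService {

    @PersistenceContext
    private EntityManager em;

    public List<ParentEntitySummary> findAll() {
        return em.createQuery("select p from ParentEntitySummary p",
                ParentEntitySummary.class).getResultList();
    }
}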

So much awesome! And it worked!

This of course is not a silver bullet. I would rather call it a good patching.

So if it is impossible to redesign the DB schema and you have to work with live data, this approach may be a nice escape hatch, especially for read-only data.

As a result, we went from 8 GB of RAM consumption down to only 89 MB on a JBoss machine (server memory footprint included). And it’s obvious that Java EE itself is very lightweight and quick!

So the idea of this blog post is to remind you that the developer is not the only user of the system. And “works on my machine” doesn’t mean it’s production ready!

And once again: the way an RDBMS represents data is different from the way objects do. JPA/Hibernate is an awesome tool, and it saves great effort! But it is still necessary to make correct decisions about how the data is stored!

Have fun with JPA!

Some tips on TomEE development environment setup

I recently wanted to take a look at what’s inside the TomEE app server.

We’re using it “heavily” in production, and I was curious how some things are done.

The great thing is that it’s just a usual Maven project. Actually, all the info can be found here – http://tomee.apache.org/dev/source-code.html – but a step-by-step walkthrough may be of help.

So I started by getting the source, executing:
git clone https://git-wip-us.apache.org/repos/asf/tomee.git tomee

Before importing the project into an IDE, it is a very good idea to make a full build by calling:

mvn -Dsurefire.useFile=false -DdisableXmlReport=true -DuniqueVersion=false -ff -Dassemble -DskipTests -DfailIfNoTests=false clean install

It will take a while; it will download all the dependencies and build TomEE without executing the tests. Something that pleased me a lot is that there is no need for any other special setup.

The integration with IDE is also quite seamless.

In IntelliJ IDEA, for example, just import the project from the POM:


Then check ALL:


and in about 30 minutes you will get a fully indexed working environment:


For Eclipse it’s almost the same. First import the existing Maven project into the workspace:


Eclipse finally supports hierarchical Maven projects:


And we are done!


A good idea may be to select the hierarchical representation of the project 🙂

The TomEE team has done a great job making the development setup extremely simple! Within just a few actions you can start developing for TomEE.

KUDOS to the team!

 

Nashorn Eclipse Development Environment Setup

Yo fellows!

I recently had some time, so I played with Eclipse to set it up for Nashorn. This article is adapted from the previous one regarding IntelliJ IDEA.

This setup was done on OS X Yosemite and the latest Eclipse (Luna), but the setup for other OSs should work almost the same way.

1. Verify you have JDK8 installed.

2. Using Mercurial, clone the repository http://hg.openjdk.java.net/jdk9/dev to the folder you want and execute get_sources.sh.

3. Now switch to Eclipse and create a project called Nashorn directly in the folder “<JDK9_SOURCES>/nashorn”


4. Eclipse is clever enough to find the source folders:


 

5. A slightly tricky part: compilation, development and debugging are currently done against JDK 8, since the IDE does not yet support JDK 9 and the jimage distribution mechanism. So, in order for the JDK’s own Nashorn not to interfere with the one we build, it’s a good idea to make a new copy of JDK 8 and remove nashorn.jar from it, located under <JDK8_ROOT>/jre/lib/ext/nashorn.jar:


 

 

6. Now add this JDK to the IDE:


7. Almost done. Another tricky part: in Nashorn the so-called “JavaScript” classes are generated. There is a special tool, “nasgen”, for that, and it is located in the “buildtools/nasgen” directory.


 

8. Before running Nashorn itself, the nasgen “all” Ant target should be run.

9. Add the resulting “nasgen.jar” to the “Build Path” if it’s not there already.


10. Navigate to the “<nashorn>/make” folder and run the “all” Ant target.

11. Add the resulting classes in “<nashorn>/build/classes/” to the “Build Path”:


 

12. Now it is possible to run Shell.java to explore and debug the code:


 

Debugging is available directly in the IDE:


 

Warning: bear in mind that some of the classes – the so-called “JavaScript” classes – are generated. Their “bootstrapping” classes are annotated with @ScriptObject. Take some time to explore them. They cannot be debugged from that perspective, but “System.out.println” might help :)

Have fun developing Nashorn!

Nashorn IntelliJ Idea development environment setup

Yo fellows!

As a part of the Bulgarian JUG I’m interested in contributing to the Nashorn project!

In this post I’ll try to describe how to set up an IntelliJ-based development environment for Nashorn.

This setup was done on OS X Yosemite and IntelliJ IDEA version 14, but the setup for other OSs should work almost the same way.

1. Verify you have JDK8 installed.

2. Using Mercurial, clone the repository http://hg.openjdk.java.net/jdk9/dev to the folder you want and execute get_sources.sh.

3. Create an empty Java project somewhere in your system but NOT in the folder where the pulled JDK9 sources are.

4. Make a project module with root “<JDK9_SOURCES>/nashorn”, and assign sources to “src/jdk/scripting/nashorn/share/classes”.


5. A slightly tricky part: compilation, development and debugging are currently done against JDK 8, since IDEA 14 does not support JDK 9 and the jimage distribution mechanism. So, in order for the JDK’s own Nashorn not to interfere with the one we build, it’s a good idea to make a new copy of JDK 8 and remove nashorn.jar from it, located under <JDK8_ROOT>/jre/lib/ext/nashorn.jar:


6. Now add this JDK to the IDE:


 

7. Almost done. Another tricky part: in Nashorn the so-called “JavaScript” classes are generated. There is a special tool, “nasgen”, for that, and it is located in the “buildtools/nasgen” directory.


 

8. Before running Nashorn itself, the nasgen “all” Ant target should be run.

9. Add the resulting “nasgen.jar” to the module dependencies.

10. Navigate to the “<nashorn>/make” folder and run the “all” Ant target.

11. Add the resulting classes in “<nashorn>/build/classes/” to the module dependencies:


12. Now it is possible to run Shell.java to explore and debug the code:


 

Debugging is available directly in the IDE:


 

Warning: bear in mind that some of the classes – the so-called “JavaScript” classes – are generated. Their “bootstrapping” classes are annotated with @ScriptObject. Take some time to explore them. They cannot be debugged from that perspective, but “sout” might help 🙂

 

Have fun developing Nashorn!