Tuesday, November 20, 2012

Code but not as we know it - Infrastructure as code

A rather boring session for the uninitiated, myself included. That said, the presentation seemed to contain a treasure trove of links and pointers to very useful tools, extensions, plug-ins, etc., should we ever want to head in the "infrastructure as code" direction.


"The Future of Software Development Process Methodology Effectiveness" and "Developers - Prima Donnas of the 21st Century" were both very entertaining sessions, pushing the attendees to reflect on our profession and the way we practice it.

Whether through sarcasm ("The Future of ...") or a more direct approach ("Prima Donnas ..."), no methodology or modern hype was left unridiculed. The basic message was that, very often, too much focus is put on the process.

These were also the kind of sessions you wish your manager had seen ;-)

Performance Optimization Methodology

A nice hands-on session on how to approach a performance optimization problem. I did miss an overview slide or checklist on the "methodology" that was applied, but nevertheless there were a lot of practical tips and pointers to useful tools spread throughout the presentation.

To ATDD and Beyond! Better Automated Acceptance Testing on the JVM.

This session showed how far you can take automated tests. Basically, using the tool set below, user acceptance tests were (re)written in code that was still easily readable and understandable. As a result, progress could be tracked directly against those acceptance tests (implemented, unavailable, broken).

It was an impressive practical example of something that is usually deemed ideal but practically unreachable. The initial overhead is still significant though, and programming knowledge is definitely required to write the tests and the underlying constructs.
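The actual tool set isn't captured in these notes, but the core idea, acceptance criteria written as code that is still easily readable, can be sketched in plain Java. All names below are invented for illustration; the real thing would sit on top of a proper test framework.

```java
// Hypothetical sketch: an acceptance criterion expressed as readable code.
// "Given an account with 100 EUR, when 40 EUR is withdrawn,
//  then the balance is 60 EUR."
public class WithdrawCashAcceptanceTest {

    // A tiny fake domain object so the example is self-contained.
    static class Account {
        private int balance;
        Account(int balance) { this.balance = balance; }
        void withdraw(int amount) {
            if (amount > balance) throw new IllegalStateException("insufficient funds");
            balance -= amount;
        }
        int balance() { return balance; }
    }

    // The test method reads almost like the acceptance criterion itself,
    // which is what lets progress be tracked directly against such tests.
    static boolean withdrawingReducesTheBalance() {
        Account account = new Account(100);
        account.withdraw(40);
        return account.balance() == 60;
    }

    public static void main(String[] args) {
        System.out.println(withdrawingReducesTheBalance() ? "PASS" : "FAIL");
    }
}
```

The overhead mentioned above sits mostly in building and maintaining the small domain vocabulary (like `Account` here) that keeps the tests readable.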

Intro to iOS 6 for Java Developers

With at least one (probably) native iOS application in our project pipeline, I thought this would be an excellent opportunity to be introduced to the world of Objective-C. Lacking almost any preexisting knowledge, I was a little surprised that the code not only looked unfamiliar from a Java developer's perspective, but that even with a C/C++ background it would take some getting used to.

The main concepts and differences are excellently explained in the syntax section on the Objective-C Wikipedia page.

It's also good to see that the language and libraries are being kept up-to-date with features like ARC (Automatic Reference Counting) to facilitate memory management.

For actual iOS development the Xcode IDE (only available for OS X) seems like an indispensable tool.

Monday, November 19, 2012

Effective dependency injection

In the introduction the presenters stated that this was a talk about how to use dependency injection effectively in general, not targeted at a specific product. Both presenters work for Google, so they chose Guice for their code examples. The majority of the talk turned out to be tips and suggestions on how to use Guice effectively. They used a number of specific API classes and mostly appended "I'm sure the other frameworks provide something similar". I've tried to translate some of their tips into a more general form.
  • Prefer explicit binding over autowiring and letting the framework decide on which implementation to inject. Especially if you write modules that can be reused.
  • Favor constructor injection over property injection: this way you get immutable classes and thread safety for free. Another benefit is that all your members are initialised correctly and your object is created in a stable state.
  • Avoid doing work in constructors, keep them as simple as possible and do initialisation in a separate init method that can throw exceptions if necessary.
  • Write system tests that verify that all your dependencies are wired correctly.
  • Avoid deep inheritance graphs. Always favor composition over inheritance (which is a principle of OO programming in general).
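To make the constructor-injection tip concrete, here is a small framework-agnostic sketch (class and interface names invented for illustration); a DI framework like Guice would simply call such a constructor for you:

```java
// Framework-agnostic sketch of constructor injection: all collaborators
// are passed in, fields are final, and the object is immutable, so it
// is safely publishable across threads.
public class ReportService {

    interface Clock { long now(); }

    private final Clock clock;   // injected, never reassigned

    // The constructor does no real work; it only stores dependencies.
    public ReportService(Clock clock) {
        this.clock = clock;
    }

    public String stamp(String message) {
        return message + " @ " + clock.now();
    }

    public static void main(String[] args) {
        // Explicit binding in miniature: the caller decides which
        // implementation to inject (here a fixed fake clock).
        ReportService service = new ReportService(() -> 42L);
        System.out.println(service.stamp("report generated"));
    }
}
```

Note how trivially the fake `Clock` slips in for testing: that is the payoff of keeping constructors simple and dependencies explicit.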
The speaker in this talk contradicted Kirk Knoernschild on how to modularize. He advised putting the interfaces, the implementation and the consumer in the same package, so that you don't need to expose public implementation classes. However, if you used OSGi you would have to compromise on that, because exported classes have to be public there. In his opinion there is hardly any need for things like Jigsaw or OSGi. The only reasons to use them would be an architecture based on plugins, like Eclipse's, that needs runtime extensibility, or different modules requiring different versions of the same module.
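A minimal sketch of that packaging advice (names invented; in a real codebase the interface and factory would be declared public while the implementation stays package-private — modifiers are omitted here so the sketch compiles as a single file):

```java
// The contract consumers program against.
interface Greeter {
    String greet(String name);
}

// Package-private implementation: invisible outside this package,
// so it never needs to be exported or made public.
class DefaultGreeter implements Greeter {
    @Override
    public String greet(String name) {
        return "Hello, " + name;
    }
}

// A small factory is the only way to obtain an instance, keeping
// the implementation class an internal detail.
final class Greeters {
    private Greeters() {}

    static Greeter create() {
        return new DefaultGreeter();
    }
}
```

This is exactly the arrangement that OSGi undermines: an exported `DefaultGreeter` would have to become public, leaking the implementation.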

A lot of people had great expectations of this session on dependency injection. All seats were taken, all stairs were full, people even sat on the ground in the front. My curiosity kept me in the room for an hour, but in the end the talk wasn't worth the sore back.

Friday, November 16, 2012


As you may know, we have issues with our cached Lucene indexes in WatchPro. I already had a look at Hazelcast a few weeks ago, but Christoph told me there are some performance issues with it (sometimes 100% CPU). Anyway, I thought this session could shed some light on that. Hazelcast is not only a partitioned cache across a cluster of nodes; it also has a mechanism to execute code on the node that contains the data. This would be a perfect match for caching our Lucene indexes. Right now each server has its own cache, which is way too small to hold all of the Lucene indexes of our biggest customers. As a result, the Lucene indexes are retrieved from the database over and over again, which leads to poor performance, especially when we need to fetch a large number of them. But if we could pool the memory and disk storage of our 8 watch servers into one big Lucene index cache, this might actually work.

I think the performance issues with Hazelcast are related to the backup mechanism. By default every cache entry has a backup entry on one of the other nodes. This is a feature we can do without, and apparently you can set the backup count to zero. We should give it a try.
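If we try this, disabling backups is a per-map setting in Hazelcast's XML configuration. The element names below follow the Hazelcast config schema as I understand it, and the map name is of course ours to choose, so this should be double-checked against the version we'd actually use:

```xml
<hazelcast>
  <map name="lucene-index-cache">
    <!-- No backup copies: if a node dies we re-read from the database,
         but we avoid the cost of replicating every cache entry. -->
    <backup-count>0</backup-count>
  </map>
</hazelcast>
```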

Thursday, November 15, 2012


I went to see the second part of this talk. I first went to see José Paumard's talk, but that was not my cup of tea. After the break Wesley Hales showed off some of the features of HTML5. Using his slidfast.js learning framework, he demoed WebSockets for live updates by having the audience connect to a voting page whose results we could watch come in on screen in real time. Web Storage allows you to keep a limited set of data on the client side. Syntactically, there are three equivalent ways to use it in JavaScript, but in practice you have to be very careful about which syntax you use in which browser: one notation can be up to three times slower than another in Firefox, yet three times faster in Chrome. A less useful feature for business purposes is attaching events to changes in the orientation of the user's device; he showed an example that flipped pages back and forth when the device was tilted. Geolocation and geofencing are used to limit the content delivered to the browser to what is relevant for the user's location. And last, he used Web Workers to run some calculations on multiple threads.

The final part of the talk was about web page performance. He showed an overview of experiments by Google, Yahoo and Bing on how slower performance directly impacts page usage. The same experiments were referenced in Tuesday's talk on "Faster Websites".
He introduced us to webpagetest.org, an online analysis tool that shows how a page is loaded and runs some tests to check whether you're following best practices for good performance. I think the PageSpeed Insights plugin for Chrome gives more info. Running webpagetest.org against www.thomsonreuters.com results in a score of 53/100, with room for improvement in compressing content, caching static content and using a CDN.
Using a CDN increases cache reuse for static files like jquery.js by serving them from a public server instead of your own. He did note that using a CDN can imply security risks, because you have no control over which version of a file resides in the user's cache.
In the end this was an entertaining talk with some nice things to keep in the back of your head.


My Devoxx takeaway this year is without a doubt AngularJS! It is an awesome client-side MVC framework. It is what HTML would have been, had it been built for dynamic content. It has two-way data binding, it supports templates and dependency injection, hell, they even designed it with testability in mind. If I were to build a new web app, I would use AngularJS on the client side talking JSON with a RESTful Spring app on the server side!


This session was an introduction to Spring for Apache Hadoop. This Spring Data project supports not only the core of Hadoop (HDFS and MapReduce) but also systems like Hive, Pig and Cascading. Again Spring excels at making it easy for developers to use technologies like Hadoop.

Wednesday, November 14, 2012

The Internet of Things

Devoxx tends to shift focus to another aspect of Java every year. This year I have the impression that the highlight is Java interfacing with all kinds of hardware. It started with the wristbands, which are equipped with an NFC (Near Field Communication) tag. At the entrance of every room there is an NFC device that allows you to vote for the sessions you attended (the votes are stored in a MongoDB). Then there are the touchscreens, custom-built with a Raspberry Pi board running the JavaFX Devoxx schedule app. And last but not least there are Aldebaran's humanoid robots stealing the show, especially this morning with an impressive dance act at the start of the keynotes. There were numerous sessions today covering all these technologies, including one about TinkerForge, and I attended them all. I know, it is not really our cup of tea, but I just couldn't resist. And I wasn't the only one: the Raspberry Pi session in particular was very popular, with the room packed with people.

Architecture all the way

Kirk Knoernschild is a gifted, charismatic speaker. Some quotes:

Software Architecture is all about taming complexity.
Maintenance cost far outweighs project budget.
Postpone irreversible (design) decisions as much as possible.
Architecture is a continuous effort, not a big up-front task.
Design for flexibility at the seams of a module.

Tuesday, November 13, 2012


This was a far better session, although 3 hours without a coffee break (and a smoke) was really long. Big websites like Google try to keep page loads under 250 milliseconds. Page loads between 100 and 300 milliseconds are already considered sluggish, and if they take longer than 10 seconds the user gives up. The speaker went to great lengths explaining how browsers work and how they can help us achieve better page loads. An overview of actions that may help: reduce the number of DNS lookups, avoid redirects (like our dashboard redirect, for instance), make fewer HTTP requests, use a CDN, GZIP assets, optimize images, add an Expires header, add ETags, put the CSS at the top, use async scripts (scripts that do a document.write block DOM construction), place scripts at the bottom, and minify + concatenate. He also showed how useful tools like PageSpeed Insights can be.
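As an illustration of the ETag item in that checklist, here is a small self-contained sketch (method names invented) of deriving a content-based ETag the way a server might; the server would send it in the `ETag` response header and answer `304 Not Modified` when the client echoes it back via `If-None-Match`:

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Sketch: derive an ETag for a static asset by hashing its content.
public class EtagDemo {

    static String etagFor(byte[] content) {
        try {
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            StringBuilder hex = new StringBuilder("\"");
            for (byte b : md5.digest(content)) {
                hex.append(String.format("%02x", b));
            }
            return hex.append('"').toString();  // ETag values are quoted
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);  // MD5 is always available
        }
    }

    public static void main(String[] args) {
        byte[] asset = "body { margin: 0; }".getBytes();
        // Same content -> same tag, so unchanged assets need not be re-sent.
        System.out.println(etagFor(asset).equals(etagFor(asset)));
    }
}
```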


The Java social JSR (JSR 357) was rejected because there was no real proof of concept. And because the APIs of the social networks (Twitter, Facebook, LinkedIn, ...) tend to change quite often, some think it cannot be standardized at all. Today some social frameworks already exist: Spring Social, DaliCore (a Belgian project), Oracle SocialLink and the one presented in this session, Agorava. Again the session was very technical. A lot of time was spent explaining which technologies Agorava uses (REST, JSON, OAuth, CDI). I really had the impression I was attending a CDI crash course instead of a session about social network APIs. The demo showed an HTTP 500 error (fixed after the coffee break) and the speaker had a thick French accent. I'm not sure Agorava is the best POC for the next social JSR.

CRaSH - The shell for the Java platform

A cool and promising Java power tool:


Highly extensible through custom Groovy scripts!

Monday, November 12, 2012


José Paumard thinks we are on the brink of the next big programming evolution. With C++ we no longer needed to maintain stacks. With Java we no longer had to manage memory. And soon the parallel constructs of Java 8 will significantly ease the path to multithreaded programming. The session was very technical, but luckily every now and then there were some very interesting bits that revived our attention.


In this presentation Kirk Knoernschild talked about the importance of developing modular software. Besides reusability, one of the main benefits he emphasized was the maintainability of the code. Just ask yourself how much time you spend merely trying to understand existing old code (sometimes your own) before fixing or adapting it. Software following the layered (Data Access, Domain, UI) paradigm is not really modular unless each layer results in a separate jar: you should, for instance, be able to deploy just the Data Access and Domain layers to do some batch processing that doesn't require user interaction.

Another thing he pointed out is that refactoring existing monolithic systems into a modular system can be painstakingly hard. The fact that the release of Jigsaw has been delayed to 2015 is all the more evidence that this isn't trivial at all. The best refactoring plan is to start with the layers.

After the coffee break he showed how easy it is to deploy modular software in an OSGi container. OSGi is far less intrusive than I thought: it only requires some entries in the manifest file and in the Spring configuration files. Although it was quite impressive how easily you can redeploy just one jar instead of the whole war, I don't think OSGi is something we will use in the near future. Modularity, on the other hand, is really something we should pursue.
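For reference, the manifest entries he was referring to are ordinary OSGi bundle headers in `META-INF/MANIFEST.MF`; a sketch with invented bundle and package names:

```
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.domain
Bundle-Version: 1.0.0
Export-Package: com.example.domain.api
Import-Package: com.example.dataaccess.api
```

Only the packages listed in `Export-Package` are visible to other bundles, which is exactly what makes per-jar redeployment and enforced module boundaries possible.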