Monday, December 30, 2013

Groovy's Smooth Operators

Take a trip back to 1984.  Apple releases the Macintosh, 'The Final Battle' is about to commence in V, and Scotland win the Five Nations, completing a grand slam in the process.  Right in the middle of the miners' strike in the UK, English pop group Sade release the catchy number 'Smooth Operator'.  It was a chart success in the UK and in the US (not to mention the German, Dutch and Austrian charts). Little did Sade know that decades later the Groovy programming language would feature several smooth operators that go above and beyond the standard set of Java operators.  Now, we'll leave the 1980s comparisons there and discuss a few of the smooth operators in Groovy in this blog post.


The Elvis Operator (so named after a certain person's hairstyle) is a shortened version of the ternary operator.  It is very handy for null checking. And remember, even though it can be argued that null handling is bad coding practice because it is better not to have nulls in your code, lots of APIs in a code base will inevitably return nulls, and in the real world, where you want some robustness and defensive coding to keep things smooth in production, this kind of thing can be useful.

Now here is how it works.  If the expression to the left of the Elvis operator evaluates to false (remember that in Groovy several things, such as null or an empty String, coerce to false), the result of the expression will be whatever is on the right.
String country = inputCountry ?: "Unknown country"
So, in the above example, if inputCountry (the value being tested) is null or just "", since both are false in Groovy, country will be assigned "Unknown country".   If inputCountry was not false, country would just be assigned its value.

Null-Safe Dereference (also called the Safe Navigation operator) 

This is especially useful to avoid null pointer exceptions in a chained expression. For example,
String location = map.getLocation().getXandYCoordinates();
might yield a null pointer exception if either map or the result of map.getLocation() is null. But using the safe navigation operator:
String location = map?.getLocation()?.getXandYCoordinates(); 
location will just be set to null, so no null pointer exception is thrown.


The Spread operator is useful when executing a method on elements of a collection and collecting the result. In this example, I parse a bunch of Strings into arrays and then run a closure on each of the arrays to get all the first elements.
def names = ["john magoo","peter murphy"]
def namesAsArrays = names*.split(" ")
namesAsArrays.each { print it[0] }
Now, remember the Spread operator can only be used to invoke methods on the elements of a collection. It can't execute a closure, for example. But what we can do is leverage Groovy's metaprogramming capabilities to create a method and then use the spread operator to invoke that. Suppose we had a list of numbers that we wanted to add 50 to and then multiply the answer by 6, we could do:
def numbers = [43, 4]
java.lang.Integer.metaClass.remixIt = { (delegate + 50) * 6 }
assert numbers*.remixIt() == [558, 324]

Spaceship operator 

This originated in the Perl programming language and is so called because, you guessed it, <=> looks like a spaceship!  So how does it work?  The expressions on the left and the right of the spaceship operator are both evaluated: -1 is returned if the operand on the left is smaller, 0 if the left and right are equal, and 1 if the left is larger. Its power should become more obvious with an example.

Say we have a class to represent software engineers.
class SoftwareEngineer {
    String name
    int age
    String toString() { "($name,$age)" }
}

def engineers = [
    new SoftwareEngineer(name: "Martin Fowler", age: 50),
    new SoftwareEngineer(name: "Roy Fielding", age: 48),
    new SoftwareEngineer(name: "James Gosling", age: 58)
]
Now we could easily sort by age:
engineers.sort { a, b -> a.age <=> b.age }
But what about when our list of Engineers grows and grows and we have multiple engineers of the same age? In these scenarios it would be nice to sort by age first and name second. We could achieve this by doing:
engineers.sort { a, b -> 
   a.age <=> b.age ?: a.name <=> b.name
}
This expression will try to sort engineers by age first. If their ages are equal, a.age <=> b.age will return 0. In Groovy, and hence in the Elvis construct, 0 means false, so a.name <=> b.name will then be evaluated and used to determine the sort.

Field Access 

In Groovy, when a property access is attempted, Groovy will invoke the corresponding accessor method if one exists. The Field Access operator (.@) is a mechanism to force Groovy to bypass the accessor and use the field directly.
class RugbyPlayer {
     String name
     String getName() { name ?: "No name" }
}

assert "Tony" == new RugbyPlayer(name: "Tony").name
assert "No name" == new RugbyPlayer().name
assert null == new RugbyPlayer().@name

Method Reference 

The Method Reference operator allows you to treat a reference to a method like a closure.
class MyClass {
   void myMethod() { println "1" }
   void myMethod(String s) { println "2" }
}

def methodRef = new MyClass().&myMethod
methodRef()    // outputs 1
methodRef('x') // outputs 2
The .& creates an instance of org.codehaus.groovy.runtime.MethodClosure. This can be assigned to a variable and subsequently invoked as shown.

Happy New Year to you all and hopefully some more blogging in 2014.

Saturday, October 26, 2013

Grails: Applying build information to your builds

Occasionally, when I buy some food I check the label to see how unhealthy it is in an effort to remind myself to eat better. I probably should do this more often but that's another story.

With software, I take a more strict approach. I like to know exactly what version of what I am using and if it pertains to a build coming from a project I am working on, I like to know even more:
  • the branch it came from 
  • any code or config changes 
  • the time the build was done 
  • the person who did the build
  • the commit revision it corresponds to
The advantages of this are obvious but worth re-stating.
  • While you are in development, if you have multiple deployments and see something unusual you will immediately want to compare them. 
  • For example say you have:
    • build #349 on box 1
    • build #352 on box 2 (which includes development changes from a feature branch Y)
    You notice some strange behaviour for an error message on box 2 but don't on box 1. You can immediately check which box has what version, then check the changes and then try to rationalise the difference.
  • You can be sure your latest code check-in is deployed
  • All this build information should be in the official release notes, so if you automate it as part of your development process, you save yourself having to automate it as part of your release
  • In enterprise architectures where you have multiple components developed by different teams integrating with each other, all components should make it easy to get this information. This helps to manage the stack: everyone can immediately check whether new versions of any components have been deployed, or roll back to stable versions if a component upgrade causes problems.
So away with the opinions and on with some examples. The project that I am currently working on uses a Grails-based architecture and Atlassian's Bamboo for builds and deploys. I wanted to get the information I described into every build - and yes, that includes every development build.  I will now outline the steps.

Step 1

Write an event handler to execute when the war is being created. This should read a bunch of system properties (which you'll set in Bamboo) and put them in the application.properties of your war file. Example:
eventCreateWarStart = { warName, stagingDir ->
    def buildNumber = System.getProperty("build.number", "CUSTOM")
    def buildTimeStamp = System.getProperty("build.timeStamp", "")
    def buildUserName = System.getProperty("build.userName", "")
    def repositoryUrl = System.getProperty("repository.repositoryUrl", "")
    def repositoryRevisionNumber = System.getProperty("repository.revision.number", "")
    def repositoryBranch = System.getProperty("repository.branch", "")
    ant.propertyfile(file: "${stagingDir}/WEB-INF/classes/application.properties") {
        entry(key: "build.number", value: buildNumber)
        entry(key: "build.timestamp", value: buildTimeStamp)
        entry(key: "build.userName", value: buildUserName)
        entry(key: "repository.repositoryUrl", value: repositoryUrl)
        entry(key: "repository.revision.number", value: repositoryRevisionNumber)
        entry(key: "repository.branch", value: repositoryBranch)
    }
}


Step 2

Update your Bamboo build to set these system properties when the war is being built.  To do this, update the build war task to something like:
war -Dbuild.number=${bamboo.buildNumber} 
(and pass the other properties in the same way). This will mean that when Bamboo builds your war, all the above properties will be put in your application.properties file.

Step 3

Now all you need to do is make it easy for a Grails GSP to read these properties and display them.  This means that, once deployed, all anyone has to do is hit a special URL and they'll get all the build info for the deployed WAR.

... Some CSS and JavaScript

Environment: ${grails.util.Environment.current.name}

Build Info:

User name: ${g.meta([name: 'build.userName'])}
Build number: ${g.meta([name: 'build.number'])}
Date: ${g.meta([name: 'build.timestamp'])}
Branch: ${g.meta([name: 'repository.branch'])}
Revision number: ${g.meta([name: 'repository.revision.number'])}
And in the spirit of food analogies... Here's one I made earlier and how it looks

So there you go, software taken more seriously than food. I'm off to get a Big Mac!

Saturday, August 24, 2013

SQL Server tips

Recently, I was working on a project which involved SQL Server. I made a note of some useful commands that helped me find things out. Here they are:

Check column types for a specific table:
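For example, via the INFORMATION_SCHEMA views (a table named Person is assumed here):

```sql
SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH, IS_NULLABLE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'Person';
```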


Check column constraints
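Again via INFORMATION_SCHEMA (table name assumed):

```sql
SELECT CONSTRAINT_NAME, COLUMN_NAME
FROM INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE
WHERE TABLE_NAME = 'Person';
```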


Check length of text in a varchar column
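LEN gives the character count and DATALENGTH the bytes (assuming a varchar column called name on Person):

```sql
SELECT name, LEN(name) AS num_chars, DATALENGTH(name) AS num_bytes
FROM Person;
```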


List all tables that have a column named EntityID
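A query along these lines does the job:

```sql
SELECT TABLE_SCHEMA, TABLE_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE COLUMN_NAME = 'EntityID';
```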


Check all constrains...
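The sp_helpconstraint stored procedure lists everything for a table (Person assumed):

```sql
EXEC sp_helpconstraint 'Person';
```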


Find blocked processes
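One way is via the sys.dm_exec_requests DMV:

```sql
SELECT session_id, blocking_session_id, wait_type, wait_time
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;
```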


To see long running open transactions
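DBCC OPENTRAN reports the oldest active transaction in the current database:

```sql
DBCC OPENTRAN;
```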


Check the lock timeout
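The current setting (in milliseconds, -1 meaning wait forever):

```sql
SELECT @@LOCK_TIMEOUT;
```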


Check current session
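The session id of the current connection:

```sql
SELECT @@SPID;
```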


Find creation date of databases
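The sys.databases catalog view has this:

```sql
SELECT name, create_date FROM sys.databases;
```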


Get the database id for a database.
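Using DB_ID (substitute your own database name):

```sql
SELECT DB_ID('MyDatabase');
```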


Get the median value for a column named responsetime in a table allrequests

SELECT (
    (SELECT MAX(responsetime) FROM
        (SELECT TOP 50 PERCENT responsetime FROM allrequests ORDER BY responsetime) AS BottomHalf)
  + (SELECT MIN(responsetime) FROM
        (SELECT TOP 50 PERCENT responsetime FROM allrequests ORDER BY responsetime DESC) AS TopHalf)
) / 2 AS Median

Most CPU intensive queries
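Something like this, using the query stats DMV, surfaces the worst offenders:

```sql
SELECT TOP 10 qs.total_worker_time, qs.execution_count, st.text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY qs.total_worker_time DESC;
```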


Get me all the tables that have a FK to the table named Person
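The sys.foreign_keys catalog view can answer this:

```sql
SELECT OBJECT_NAME(parent_object_id) AS referencing_table, name AS fk_name
FROM sys.foreign_keys
WHERE referenced_object_id = OBJECT_ID('Person');
```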


Check all objects that have changed in last ten days
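Using the modify_date column on sys.objects:

```sql
SELECT name, type_desc, modify_date
FROM sys.objects
WHERE modify_date > DATEADD(DAY, -10, GETDATE());
```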

Disable indexes
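For all indexes on a table (Person assumed):

```sql
ALTER INDEX ALL ON Person DISABLE;
```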


Enable indexes
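Rebuilding re-enables them:

```sql
ALTER INDEX ALL ON Person REBUILD;
```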


Check current user
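Database user and login respectively:

```sql
SELECT CURRENT_USER;  -- database user
SELECT SUSER_SNAME(); -- login
```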


Check database owner
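Mapping the owner_sid back to a login name:

```sql
SELECT name, SUSER_SNAME(owner_sid) AS owner
FROM sys.databases;
```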


Check last time Stats was run
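STATS_DATE tells you when each statistic on a table was last updated (Person assumed):

```sql
SELECT s.name, STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats s
WHERE s.object_id = OBJECT_ID('Person');
```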


Run stats for a table named Person in Sales
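Straightforward with UPDATE STATISTICS:

```sql
UPDATE STATISTICS Sales.Person;
```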


Check the size of various tables
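sp_spaceused reports rows, reserved, data and index sizes; run it per table:

```sql
EXEC sp_spaceused 'Person';
```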


Find the creation time for all tables.
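From the sys.tables catalog view:

```sql
SELECT name, create_date FROM sys.tables;
```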


Check version
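The quick way:

```sql
SELECT @@VERSION;
```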


or for more info
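SERVERPROPERTY gives the individual pieces:

```sql
SELECT SERVERPROPERTY('ProductVersion') AS version,
       SERVERPROPERTY('ProductLevel') AS patch_level,
       SERVERPROPERTY('Edition') AS edition;
```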


Monday, June 17, 2013

JavaScript tip: Log those important APIs and get some code coverage

Any web architecture using JavaScript will always have plenty of AJAX requests to the server. There are many, many, many different ways AJAX calls can be made. Suppose you're thinking good software engineering and you want to avoid coupling your client code to the AJAX mechanism you are using. You could use a wrapper / bridge / proxy type object which contains all the APIs to get information from the server and encapsulates your AJAX mechanism. Following the Crockford Module pattern this could pan out something like:
dublintech.myproject.apiBridge = (function() {
    function createEntity(data) {
        // actual AJAX call
    }
    function readEntity(id) {
        // actual AJAX call
    }
    var that = {};
    that.createEntity = createEntity;
    that.readEntity = readEntity;
    return that;
}());
This could be invoked simply as:
dublintech.myproject.apiBridge.readEntity(someId);
What are the pros of using the wrapper approach? Well, essentially the advantages all derive from the fact that there is a nice separation of concerns: the transport mechanism (it doesn't even have to be AJAX) is separated from the actual data requests coming from the client. This means:
  1. If you wish to change Ajax mechanism (i.e. move from one Ajax library to another), it's not a big deal. The impact is limited.
  2. It's easier to stub out and test.
  3. It's easier to achieve code reuse. Suppose you want to have consistent error handling for exceptions from Ajax requests, it's much easier to do this if all Ajax request are following the same pattern.

Tell me something more interesting.

Well we all know that you can see the AJAX requests in Firebug. But say you wanted another way to log what requests had been invoked on your wrapper. In Java, you could write an Aspect and set it on every method that made an AJAX request. There is no direct equivalent to aspects in JavaScript. But we can do something similar. Firstly, the function to do the pre and post logging:
function wrapPrePostLogic(fn) {
    return function withLogic() {
        console.log("before " + fn.name);
        var res = fn.apply(this, arguments);
        console.log("after " + fn.name);
        return res;
    };
}
Ok, so wrapPrePostLogic takes a function and returns a new function which wraps around the existing function and puts logging before and after the invocation of that function. Hold on a sec - IE is awkward. It doesn't support the fn.name property. So let's write a helper method to get the function name in any browser and use that instead.
function wrapPrePostLogic(fn) {
    return function withLogic() {
        console.log("before " + getFnName(fn));
        var res = fn.apply(this, arguments);
        console.log("after " + getFnName(fn));
        return res;
    };
}

function getFnName(fn) {
    var toReturn = (fn.name ? fn.name : (fn.toString().match(/function (.+?)\(/) || [, ''])[1]);
    return toReturn;
}
So now we need to ensure wrapPrePostLogic() is called for every method. We could do:
    var that = {};
    that.createEntity = wrapPrePostLogic(createEntity);
    that.readEntity = wrapPrePostLogic(readEntity);
    return that;
But I am smelling code bloat and human error. What would be nicer is to iterate over all the functions in that and wrap them appropriately. To do that we could do...
    var that = {};
    that.createEntity = createEntity;
    that.readEntity = readEntity;
    for (var prop in that) {
        if (typeof that[prop] === "function") {
            that[prop] = wrapPrePostLogic(that[prop]);
        }
    }
    return that;

Not bad. Anything else?

Ok, so say you are using a UI test framework and want to test code coverage of all methods that send / receive data to / from the server. Instead of getting your wrapper function to log to the console, you could update a hidden div and then make your test framework check this hidden div.
function updateHiddenDiv(methodName) {
    if (jQuery("#js_methods_invoked").length === 0) {
        var hiddenDiv = jQuery('<div id="js_methods_invoked" style="display:none"></div>');
        jQuery("body").append(hiddenDiv);
    }
    jQuery("#js_methods_invoked").append(methodName + " ");
}
Now we can just change our wrapper function to:
function wrapPrePostLogic(fn) {
    return function withLogic() {
        updateHiddenDiv(getFnName(fn));
        var res = fn.apply(this, arguments);
        return res;
    };
}
In addition, you can find out what JavaScript methods have been invoked by opening a JavaScript console and whacking in:
jQuery("#js_methods_invoked").text()
This will return the methods that have been invoked. Until the next time, take care of yourselves!

Saturday, May 18, 2013

How could Scala do a merge sort?

Merge sort is a classical "divide and conquer" sorting algorithm. You should never have to write one, because a standard library class will already do it for you. But it is useful for demonstrating a few characteristics of programming techniques in Scala. Firstly, a quick recap on the merge sort. It is a divide and conquer algorithm. A list of elements is split up into smaller and smaller lists. When a list has one element it is considered sorted. It is then merged with the list beside it. When there are no more lists to merge, the original data set is considered sorted. Now let's take a look at how to do that using an imperative approach in Java.
public class MergeSort {
    public void sort(int[] values) {
        int[] numbers = values;
        int[] auxillaryNumbers = new int[values.length];
        mergesort(numbers, auxillaryNumbers, 0, values.length - 1);
    }

    private void mergesort(int[] numbers, int[] auxillaryNumbers, int low, int high) {
        // Check if low is smaller than high, if not then the array is sorted
        if (low < high) {
            // Get the index of the element which is in the middle
            int middle = low + (high - low) / 2;
            // Sort the left side of the array
            mergesort(numbers, auxillaryNumbers, low, middle);
            // Sort the right side of the array
            mergesort(numbers, auxillaryNumbers, middle + 1, high);
            // Combine them both.
            // Note: the first time we hit this is when the difference between high and low is smallest.
            merge(numbers, auxillaryNumbers, low, middle, high);
        }
    }

    /**
     * Merges a[low .. middle] with a[middle + 1 .. high].
     * This method assumes a[low .. middle] and a[middle + 1 .. high] are sorted. It leaves
     * a[low .. high] sorted.
     */
    private void merge(int[] a, int[] aux, int low, int middle, int high) {
        // Copy both parts into the aux array
        for (int k = low; k <= high; k++) {
            aux[k] = a[k];
        }

        int i = low, j = middle + 1;
        for (int k = low; k <= high; k++) {
            if (i > middle)                      a[k] = aux[j++];
            else if (j > high)                   a[k] = aux[i++];
            else if (aux[j] < aux[i])            a[k] = aux[j++];
            else                                 a[k] = aux[i++];
        }
    }

    public static void main(String args[]) {
        MergeSort ms = new MergeSort();
        ms.sort(new int[] {5, 3, 1, 17, 2, 8, 19, 11});
    }
}

  1. An auxiliary array is used to achieve the sort. Elements to be sorted are copied into it and then, once sorted, copied back. It is important that this array is only created once, otherwise there can be a performance hit from excessive array creation. The merge method does not have to create an auxiliary array; however, since it changes an object, the merge method has side effects.
  2. Merge sort's big O performance is N log N.
Now let's have a go at a Scala solution.
  def mergeSort(xs: List[Int]): List[Int] = {
    val n = xs.length / 2
    if (n == 0) xs
    else {
      def merge(xs: List[Int], ys: List[Int]): List[Int] =
        (xs, ys) match {
          case (Nil, ys) => ys
          case (xs, Nil) => xs
          case (x :: xs1, y :: ys1) =>
            if (x < y) x :: merge(xs1, ys)
            else y :: merge(xs, ys1)
        }
      val (left, right) = xs splitAt n
      merge(mergeSort(left), mergeSort(right))
    }
  }
Key discussion points:
  1. It is the same divide and conquer idea.
  2. The splitAt function is used to divide the data up each time into a tuple. Every recursion will create a new tuple.
  3. The local function merge is then used to perform the merging. Local functions are a useful feature as they help promote encapsulation and prevent code bloat.
  4. Neither the mergeSort() nor merge() functions have any side effects. They don't change any object; they create (and throw away) objects.
  5. Because the data itself is passed between iterations of the merging (rather than mutated in place), there is no need to pass beginning and ending pointers, which can get very buggy.
  6. This merge recursion uses pattern matching to great effect here. Not only is there matching for data lists but when a match happens the data lists are assigned to variables:
    • x meaning the top element in the left list
    • xs1 the rest of the left list
    • y meaning the top element in the right list
    • ys1 meaning the rest of the data in the right list
This makes it very easy to compare the top elements and to pass around the rest of the data to compare. Would the recursive approach be possible in Java? Of course. But it would be much more complex: you don't have any pattern matching, and you don't get a nudge to declare objects as immutable the way Scala does by making you choose between val and var. In Java, it would always be easier to read the code for this problem if it was done in an imperative style where objects are changed across iterations of a loop. But in Scala, a functional recursive approach can be quite neat. So here we see an example of how Scala makes it easier to achieve good, clean, concise recursion and makes a functional approach much more possible.

Friday, May 17, 2013

Coursera's Scala course

Coursera run an excellent Scala course which I just had the opportunity of participating in. The course duration is seven weeks. Each week consists of about 1.5 hours of lectures and then an assignment which can take anything between an hour and about 5 hours.  The course syllabus  is outlined here.  So, personal opinion time...

Was it worth it?  Absolutely.  Unless you are a complete pro in Scala and Functional Programming you will learn something from this course - most importantly a deeper understanding of the FP paradigm.  

I remember many eons ago when I first started learning OO, like many noobs I thought I understood OO when I understood polymorphism, inheritance and encapsulation and could name-check a few design patterns.  It took me a while to really realise the difference between good and bad abstractions and how dependencies that look benign can drastically increase software entropy in a project.  Similarly, many people might approach FP thinking it is just all about function literals, higher-order functions, inner functions and closures.   Well, the first important point to make about this course is that it does a super job of emphasising the importance of smart and clever usage of recursion in FP.  This was not apparent to me before the course.  The reason why recursion is a big deal in FP is of course because immutable state is a big deal in FP. Immutability is easier to achieve when you pass data between iterations, as in recursion, than with an imperative-style loop, which usually means some object(s) being changed across iterations. 

Now, I hope that made some sense. Because the real brain tease is when you are given a problem that you could do with one arm tied behind your back using a for loop and are told to do it with recursion.  It takes a lot of practice to get really good at recursion and it is something I still have to practise more myself, but the course really made me think about it much much much more than I ever did previously.

So what else did I learn?
  1. Buzz words - the exact difference between function application and function type
  2. Left association for function application and right association for function type.
  3. Passing functions around anonymously - you should only rarely be using def for functions that are being passed around
  4. The Art of DRY (Don’t repeat yourself) in FP.  Functions should always be short, and if it makes sense to abstract out common parts, do so
  5. Difference between val, lazy val and def evaluation times (evaluated once, evaluated lazily and evaluated every time, respectively)
  6. The power of pattern matching, especially when using it with recursion
  7. Streams – lazy lists and their memoization potential
  8. It is extremely difficult doing a Scala course when you have two very young children.
Overall a great course. I hope to elaborate on some of  ideas and topics in future posts.

Sunday, April 21, 2013

Book Review: Beginning Scala (David Pollak)

Firstly, sorry it has been so long since the last blog post.  Life is busy when you have two children.  I decided to try and learn Scala in 2013 and I am currently still pluggin' away.  This blog post is a review of David Pollak's Beginning Scala.

David Pollak has been writing software since 1977.  He wrote diagnostic software for the Commodore 64, the first real-time spreadsheet, and founded the Lift Web Framework in 2007.   He describes his experience with Scala as an epiphany that changed the way he approaches software; he certainly writes with enthusiasm and passion, and in this book we even get a foreword from the Godfather himself, Martin Odersky.

'Beginning Scala' focusses very much on the core fundamentals of Scala.  The book is just under 300 pages in length. This is quite short in comparison to, say, Martin Odersky's excellent 'Programming in Scala', which is well over twice the length.  Where I see the sweet spot for Beginning Scala is for someone who wants to dip their toe in the water, is curious about how the language feels and wants to get a good overview of the language quickly.  For a more detailed and substantial take on the language, something like Odersky's book is necessary.

Areas covered include:
  • Scala traits and Scala's type system
  • Call-by-name
  • Scala collections and their immutable nature
  • Functional characteristics (passing functions, returning functions)
  • Pattern matching
  • Actors and concurrency
There are also some tips on how to introduce Scala to your team and some best practice advice.  My favourite parts were the explanation of Scala traits and the demonstration of the classic GoF Visitor Pattern made so simple using pattern matching (I'll cover this in a separate post). The one criticism I'd have is that I think what really sets Scala apart from Java is not that it has lots of syntactic sugar but that it offers a functional programming approach.  This doesn't just mean you can pass functions to functions, return functions and so on, but that you have to approach problems in a very different way.  In functional programming recursion is favoured over iteration.   There are many problems that developers could solve using iteration with one arm tied behind their back, but to solve them using recursion is trickier.  One of Scala's features is that it facilitates both approaches, but I think if you are going to embrace the functional paradigm properly you need to ditch iteration and embrace recursion.  This isn't really covered in the book - in fairness it is not really covered in detail in Odersky's book either. However, if you look at the Scala course on Coursera, it is a massive massive massive part of it.

So overall, a very good book and well worth it for someone who wants a dabble in Scala; but if you want more, you'll need more.

Sunday, January 27, 2013

Scala pattern matching: A Case for new thinking?

A new thinking?
The 16th President of the United States, Abraham Lincoln, once said: "As our case is new, so we must think anew, and act anew."   In software engineering things probably aren't as dramatic as civil wars and abolishing slavery, but we have interesting logical concepts concerning "case". In Java the case statement provides for some limited conditional branching.  In Scala, it is possible to construct some very sophisticated pattern matching logic using the case / match construct, which doesn't just bring new possibilities but a new type of thinking to realise new possibilities.

Let's start with a classical 1st year Computer Science homework assignment: a fibonacci series that doesn't start with 0, 1 but that starts with 1, 1.   So the series will look like: 1, 1, 2, 3, 5, 8, 13, ... every number is the sum of the previous two.

In Java, we could do:
public int fibonacci(int i) {
    if (i < 0) 
        return 0;
    switch (i) {
        case 0:
            return 1;
        case 1:
            return 1;
        default:
            return fibonacci(i - 1) + fibonacci(i - 2);
    }
}
All straightforward. If 0 is passed in, it counts as the first element in the series, so 1 should be returned. Note: to add some more spice to the party and make things a little bit more interesting, I added a little bit of logic to return 0 if a negative number is passed to our fibonacci method.

In Scala to achieve the same behaviour we would do:
def fibonacci(in: Int): Int = {
  in match {
    case n if n < 0 => 0
    case 0 | 1 => 1
    case n => fibonacci(n - 1) + fibonacci(n - 2)
  }
}
Key points:
  • The return type of the recursive method fibonacci is Int. Recursive methods must explicitly specify the return type (see: Odersky - Programming in Scala - Chapter 2).
  • It is possible to test for multiple values on the one line using the | notation. I do this to return 1 for both 0 and 1 in the case 0 | 1 line of the example.
  • There is no need for multiple return statements. In Java you must use multiple return statements or multiple break statements.
  • Pattern matching is an expression which always returns something.
  • In this example, I employ a guard to check for a negative number; if the number is negative, zero is returned.
  • In Scala it is also possible to match across different types, and to use the wildcard _ notation. We didn't use either in the fibonacci example, but just to illustrate these features...
    def multitypes(in: Any): String = in match {
      case i: Int => "You are an int!"
      case "Alex" => "You must be Alex"
      case s: String => "I don't know who you are but I know you are a String"
      case _ => "I haven't a clue who you are"
    }
Pattern matching can be used with Scala Maps to useful effect.  Suppose we have a Map to capture who we think should be playing in each position of the Lions backline for the Lions series in Australia.  The keys of the map will be the position in the back line and the corresponding value will be the player who we think should be playing there.  To represent a Rugby player we use a case class. Now now you Java Heads, think of the case class as an immutable POJO written in an extremely concise way - they can be mutable too, but for now think immutable.
case class RugbyPlayer(name: String, country: String);
val robKearney = RugbyPlayer("Rob Kearney", "Ireland");
val georgeNorth = RugbyPlayer("George North", "Wales");
val brianODriscol = RugbyPlayer("Brian O'Driscol", "Ireland");
val jonnySexton = RugbyPlayer("Jonny Sexton", "Ireland");  
val benYoungs = RugbyPlayer("Ben Youngs", "England");
// build a map
val lionsPlayers = Map("FullBack" -> robKearney, "RightWing" -> georgeNorth, 
      "OutsideCentre" -> brianODriscol, "Outhalf" -> jonnySexton, "Scrumhalf" -> benYoungs);
// Note: Unlike Java HashMaps, Scala Maps do not return null for a missing key.
// This is achieved by returning an Option, which can either be Some or None.
// So, if we ask for something that exists in the Map like below
println(lionsPlayers.get("Outhalf"));
// Outputs: Some(RugbyPlayer(Jonny Sexton,Ireland))
// If we ask for something that is not in the Map yet like below
println(lionsPlayers.get("InsideCentre"));
// Outputs: None
In this example we have players for every position except inside centre - which we can't make up our minds about.  Scala Maps are allowed to store nulls as values, but in our case we don't actually store a null for inside centre. So, instead of null being returned for inside centre (as would happen if we were using a Java HashMap), None is returned.

For the other positions in the back line, we have matching values and the type Some is returned which wraps around the corresponding RugbyPlayer. (Note: both Some and None extend from Option).

We can write a function which pattern matches on the returned value from the HashMap and returns us something a little more user friendly.
def show(x: Option[RugbyPlayer]) = x match {
  case Some(rugbyPlayerExt) => rugbyPlayerExt.name // If a rugby player is matched return its name
  case None => "Not decided yet ?" // Otherwise indicate the position is undecided
}
println(show(lionsPlayers.get("Outhalf")))  // outputs: Jonny Sexton
println(show(lionsPlayers.get("InsideCentre"))) // Outputs: Not decided yet
This example doesn't just illustrate pattern matching but another concept known as extraction. The rugby player, when matched, is extracted and assigned to rugbyPlayerExt.  We can then return the rugby player's name by getting it from rugbyPlayerExt.  In fact, we can also add a guard and change around some logic. Suppose we had a biased journalist (Stephen Jones) who didn't want any Irish players in the team. He could implement his own biased function to check for Irish players:
def biasedShow(x: Option[RugbyPlayer]) = x match {
  case Some(rugbyPlayerExt) if rugbyPlayerExt.country == "Ireland" =>
    rugbyPlayerExt.name + ", don't pick him."
  case Some(rugbyPlayerExt) => rugbyPlayerExt.name
  case None => "Not decided yet ?"
}
println(biasedShow(lionsPlayers.get("Outhalf"))) // Outputs Jonny... don't pick him
println(biasedShow(lionsPlayers.get("Scrumhalf"))) // Outputs Ben Youngs

Pattern matching Collections

Scala also provides some powerful pattern matching features for Collections. Here's a trivial example of getting the length of a list.
def length[A](list : List[A]) : Int = list match {
  case _ :: tail => 1 + length(tail)
  case Nil => 0
}
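Restating the function so the sketch is self-contained, a couple of calls show the recursion at work: each `::` match peels one element off the front until Nil is reached.

```scala
def length[A](list: List[A]): Int = list match {
  case _ :: tail => 1 + length(tail) // count the head, recurse on the tail
  case Nil => 0                      // the empty list ends the recursion
}

println(length(List("a", "b", "c"))) // 3
println(length(Nil))                 // 0
```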
And suppose we want to parse arguments from a tuple...
  def parseArgument(arg : String, value: Any) = (arg, value) match {
    case ("-l", lang) => setLanguage(lang)  
    case ("-o" | "--optim", n : Int) if ((0 < n) && (n <= 3)) => setOptimizationLevel(n)
    case ("-h" | "--help", null) => displayHelp()
    case bad => badArgument(bad)
  }
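To make the argument-parsing pattern runnable as a sketch, here it is with stubbed-out handlers. setLanguage, setOptimizationLevel, displayHelp and badArgument are assumptions - the original post never shows them - so here they just return descriptive Strings:

```scala
// Stubbed handlers - these bodies are assumed, not from the original post.
def setLanguage(lang: Any) = "language set to " + lang
def setOptimizationLevel(n: Int) = "optimisation level set to " + n
def displayHelp() = "displaying help"
def badArgument(bad: (String, Any)) = "bad argument: " + bad

def parseArgument(arg: String, value: Any) = (arg, value) match {
  case ("-l", lang) => setLanguage(lang)
  case ("-o" | "--optim", n: Int) if (0 < n) && (n <= 3) => setOptimizationLevel(n)
  case ("-h" | "--help", null) => displayHelp()
  case bad => badArgument(bad)
}

println(parseArgument("-l", "Scala")) // language set to Scala
println(parseArgument("--optim", 2))  // optimisation level set to 2
println(parseArgument("--optim", 9))  // bad argument: (--optim,9) - the guard fails
```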

Single Parameter functions

Consider a list of numbers from 1 to 10. The filter method takes a single parameter function that returns true or false. The single parameter function is applied to every element in the list and returns true or false for each one. The elements that return true are filtered in; the elements that return false are filtered out of the resultant list.
scala> val myList = List(1,2,3,4,5,6,7,8,9,10)
myList: List[Int] = List(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)

scala> myList.filter(x => x % 2 == 1)
res13: List[Int] = List(1, 3, 5, 7, 9)
Now now now, listen up and remember this: a pattern can be passed to any method that takes a single parameter function. Instead of passing a single parameter function which returns true or false, we could have used a pattern which returns true or false.
scala> myList.filter {
     |     case i: Int => i % 2 == 1   // odd numbers will return true
     |     case _ => false             // anything else will return false
     | }
res14: List[Int] = List(1, 3, 5, 7, 9)

Use it later?

Scala compiles patterns to a PartialFunction.  This means that not only can Scala pattern expressions be passed to other functions but they can also be stored for later use.
scala> val patternToUseLater: PartialFunction[String, String] = {
     |   case "Dublin" => "Ireland"
     |   case _ => "Unknown"
     | }
What this example is saying is patternToUseLater is a partial function that takes a String and returns a String.  The last statement in a function is returned by default and because the case expression is a partial function it will be returned as a partial function and assigned to patternToUseLater, which of course can use it later.
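Putting the REPL snippet together as a runnable sketch, the stored pattern can indeed be applied later like any other function:

```scala
val patternToUseLater: PartialFunction[String, String] = {
  case "Dublin" => "Ireland"
  case _        => "Unknown"
}

// ... and use it later:
println(patternToUseLater("Dublin")) // Ireland
println(patternToUseLater("Paris"))  // Unknown
```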

Finally, Jonny Sexton is a phenomenal rugby player and it is a shame to hear he is leaving Leinster. Obviously, with Sexton's busy schedule we can't be sure if Jonny is reading this blog, but if he is: Jonny, sorry to see you go, we wish you all the best and hopefully will see you back one day in the blue jersey.

Sunday, January 20, 2013

Scala: call me by my name please?

In Java, when frameworks such as log4j became popular in Java architectures it was a common occurrence to see code such as:
if (logger.isEnabledFor(Logger.INFO)) {
   // Ok to log now.
   logger.info("ok" + "to" + "concatenate" + "string" + "to" + "log" + "message");
}
It was considered best practice to always check if your logging was enabled for the appropriate level before performing any String concatenation. I even remember working on a project ten years ago (a 3G radio network configuration tool for Ericsson) where String concatenation for logging actually resulted in noticeable performance degradation.

Since then, JVMs have been optimised and Moore's Law has continued, so String concatenation isn't as much of a worry as it used to be.  In many frameworks (for example Hibernate), if you check the source code you'll see logging code where there is no check to see if logging is enabled and the string concatenation happens regardless.  However, let's pretend concatenation is a performance issue.  What we'd really like to do is remove the need for the if statements in order to stop code bloat.

The nub of the issue here is that in Java, when you call a method with parameters, the values of the parameters are all calculated before the method is called. This is why the if statement is needed.
simpleComputation(expensiveComputation());// In Java, the expensive computation is called first.
logger.log(Level.INFO, "Log this " + message);// In Java, the String concatenation happens first
Scala provides a mechanism where you can defer parameter evaluation.  This is called call-by-name.
def log(level: Level, message: => String) = if (logger.level.intValue >= level.intValue) logger.log(level, message)
The => before the String type means that the String parameter is not evaluated before invocation of the log function.  Instead, there is a check to confirm the logger level is at the appropriate value and, if so, the String will then be evaluated. This check happens within the log function so there is no need to put the check before every invocation of it. What about that for code re-use?
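The saving is easy to demonstrate with a counter instead of a real logger. This is a toy sketch, not the java.util.logging API: the by-name message is simply never built when logging is off.

```scala
var concatenations = 0
def expensiveMessage(): String = {
  concatenations += 1 // count how many times the message is actually built
  "ok" + "to" + "concatenate" + "string"
}

var logEnabled = false
def log(message: => String): Unit = if (logEnabled) println(message)

log(expensiveMessage())
// concatenations is still 0 - the by-name parameter was never evaluated

logEnabled = true
log(expensiveMessage())
// now concatenations is 1 - the message was evaluated once, inside log
```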

Anything else?

Yes. When pass-by-name is used, the parameter that is passed by name isn't just evaluated once but every time it is referenced in the function it is passed to. Let's look at another example.

scala> def nanoTime() = {
     |   println(">>nanoTime()")
     |   System.nanoTime // returns nanoTime
     | }
nanoTime: ()Long

scala> def printTime(time: => Long) = {    // => indicates a by name parameter
     |   println(">> printTime()")
     |   println("time= " + time)
     |   println("second time=" + time)
     |   println("third time=" + time)
     | }
printTime: (time: => Long)Unit

scala> printTime(nanoTime())
>> printTime()
>>nanoTime()
time= 518263321668117
>>nanoTime()
second time=518263324003767
>>nanoTime()
third time=518263324624587
In this example, we can see that nanoTime() isn't just executed once but every time it is referenced in the function it is passed to, printTime.  This means it is executed three times in this function and hence we get three different times. 'Til the next time, take care of yourselves.

Wednesday, January 16, 2013

Scala: Collections 1

This post contains some info on Scala's collections.


We want a function that will take a List of rugby players as input and return the names of those players that play for Leinster, ordered by the time they can run the 100 meters in, from fastest to slowest.


Step 1: Have a representation for a Rugby player.

Ok, so it's obvious we want something like a POJO to represent a rugby player.  This representation should have a player's name, their team and the time they can run the 100 meters in.  Let's use Scala's case class construct, which removes the need for boilerplate code.
case class RugbyPlayerCaseClass(team: String, sprintTime100M: BigDecimal, name: String)


Step 2: Create some rugby players

val lukeFitzGerald = RugbyPlayerCaseClass("Leinster", 10.2, "Luke Fitzgerald");
val fergusMcFadden = RugbyPlayerCaseClass("Leinster", 10.1, "Fergus McFadden");
val rog = RugbyPlayerCaseClass("Munster", 12, "Ronan O'Gara");
val tommyBowe = RugbyPlayerCaseClass("Ulster", 10.3, "Tommy Bowe");
val leoCullen = RugbyPlayerCaseClass("Leinster", 15, "Leo Cullen");

The code above should be self explanatory. The various rugby players are instantiated.  Note the inferred typing: there is no need to declare any of the rugby players as RugbyPlayerCaseClass types; instead, the type is inferred.  Another interesting thing is that the keyword val is used.  This means the reference is immutable.  It is the equivalent of final in Java.


Step 3: Write the function

def filterValidPlayers(in: List[RugbyPlayerCaseClass]) = 
     in.filter(_.team == "Leinster").sortWith(_.sprintTime100M < _.sprintTime100M).map(_.name);

Key points regarding this function:
  • The function begins with the def keyword signifying a function declaration.
  • A List of RugbyPlayerCaseClass instances is taken in as input. The List type is a Scala type.  
  • The return type is optional. In this case it is not explicitly specified as it is inferred.
  • The part to the right of the = is what the function does. In this case the function invokes three different collection operators.
    • .filter(_.team == "Leinster")  - this iterates over every element in the List. In each iteration the _ is filled in with the current value in the List. If the team property of the current rugby player is Leinster the element is included in the resulting collection.
    • .sortWith(_.sprintTime100M < _.sprintTime100M) - sortWith is a special method which we can use to sort collections. In this case, we are sorting the output from the previous collection operator and we are sorting based on the sprint time for 100M.
    • .map(_.name) - this maps every element from the output of the sort operator to just their name property.
  • The function body does not need to be surrounded by {} because it is only one line of code.
  • There is no return statement needed. In Scala, whatever the last line evaluates to will be returned. In this example, since there is only one line, the last line is the first line.

Finally - put it all together.

object RugbyPlayerCollectionDemos {
  def main(args: Array[String]) {
    println("Scala collections stuff!");
    showSomeFilterTricks();
  }

  // Case classes remove the need for boilerplate code.
  case class RugbyPlayerCaseClass(team: String, sprintTime100M: BigDecimal, name: String)

  def showSomeFilterTricks() {
    // team: String, sprintTime100M: BigDecimal, name: String
    val lukeFitzGerald = RugbyPlayerCaseClass("Leinster", 10.2, "Luke Fitzgerald");
    val fergusMcFadden = RugbyPlayerCaseClass("Leinster", 10.1, "Fergus McFadden");
    val rog = RugbyPlayerCaseClass("Munster", 12, "Ronan O'Gara");
    val tommyBowe = RugbyPlayerCaseClass("Ulster", 10.3, "Tommy Bowe");
    val leoCullen = RugbyPlayerCaseClass("Leinster", 15, "Leo Cullen");
    println(filterValidPlayers(List(lukeFitzGerald, fergusMcFadden, rog, tommyBowe, leoCullen)));
  }

  def filterValidPlayers(in: List[RugbyPlayerCaseClass]) = 
    in.filter(_.team == "Leinster").sortWith(_.sprintTime100M < _.sprintTime100M).map(_.name);
}

The above program will output:
Scala collections stuff!
List(Fergus McFadden, Luke Fitzgerald, Leo Cullen) 

Something similar in Java

Pre Java 8, to implement the same functionality in Java would be a lot more code.
public class RugbyPLayerCollectionDemos { 
    public static void main(String args[]) {
        RugbyPLayerCollectionDemos collectionDemos = new RugbyPLayerCollectionDemos();
        collectionDemos.showSomeFilterTricks();
    }

    public void showSomeFilterTricks() {
        // team: String, sprintTime100M: BigDecimal, name: String
        final RugbyPlayerPOJO lukeFitzGerald = new RugbyPlayerPOJO("Leinster", new BigDecimal("10.2"), "Luke Fitzgerald");
        final RugbyPlayerPOJO fergusMcFadden = new RugbyPlayerPOJO("Leinster", new BigDecimal("10.1"), "Fergus McFadden");
        final RugbyPlayerPOJO rog = new RugbyPlayerPOJO("Munster", new BigDecimal("12"), "Ronan O'Gara");
        final RugbyPlayerPOJO tommyBowe = new RugbyPlayerPOJO("Ulster", new BigDecimal("10.3"), "Tommy Bowe");
        final RugbyPlayerPOJO leoCullen = new RugbyPlayerPOJO("Leinster", new BigDecimal("15"), "Leo Cullen");
        List<RugbyPlayerPOJO> rugbyPlayers = Arrays.asList(lukeFitzGerald, 
          fergusMcFadden, rog, tommyBowe, leoCullen);
        System.out.println(filterRugbyPlayers(rugbyPlayers));
    }

    /**
     * Return the names of Leinster Rugby players in the order of their sprint times.
     */
    public List<String> filterRugbyPlayers(List<RugbyPlayerPOJO> pojos) {
        ArrayList<RugbyPlayerPOJO> leinsterRugbyPlayers = new ArrayList<RugbyPlayerPOJO>();
        for (RugbyPlayerPOJO pojo: pojos) {
            if (pojo.getTeam().equals("Leinster")) {
                leinsterRugbyPlayers.add(pojo);
            }
        }
        RugbyPlayerPOJO[] rugbyPlayersAsArray = leinsterRugbyPlayers.toArray(new RugbyPlayerPOJO[0]);
        Arrays.sort(rugbyPlayersAsArray, new Comparator<RugbyPlayerPOJO>() {
            public int compare(RugbyPlayerPOJO rugbyPlayer1, RugbyPlayerPOJO rugbyPlayer2) {
                return rugbyPlayer1.getSprintTime100M().compareTo(rugbyPlayer2.getSprintTime100M());
            }
        });
        List<String> rugbyPlayersNamesToReturn = new ArrayList<String>();
        for (RugbyPlayerPOJO rugbyPlayerPOJO: rugbyPlayersAsArray) {
            rugbyPlayersNamesToReturn.add(rugbyPlayerPOJO.getName());
        }
        return rugbyPlayersNamesToReturn;
    }

    class RugbyPlayerPOJO {
        private BigDecimal sprintTime100M;
        private String team;
        private String name;

        public RugbyPlayerPOJO(String team, BigDecimal sprintTime100M, String name) {
            this.name = name;
            this.sprintTime100M = sprintTime100M;
            this.team = team;
        }

        public BigDecimal getSprintTime100M() {
            return sprintTime100M;
        }

        public String getTeam() {
            return team;
        }

        public String getName() {
            return name;
        }
    }
}

Does Java 8 help out?

Yes. According to the Project Lambda specs, Java 8 will have similar looking filter, map and sort functions. The functionality in this post in Java 8 would look something like:
List rugbyPlayers = Arrays.asList(lukeFitzGerald, 
  fergusMcFadden, rog, tommyBowe, leoCullen);
List filteredPlayersNames = rugbyPlayers.filter(e -> e.getTeam().equals("Leinster")).
 sorted((a, b) -> a.getSprintTime100M().compareTo(b.getSprintTime100M())).mapped(e -> {return e.getName();}).into(new ArrayList<>());
So Java 8 is definitely catching up a great deal in this regard. But will it be enough?

Sunday, January 13, 2013

Scala: Do you partially understand this?

Nearly everyone who learns Scala can get confused over the word partial used in the contexts:
  • Partial functions
  • Partially applied functions 
Let's look at both.

Partially applied functions

Scala gets its functional ideas from classical languages such as Haskell (Haskell 1.0 appeared in the same year as Depeche Mode's Enjoy the Silence and Dee Lite's Groove is in the Heart, 1990).  In functional languages, a function that takes two parameters and returns a result can be expressed as a function which takes one of the input parameters and returns a function that takes the other input parameter and returns the same result.

f(x1, x2) = y
f(x1)(x2) = y

A cheesy analogy would be to time travel back to 1990 and find yourself a jukebox. Put money in for two selections, select Depeche Mode first and Dee Lite second, walk away and throw a few shapes as they are played one after the other.  Or, put in your money for two selections, select Depeche Mode and then don't make another selection.  Don't walk away just yet.  The well engineered jukebox should prompt you for another selection (give you another function) and then you can select Dee Lite (pass in the second parameter). The end output in both cases is the same music in the same order.

In Scala, when only some parameters are passed to a function to make another function it is said to be a partial application of that function.

So consider the function:
def minus(a: Int, b: Int) = "answer=" + (a-b)
Now, let's partially apply this function by passing in some parameters and making another function.
val minus50 = (a: Int) => minus(a, 50);
In this case minus50 is a partial application of minus.
We can do:
minus50(57); // outputs "answer=7".
Note: we can also partially apply using the _ notation and save ourselves a little bit of finger typing.
val minus50 = minus(_:Int, 50);
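Both forms produce the same function; a quick self-contained sketch comparing the lambda and underscore styles:

```scala
def minus(a: Int, b: Int) = "answer=" + (a - b)

val minus50Lambda = (a: Int) => minus(a, 50) // explicit partial application
val minus50Underscore = minus(_: Int, 50)    // underscore shorthand

println(minus50Lambda(57))     // answer=7
println(minus50Underscore(57)) // answer=7
```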

Partial functions

A partial function is a function that is only valid for a subset of the values of the types you might pass into it. For example, consider the mathematical function

f(x) = x + 5

where x is the set of all numbers from 1 to 100. A function is said to be partial if it is only defined for a subset of the elements of x.
So if we only want to define

f(x') = x' + 5

for the numbers 1,2,3,4,5,6,7 but not 8,9,10, ... - we define a partial function,
where x' = {1,2,3,4,5,6,7}

In Scala, a PartialFunction inherits from Function and adds two interesting methods:
  • isDefinedAt - this allows us to check if a value is defined for the partial function.
  • orElse - this allows partial functions to be chained. So if a value is not defined for a function it can be passed to another function. This is similar to the GoF Chain of Responsibility pattern.
Ok, so open up a Scala REPL and create the following partial function which will add 5 to an integer as long as the integer is greater than 0 and less than or equal to 7.
val add5Partial : PartialFunction[Int, Int] = {
  case d if (d > 0) && (d <= 7) => d + 5;
}
When you try this for a value less than or equal to 7, you will see the result no problem
scala> add5Partial(6)
res1: Int = 11
When you try it for a value greater than 7 you don't get a nice clean answer.
scala> add5Partial(42)
scala.MatchError: 42 (of class java.lang.Integer)
        at $anonfun$1.apply$mcII$sp(<console>:7)
        ... (remainder of the stack trace omitted)

The use of isDefinedAt() should now be becoming apparent. In this case, we could do:
scala> add5Partial.isDefinedAt(6)
res3: Boolean = true

scala> add5Partial.isDefinedAt(42)
res4: Boolean = false
Ok, so what about orElse? Well, let's define another partial function which deals with numbers greater than 7 and less than or equal to 100. In such cases, let's just add 4.
val add4Partial : PartialFunction[Int, Int] = {
  case d if (d > 7) && (d <= 100) => d + 4;
}
Now we can just do:
scala> val addPartial = add5Partial orElse add4Partial;
addPartial : PartialFunction[Int,Int] = <function1>
scala> addPartial(42);
res6: Int = 46
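The chained function is still partial: nothing covers values over 100. isDefinedAt works across the chain, and lift (from the standard PartialFunction API) converts it into a total function returning an Option. Restating both partial functions so the sketch stands alone:

```scala
val add5Partial: PartialFunction[Int, Int] = { case d if (d > 0) && (d <= 7) => d + 5 }
val add4Partial: PartialFunction[Int, Int] = { case d if (d > 7) && (d <= 100) => d + 4 }
val addPartial = add5Partial orElse add4Partial

println(addPartial.isDefinedAt(200)) // false - neither function covers 200
println(addPartial.lift(200))        // None - lift gives an Option instead of a MatchError
println(addPartial.lift(3))          // Some(8)
```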
Ok, let's see how all this could be implemented in Java using the Chain of Responsibility pattern.  Firstly, let's define a handler interface and an Add5Handler and an Add4Handler which will implement it.

public interface AdditionHandler {
    // reference to the next handler in the chain
    public void setNext(AdditionHandler handler);
    // handle request
    public int handleRequest(int number);
}

public class Add5Handler implements AdditionHandler {
    private AdditionHandler nextAdditionHandler = null;

    public void setNext(AdditionHandler handler) {
        this.nextAdditionHandler = handler;
    }

    public int handleRequest(int number) {
        if ((number > 0) && (number <= 7)) {
            return number + 5;
        } else {
            return nextAdditionHandler.handleRequest(number);
        }
    }
}

public class Add4Handler implements AdditionHandler {
    private AdditionHandler nextAdditionHandler = null;

    public void setNext(AdditionHandler handler) {
        this.nextAdditionHandler = handler;
    }

    public int handleRequest(int number) {
        if ((number > 7) && (number <= 100)) {
            return number + 4;
        } else {
            return nextAdditionHandler.handleRequest(number);
        }
    }
}
Now, let's create a class which will link the handlers.
public class AdditionProcessor {
    private AdditionHandler firstHandler;
    private AdditionHandler prevHandler;

    public void addHandler(AdditionHandler handler) {
        if (prevHandler != null) {
            prevHandler.setNext(handler);
        } else {
            firstHandler = handler;
        }
        prevHandler = handler;
    }

    public int handleRequest(int value) {
        return firstHandler.handleRequest(value);
    }
}
And of course a client which actually invokes the functionality:
public class AdditionClient {
    private AdditionProcessor processor;

    public AdditionClient() {
        createProcessor();
    }

    private void createProcessor() {
        processor = new AdditionProcessor();
        processor.addHandler(new Add5Handler());
        processor.addHandler(new Add4Handler());
    }

    public void addRule(AdditionHandler handler) {
        processor.addHandler(handler);
    }

    public void requestReceived(int value) {
        System.out.println("value=" + processor.handleRequest(value));  
    }

    public static void main(String[] args) {
        AdditionClient client = new AdditionClient();
        client.requestReceived(6);
        client.requestReceived(42);
    }
}
So Scala has some clear advantages here.  Of course, people will say 'ah, but in Java you can just do...'
public int addFunction(int value) {
    if ((value > 0) && (value <= 7)) {
        return value + 5;
    } else if ((value > 7) && (value <= 100)) {
        return value + 4;
    } else {
        // ...
    }
}
And yes, for this specific case, this will work. But what if your functions / commands become more complex? Are you going to hang around in if / else land? Probably not. 'Til the next time, take care of yourselves.

Sunday, January 6, 2013

Scala function literals

Functions are an important part of the Scala language. Scala functions can have a parameter list and can also have a return type. So the first confusing thing is: what's the difference between a function and a method? Well, the difference is that a method is just a type of function that belongs to a class, a trait or a singleton object.
So what's cool about functions in Scala? Well, you can define functions inside functions (these are called local functions) and you can also have anonymous functions which can be passed to and returned from other functions. This post is about those anonymous functions, which are referred to as function literals.
As stated, one of the cool things about function literals is that you can pass them to other functions. For example, consider the snippet below where we pass a function to a filter function for a List.
List(1,2,3,4,5).filter((x: Int)=> x > 3)
In this case, the function literal is (x: Int) => x > 3. This will output: resX: List[Int] = List(4, 5). The => symbol, called "right arrow", means convert the thing on the left to the thing on the right. The function literal in this example is just one simple statement (that's what they usually are), but it is possible for function literals to have multiple statements in a traditional function body surrounded by {}. For example, we could say:
List(1,2,3,4,5).filter((x: Int)=>{
  println("x="+ x);
  x > 3;})
which gives:
resX: List[Int] = List(4, 5)
Now one of the key features of Scala is to be able to get more done with less code. So with that mindset, let's see how we can shorten our original function literal. Firstly, we can remove the parameter type.
List(1,2,3,4,5).filter((x)=> x > 3)
This technique is called target typing. The target usage of the expression - in this case, what filter expects - is allowed to determine the type of the x parameter. We can further reduce the strain on our fingers by removing the parentheses. This is because the parentheses were only there to show what was being referred to as Int in the parameter typing. Now that the typing is inferred, the brackets are superfluous and can be removed.
List(1,2,3,4,5).filter(x => x > 3)
Shorten it even more? yeah sure... We can use the placeholder underscore syntax.
List(1,2,3,4,5).filter(_ > 3)
Underscores have different meanings in Scala depending on the context in which they are used. In this case, if you are old enough, think back to the cheesy game show Blankety Blank.
This game show consisted of sentences with blanks in them and the contestants had to make suggestions for what went into the blank. In this example, the filter function fills in the blank with the values in the List it is being invoked on. So the filter function is the Blankety Blank contestant and the List(1,2,3,4,5) is what the filter function uses to fill in the blank.
So now our code is really neat and short. In Java to achieve the same, it would be:
Iterator<Integer> it = new ArrayList<Integer>(Arrays.asList(1,2,3,4,5)).iterator();
while (it.hasNext()) {
    Integer myInt = it.next();
    if (myInt <= 3) it.remove();  // keep only the elements greater than 3
}
So here we can see where Scala makes code shorter and development time quicker. 'Til the next time!

Tuesday, January 1, 2013

Scala - for loops

Right, time to broaden the horizons. It's 2013 and I am going to start blogging about Scala, which I am trying to learn. I am going to start with for loops.
for (i <- 1 to 5) {
  println("Iteration " + i)
}
It is quite easy to figure out what is going on here, without even mentioning the word Scala. Hey, it's just a for loop and yeah, it iterates from 1 to 5 and there's probably some type inference going on - since Scala is statically typed. That's all fine, but I find it useful when trying to learn a new language to learn the language of that language. For anyone coming from a Java background, the '<-' certainly warrants some noun. This is called a generator. Why? Because it generates individual values from a range, which in this case is the 1 to 5 part.

There is not much else interesting in this example except the indent of 2 spaces - Java programmers will be used to 4. So what else? Well, there are two styles of for loops in Scala: foreach and for. The former is intended for a functional approach and I'll cover it in another post. The latter is suited to the imperative style. In fact, the for and foreach constructs are an excellent example of how Scala facilitates both imperative and functional programming.

Give me more?

Sure, let's have at look at more tricks with for loops using the imperative approach.
class ForLoopExample {
  def forExampleWithTo() {
    for (i <- 1 to 5)
      println("Iteration " + i)
  }

  def forExampleWithUntil() {
    for (i <- 1 until 5)
      println("Iteration " + i)
  }

  def forExampleWithMultipleRanges() {
    for (i <- 1 to 2; j <- 4 to 5) {
      println("Value of i=" + i);
      println("Value of j=" + j);
    }
  }

  def forExampleWithFilter() {
    for (i <- 1 to 5 if i % 2 == 0) {
      println("Filtered i=" + i);
    }
  }

  // storing results from a for loop.
  def forExampleStoreValues() {
    val retVal = for { i <- 1 to 5 if i % 2 == 0 } yield i;
    println("retVal=" + retVal);    
  }
}
So some points:
  1. There is no need to explicitly make the class ForLoopExample public, because public is the default access level in Scala. Where you said public in Java, you say nothing in Scala.
  2. The only difference between forExampleWithTo and forExampleWithUntil is that one uses to in its range and the other uses until. In these examples to means 1,2,3,4,5 and until means 1,2,3,4 - i.e. the last element is not included.
  3. forExampleWithMultipleRanges shows how to iterate over multiple ranges. In addition, note the statements in the for loop are enclosed in {}. The {} are required when there are multiple statements in each iteration. If there is only one statement they can be omitted.
  4. forExampleWithFilter shows how to filter out values from the range.
  5. forExampleStoreValues shows how to store the iteration values from a for loop using yield.
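Point 5 is worth seeing on its own: yield collects each iteration's value into a collection, so the filtered loop below builds the even numbers directly.

```scala
// yield gathers each generated value into a new collection
val evens = for { i <- 1 to 5 if i % 2 == 0 } yield i
println("retVal=" + evens) // retVal=Vector(2, 4)
```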

Now how about invoking these for examples?

object MainRunner {
  def main(args: Array[String]) {
    println("Scala stuff!")  // println comes from Predef, which provides definitions available inside any Scala compilation unit.
    runForExamples()
  }

  def runForExamples() {
    val forLE = new ForLoopExample(); // No need to declare the type.
    println("forExampleWithTo()=" + forLE.forExampleWithTo());
    println("forExampleWithUntil=" + forLE.forExampleWithUntil);   // () brackets for method invocation not needed.
    println("forExampleWithFilter=" + forLE.forExampleWithFilter)  // semicolons not needed to end lines
    println("forExampleWithMultipleRanges=" + forLE.forExampleWithMultipleRanges);
    println("forExampleStoreValues=" + forLE.forExampleStoreValues)
  }
}
And for some more salient points:
  1. Rather than MainRunner being declared as a class, it is declared as an object. This means it is a singleton.
  2. The main() method is similar to Java's public static void main, except there is no need for public (it's the default), no need for static (we are in a singleton) and no need to declare the return type. You see, with Scala you get more done with less typing.
  3. In some cases I omit the () from the method invocation. When a method has no arguments, Scala allows the omission of the (). However, this notation should only be used when the method has no side effects, i.e. the method does not change the state of anything - so I am only using it here for purposes of illustration.
So that's it. I hope you had a great 2012 and an even better 2013.