Saturday, May 18, 2013

How could Scala do a merge sort?

Merge sort is a classic "divide and conquer" sorting algorithm. You should rarely have to write one yourself, because a standard library implementation will already do it for you, but it is a useful way to demonstrate a few characteristics of programming techniques in Scala. First, a quick recap of merge sort. A list of elements is split into smaller and smaller lists. When a list has one element it is considered sorted. It is then merged with the list beside it. When there are no more lists to merge, the original data set is sorted. Now let's take a look at how to do that using an imperative approach in Java.
public void sort(int[] values) {
   int[] numbers = values;
   int[] auxiliaryNumbers = new int[values.length];
   mergesort(numbers, auxiliaryNumbers, 0, values.length - 1);
}

private void mergesort(int[] numbers, int[] auxiliaryNumbers, int low, int high) {
    // Check if low is smaller than high; if not, the range is already sorted
    if (low < high) {
        // Get the index of the element in the middle
        int middle = low + (high - low) / 2;

        // Sort the left side of the array
        mergesort(numbers, auxiliaryNumbers, low, middle);
        // Sort the right side of the array
        mergesort(numbers, auxiliaryNumbers, middle + 1, high);

        // Combine both halves
        merge(numbers, auxiliaryNumbers, low, middle, high);
    }
}

/**
 * Merges a[low .. middle] with a[middle + 1 .. high].
 * This method assumes a[low .. middle] and a[middle + 1 .. high] are each
 * already sorted. It leaves a[low .. high] sorted.
 */
private void merge(int[] a, int[] aux, int low, int middle, int high) {
    // Copy both parts into the aux array
    for (int k = low; k <= high; k++) {
        aux[k] = a[k];
    }

    int i = low, j = middle + 1;
    for (int k = low; k <= high; k++) {
        if (i > middle)                      a[k] = aux[j++];
        else if (j > high)                   a[k] = aux[i++];
        else if (aux[j] < aux[i])            a[k] = aux[j++];
        else                                 a[k] = aux[i++];
    }
}

public static void main(String[] args) {
     ...
     ms.sort(new int[] {5, 3, 1, 17, 2, 8, 19, 11});
     ...
}
Discussion...
  1. An auxiliary array is used to achieve the sort. Elements to be sorted are copied into it and then, once sorted, copied back. It is important this array is created only once, otherwise there can be a performance hit from excessive array creation. The merge method does not have to create an auxiliary array, but since it mutates an existing one, it has side effects.
  2. Merge sort's time complexity is O(N log N).
Now let's have a go at a Scala solution.
  def mergeSort(xs: List[Int]): List[Int] = {
    val n = xs.length / 2
    if (n == 0) xs
    else {
      def merge(xs: List[Int], ys: List[Int]): List[Int] =
        (xs, ys) match {
          case (Nil, ys) => ys
          case (xs, Nil) => xs
          case (x :: xs1, y :: ys1) =>
            if (x < y) x :: merge(xs1, ys)
            else y :: merge(xs, ys1)
        }
      val (left, right) = xs.splitAt(n)
      merge(mergeSort(left), mergeSort(right))
    }
  }
Key discussion points:
  1. It is the same divide and conquer idea.
  2. The splitAt function is used to divide the data each time into a tuple of two lists. Every recursion creates a new tuple.
  3. The local function merge is then used to perform the merging. Local functions are a useful feature as they help promote encapsulation and prevent code bloat.
  4. Neither the mergeSort() nor the merge() function has any side effects. They don't change any existing object; they create (and throw away) new ones.
  5. Because the same data structure is not mutated across iterations of the merge, there is no need to pass beginning and ending indices around, which can get very buggy.
  6. This merge recursion uses pattern matching to great effect here. Not only is there matching for data lists but when a match happens the data lists are assigned to variables:
    • x meaning the top element in the left list
    • xs1 the rest of the left list
    • y meaning the top element in the right list
    • ys1 meaning the rest of the data in the right list
This makes it very easy to compare the top elements and to pass around the rest of the data to compare. Would the recursive approach be possible in Java? Of course, but it would be much more complex. You don't have pattern matching, and you don't get a nudge to declare objects as immutable the way Scala does by making you choose between val and var. In Java it would usually be easier to read the code for this problem written in an imperative style, where objects are changed across iterations of a loop. But in Scala a functional, recursive approach can be quite neat. So here we see an example of how Scala makes it easier to achieve good, clean, concise recursion and makes a functional approach much more feasible.
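To sanity-check the Scala version, here is a minimal, self-contained sketch of it being exercised (the surrounding object name is my own, purely for illustration):

```scala
object MergeSortDemo {
  def mergeSort(xs: List[Int]): List[Int] = {
    val n = xs.length / 2
    if (n == 0) xs // lists of zero or one element are already sorted
    else {
      // Merge two already-sorted lists into one sorted list
      def merge(xs: List[Int], ys: List[Int]): List[Int] =
        (xs, ys) match {
          case (Nil, ys) => ys
          case (xs, Nil) => xs
          case (x :: xs1, y :: ys1) =>
            if (x < y) x :: merge(xs1, ys)
            else y :: merge(xs, ys1)
        }
      val (left, right) = xs.splitAt(n)
      merge(mergeSort(left), mergeSort(right))
    }
  }

  def main(args: Array[String]): Unit = {
    // Same input as the Java example above
    println(mergeSort(List(5, 3, 1, 17, 2, 8, 19, 11)))
    // List(1, 2, 3, 5, 8, 11, 17, 19)
  }
}
```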

Friday, May 17, 2013

Coursera's Scala course

Coursera run an excellent Scala course which I just had the opportunity of participating in. The course duration is seven weeks. Each week consists of about 1.5 hours of lectures and then an assignment which can take anything between one hour and about five hours. The course syllabus is outlined here. So, personal opinion time...

Was it worth it?  Absolutely.  Unless you are a complete pro in Scala and Functional Programming you will learn something from this course - most importantly a deeper understanding of the FP paradigm.  

I remember many eons ago when I first started learning OO, like many noobs I thought I understood OO once I understood polymorphism, inheritance and encapsulation and could name-check a few design patterns. It took me a while to really realise the difference between good and bad abstractions and how dependencies that look benign can drastically increase software entropy in a project. Similarly, many people might approach FP thinking it is just all about function literals, higher-order functions, inner functions and closures. Well, the first important point to make about this course is that it does a super job of emphasising the importance of smart and clever usage of recursion in FP. This was not apparent to me before the course. The reason why recursion is a big deal in FP is, of course, because immutable state is a big deal in FP. Immutability is easier to achieve when you pass data between iterations, as in recursion, than in an imperative-style loop, which usually means some object is being changed across iterations.

Now, I hope that made some sense. Because the real brain tease is when you are given a problem that you could do with one arm tied behind your back using a for loop and told to do it with recursion. It takes a lot of practice to get really good at recursion, and it is something I still have to practise more myself, but the course really made me think about it much, much more than I ever did previously.
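To make that concrete, here is a small sketch (the object and function names are my own) contrasting the two styles on a trivial problem: the loop mutates a variable across iterations, while the recursion passes the running total along instead, so nothing is ever reassigned.

```scala
object SumDemo {
  // Imperative style: total is mutated on every iteration of the loop.
  def sumLoop(xs: List[Int]): Int = {
    var total = 0
    for (x <- xs) total += x
    total
  }

  // Functional style: the running total is passed between recursive calls
  // as an accumulator, so there is no mutable state at all.
  @annotation.tailrec
  def sumRec(xs: List[Int], acc: Int = 0): Int = xs match {
    case Nil    => acc
    case h :: t => sumRec(t, acc + h)
  }
}
```

Both return the same result; the recursive version simply makes the state flow explicit as a parameter.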

So what else did I learn?
  1. Buzz words - exact difference between function application and function type
  2. Left association for function application and right association for function type.
  3. Passing functions around anonymously - you should only rarely need def for functions that are just being passed around
  4. The Art of DRY (Don’t repeat yourself) in FP.  Functions should always be short and if it makes sense to abstract out common parts do so
  5. Difference between val, lazy val and def evaluation times (evaluated once, evaluated lazily on first use, and evaluated every time, respectively)
  6. The power of pattern matching, especially when using it with recursion
  7. Streams – lazy lists and their memoization potential
  8. It is extremely difficult doing a Scala course when you have two very young children.
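A few of those points can be sketched in code (a minimal illustration; the object and names are my own):

```scala
object EvaluationDemo {
  // Point 2: function types associate to the right.
  // Int => Int => Int means Int => (Int => Int).
  val add: Int => Int => Int = x => y => x + y

  // Function application associates to the left:
  // add(2)(3) means (add(2))(3), so partial application is natural.
  val addTwo: Int => Int = add(2)

  private var count = 0
  private def tick(): Int = { count += 1; count }

  // Point 5: three different evaluation times.
  val once = tick()            // evaluated exactly once, right here
  lazy val onFirstUse = tick() // evaluated on first access, then cached
  def everyTime = tick()       // re-evaluated on every call
}
```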
Overall a great course. I hope to elaborate on some of these ideas and topics in future posts.