Sunday, July 16, 2017

Outputting the given, when, then: Extending Spock

Spock is a testing framework for Java and Groovy applications, created in 2008 by Peter Niederwieser, a software engineer with Gradleware, which facilitates, amongst other things, BDD.  Leveraging this example, a story may be defined as:
Story: Returns go to stock

As a store owner
In order to keep track of stock
I want to add items back to stock when they're returned.

Scenario 1: Refunded items should be returned to stock
Given that a customer previously bought a black sweater from me
And I have three black sweaters in stock.
When he returns the black sweater for a refund
Then I should have four black sweaters in stock.

Scenario 2: Replaced items should be returned to stock
Given that a customer previously bought a blue garment from me
And I have two blue garments in stock
And three black garments in stock.
When he returns the blue garment for a replacement in black
Then I should have three blue garments in stock
And three black garments in stock.
Spock makes it possible to map tests very closely to scenario specifications using the same given, when, then format. In Spock we could implement the first scenario as:
import spock.lang.Specification

class SampleSpec extends Specification {
    def "Scenario 1: Refunded items should be returned to stock"() {
        given: "that a customer previously bought a black sweater from me"
        // ... code 
        and: "I have three black sweaters in stock."
        // ... code
        when: "he returns the black sweater for a refund"
        // ... code
        then: "I should have four black sweaters in stock."
        // ... code
    }
}

What would be nice would be a way to ensure accurate mapping of test scenario requirements to test scenario implementation.  We could get some way down this path if we could output the given, when, then text from our tests.  Spock allows us to add this functionality through its extension framework.

So, let's say our BA is really curious and wants more confidence from the developer that they stuck to the same given, when, then format and that their code is in sync.  They want to get this information easily.  The developer could provide it by first defining this annotation:
import java.lang.annotation.*
import org.spockframework.runtime.extension.ExtensionAnnotation

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@ExtensionAnnotation(LogScenarioDescriptionExtension)
@interface LogScenarioDescription {}
Followed by this implementation:
import org.apache.log4j.Logger
import org.spockframework.runtime.AbstractRunListener
import org.spockframework.runtime.extension.AbstractAnnotationDrivenExtension
import org.spockframework.runtime.model.FeatureInfo
import org.spockframework.runtime.model.SpecInfo


class LogScenarioDescriptionExtension extends AbstractAnnotationDrivenExtension<LogScenarioDescription> {
    final static Logger logger = Logger.getLogger("scenarioLog." + LogScenarioDescriptionExtension.class.getName());

    @Override
    void visitSpecAnnotation(LogScenarioDescription annotation, SpecInfo spec) {
        spec.addListener(new AbstractRunListener() {
            @Override
            void afterFeature(FeatureInfo feature) {
                if (System.getenv("logScenarios")) {
                    logger.info("***SCENARIO TEST:*** " + feature.name)
                    for (block in feature.blocks) {
                        logger.info(block.kind);
                        for (text in block.texts) {
                            logger.info(text)
                        }
                    }
                }
            }
        })
    }
}
The annotation is then applied to the test:
@LogScenarioDescription
class SampleSpec extends Specification {
  //...
}
When the test is executed, it gives the following output:
***SCENARIO TEST:*** Scenario 1: Refunded items should be returned to stock
GIVEN
that a customer previously bought a black sweater from me
AND
I have three black sweaters in stock.
WHEN
he returns the black sweater for a refund
THEN
I should have four black sweaters in stock.
This output is sent to a dedicated scenario logfile by using the following log4j configuration:
log4j.rootLogger=INFO, stdout

log4j.logger.scenarioLog.extension.custom=INFO, scenarioLog

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%m%n

log4j.appender.scenarioLog=org.apache.log4j.FileAppender
log4j.appender.scenarioLog.File=logs/scenario.log
log4j.appender.scenarioLog.layout=org.apache.log4j.PatternLayout
log4j.appender.scenarioLog.layout.ConversionPattern=%m%n
and now you have a logfile that your BA and QA can read!  This helps foster an Agile culture of collaboration and ATDD, where it is possible to check the test scenarios implemented against those that were agreed.

Tuesday, June 13, 2017

CoderDojo, so what's the point?

Initially I was skeptical of CoderDojo.  Here's another thing IT professionals are doing for free.  Why isn't there an Economics Dojo, a Medical Dojo or a Legal Dojo?  Why is a profession that at times is extremely competitive, with long, demanding hours and low job security, being glamorized beyond a point of credible fiction?  Why are people being told they need to code when there are plenty of careers where you will never need to code, and there always will be?  And shouldn't our kids be out getting exercise, exploring nature?  At least that's what Steve Jobs thought.

It didn't stop there.  There is this trap that all parents fall for: if they get their kids involved in something young they'll be a step ahead, and if they don't they'll be a step behind.  CoderDojo is not immune to this quasi chimera.  So I started thinking: over my many glorious years in the IT profession, I have worked with a range of coders, from the exceptional to the cryptically insane.  If I were to differentiate between a good coder and a not-so-good coder, the number one differentiator is habits.  Yes, habits.  Good coders follow good habits:
  1. They write unit tests first.
  2. They are always seeking feedback on their code.
  3. If something complex is happening, they seek to build consensus.
  4. They follow agreed conventions and industry standards.
These good habits mean you end up with something maintainable.   On the other side, coders with bad habits:
  1. They never write adequate tests.
  2. They don't take feedback well (usually because they are not used to it).
  3. If something complex is happening, they hack something up and you find out about it later than you should.
  4. They don't follow conventions - they just stick to the way they do things, which only makes sense to them.
So someone with the same aptitude, the same I.Q., the same hairstyle ends up producing something
that is much more difficult to maintain.

Don't get me wrong, there was a time when aptitude was much more important.  Complex C++ memory management or any assembly code required serious aptitude, and a base level will always be needed.  There are about 50 important principles, from encapsulation to recursion, that you need some sort of aptitude to get.  But now Stack Overflow, Google, and lots of great libraries and frameworks mean it is really more about habits.  Pick a technology everyone is using and you get lots of support for free.

So learning to code at age 9, is that going to do anything to help you get good habits? Of course it isn't.  So then what's the point?  The point is for one thing only: fun.

As a friend recently said, it is the new Meccano and Lego.  Let them code but make sure they still play with Lego and Meccano.

So in my case, my older son (age 7) recently came home from school crying because he wanted to learn Scratch.  My wife gave him an old laptop and after reading a few PDFs he was away coding a little game, talking about loops, scripts and wanting to change the mp3 files.  I was a bit shocked watching him stare at the screen trying to figure out his own code, coming out with tech babble and being able to get something working all by himself.  Next his friend was over.  Is this pair programming for kids?

Anyway, no doubt, this new generation will have more information available to them than any before.  This provides all sorts of creative avenues, not just in code.  However, I can't agree with articles such as this recent one from RTE, telling us why your kid should code.

Kids should do what is healthy, safe and fun.  If they find Economics, Law or Medicine more stimulating than coding, great.  Perhaps the experts in those professions could also run free classes for all kids.  Alas, I somewhat doubt that will ever happen...



Wednesday, June 8, 2016

Triggering a Client Cache Refresh

More and more web sites are using a single page style architecture.  This means there is a bunch of static resources (aka assets) including:
  • JS 
  • CSS
  • HTML templates
  • Images
  • ...
all residing in the client's browser.  For performance reasons, the static content is usually cached. And like all caches, eventually the data gets stale and the cache must be refreshed.  One excellent technique for achieving this in the web tier is Cache busting (see: here and here).
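For a concrete picture of what cache busting looks like, here is an illustrative sketch (my own, not from the linked articles): the build embeds a hash of each asset's content in its filename, so any change to the asset yields a new URL, which the browser fetches fresh while still caching aggressively. The class and method names here are hypothetical.

```java
// Illustrative cache-busting sketch: embed a content hash in the asset's
// filename so a new version of the asset gets a brand-new URL.
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class CacheBuster {
    /** e.g. bustedName("app.js", content) -> "app.<8-char-hash>.js" */
    public static String bustedName(String fileName, String content) throws Exception {
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        byte[] digest = md5.digest(content.getBytes(StandardCharsets.UTF_8));
        // First 8 hex chars of the MD5 hash are plenty for uniqueness here.
        String hash = String.format("%032x", new BigInteger(1, digest)).substring(0, 8);
        int dot = fileName.lastIndexOf('.');
        return fileName.substring(0, dot) + "." + hash + fileName.substring(dot);
    }

    public static void main(String[] args) throws Exception {
        // Different content -> different filename -> browsers fetch the new file.
        System.out.println(bustedName("app.js", "console.log('v1')"));
        System.out.println(bustedName("app.js", "console.log('v2')"));
    }
}
```

In practice a build tool does this renaming and rewrites the references in the HTML, but the principle is exactly this: the URL changes whenever the content does.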
However, even using this excellent pattern there still needs to be some sort of trigger to indicate that things have gone out of date.  Some options:

HTML5 Sockets

In this case a server can push a message to any client to say go and get the new stuff.

Comet style polling

The client continually polls the server in the background, asking if there is a new version available.  When the server says there is, the client is triggered to start cache busting.

Both of these are good approaches but in some edge cases may not be available:
  1. Architecture may not allow HTML5 sockets - might be some secure banking web site that just doesn't like it.
  2. The Comet style polling might be too intensive and may just leave open edge cases where a critical update is needed but the polling thread is waiting to get fired. 

Client refresh notification pattern

Recently, on a project, neither of the above approaches was available, so I needed something else, which I shall now detail.  The solution was based on some existing concepts already in the architecture (which are useful for other things) and some new ones that needed to be introduced:
  1. The UI always knows the current version it is on.  This was already burnt into the UI as part of the build process.
  2. Every UI was already sending up the version it is on in a custom header, and it was doing this for every request.  Note: this is very useful as it can make diagnosing problems easier.  If someone is seeing weird problems, it is nice to be able to see that they are on an old version straight away.
The following new concepts were then introduced:
  1. The server would now always store the latest client version it has received.  This is easy to do and, again, a handy thing to have anyway.
  2. The server would then always add a custom header to every response to indicate the latest client version it has received.  Again, a useful thing to have for anyone doing a bit of debugging with Firebug or Chrome web tools.
  3. Some simple logic would then be in the client so that when it sees a different version in the response to the one it sent up, it knows there's a later version out there, and that's the trigger to start cache busting!  This should be done in a central place, for example an Angular HTTP interceptor (if you are using Angular).
  4. Then, as part of every release, just hit the server with the latest client during smoke testing.
  5. As soon as any other client makes a request, it will be told that there is a later client version out there and it should start cache busting.
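The steps above can be sketched in a few lines. This is my own minimal illustration, not the original project's code: the class name `VersionTracker` and the version-comparison helper are assumptions, and a real implementation would live in a servlet filter (or equivalent) that reads and writes the custom version headers.

```java
// Sketch of the client refresh notification pattern: the server remembers the
// newest client version it has seen and echoes it back on every response.
import java.util.concurrent.atomic.AtomicReference;

public class VersionTracker {
    // Latest client version the server has seen; in a multi-JVM deployment
    // a distributed cache would back this instead of an in-memory reference.
    private final AtomicReference<String> latest = new AtomicReference<>("0.0.0");

    /** Record the version a client sent up, keeping it if it is newer. */
    public String recordAndGetLatest(String clientVersion) {
        latest.accumulateAndGet(clientVersion,
                (current, sent) -> compare(sent, current) > 0 ? sent : current);
        return latest.get();
    }

    /** Client-side rule: start cache busting when the echoed version differs. */
    public static boolean shouldBustCache(String sentVersion, String echoedVersion) {
        return !sentVersion.equals(echoedVersion);
    }

    /** Naive dotted-number comparison, e.g. "1.5.1" is newer than "1.5.0". */
    static int compare(String a, String b) {
        String[] as = a.split("\\."), bs = b.split("\\.");
        for (int i = 0; i < Math.max(as.length, bs.length); i++) {
            int ai = i < as.length ? Integer.parseInt(as[i]) : 0;
            int bi = i < bs.length ? Integer.parseInt(bs[i]) : 0;
            if (ai != bi) return Integer.compare(ai, bi);
        }
        return 0;
    }

    public static void main(String[] args) {
        VersionTracker server = new VersionTracker();
        // UI 1 (on 1.5.1) makes a request; the server updates its latest version.
        String echoedToUi1 = server.recordAndGetLatest("1.5.1");
        // UI 2 (on 1.5.0) makes a request; the response header now says 1.5.1.
        String echoedToUi2 = server.recordAndGetLatest("1.5.0");
        System.out.println(echoedToUi2);                           // 1.5.1
        System.out.println(shouldBustCache("1.5.0", echoedToUi2)); // true
        System.out.println(shouldBustCache("1.5.1", echoedToUi1)); // false
    }
}
```

The `main` method walks through exactly the two-client scenario: the second client sends up 1.5.0, gets 1.5.1 back, sees the mismatch and starts cache busting.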

So the clients effectively tell each other about the latest version, but without ever talking to each other - it's all via the server.  It's like the mediator pattern.  But it's probably still confusing, so let's take a look at a diagram.



With respect to the diagram above: 
  • UI 1 has a later version (for example 1.5.1) of static assets than UI 2  (for example 1.5.0)
  • Server thinks the latest version static assets is 1.5.0
  • UI 1 then makes a request to the server and sends up the version of static assets it has e.g. 1.5.1
  • Server sees 1.5.1 is newer than 1.5.0 and then updates its latest client version variable to 1.5.1
  • UI 2 makes a request to the server and sends up the version of static assets it has which is 1.5.0
  • Server sends response to UI 2 with a response header saying that the latest client version is 1.5.1
  • UI 2 checks and sees that the response header client version is different to the version sent up and then starts busting the cache
Before the observant amongst you start saying this will never work in a real enterprise environment as you never have just one server (as in one JVM) you have several - true.  But then you just store the latest client version in a distributed cache (e.g. Infinispan) that they can all access. 

Note: In this example I am using two web clients and one back end server.  But the exact same pattern could be used for back end micro-services that communicate with each other.  Basically, anywhere there is a range of distributed clients (they don't have to be web browsers) and caching of static resources is required, this pattern could be used. 

Until the next time, take care of yourselves. 



Tuesday, June 7, 2016

Why Agile can fail?

Most development teams now claim they are doing Agile and are usually doing some variant of Scrum.  In this blog post I propose three reasons why teams can struggle with Scrum or any form of Agile development.  And conveniently, they all begin with R.

Requirements

In the pre-agile days, requirements were usually very detailed and well thought out.  For example, when I worked in Ericsson, before we moved to the Agile methodology RUP, the projects were based on a classical Waterfall model held to a very high standard (usually around CMM level 3).  The requirements came from system experts who had lots of experience in the field, were very technically skilled and had incredible domain knowledge.  Their full-time job was to tease things out, knowing that they effectively had only one chance to get it right.

In the Agile world, with shorter iteration cycles and many more releases, there are chances to make something right when you get it wrong.  Functionality can be changed around much more easily.  This is good.  It is easier for customer collaboration and thus enables more opportunities to tweak, fine-tune and get all stakeholders working together sculpting the best solution.

However, because the penalty for getting requirements wrong is not as great as it is in the Waterfall model, the level of detail and clarity in requirements can start becoming insufficient, and before you know it, development time in the sprint gets wasted trying to figure out what is actually required.  The key is to get the balance right: enough detail so that there can be clear agreement across the dev team, customer and any other stakeholders about what is coming next, but not so much detail that people are just getting bogged down in analysis paralysis and forgetting that they are supposed to be shipping regularly.

I suggest one process for helping get the balance between speed and detail for your requirements in this blog post.

Releases

In the Agile methodology Scrum you are supposed to release at the end of every sprint.  This means instead of doing 1 - 4 releases a year you will be doing closer to 20, if not more.  If your release is painful, your team will struggle with any form of Agile.  For example, say a full regression test, QA, bug fixing, build, deploy etc. takes 10 days (including fixing any bugs found during smoke testing); it means that 20 * 10 = 200 man days are going to releases.  Whereas in the old world, with 4 releases, it would just be 4 * 10 = 40 days.  In case it's not obvious, that's a little bit regressive.

Now, the simple maths tells us that a team with a long release cycle (for whatever reason) will struggle to release regularly and will thus struggle with any Agile methodology.

To mitigate this risk, it is vital that the team has superb CI with very high code coverage and works with a very strict CI culture.  This includes:
  • Ensuring developers are running full regression tests on feature branches before merging into dev branches, to minimise broken builds on dev branches
  • Fix any broken build as a priority
  • No checking into a broken build
  • Tests are written to a high quality - they need to be as maintainable as your source code
  • Coding culture where the code is written in a style so it is easily testable  (I'll cover this in a separate blog post)
  • CI needs to run fast.  No point having 10,000 tests if they take 18 hours to run.  Run tests in parallel if you have to.  If your project is so massive that it really does take 18 hours to run automated tests, you need to consider some decomposition.  For example, a micro-service architecture where components are in smaller, more manageable parts that can be individually released and deployed in a lot less than 18 hours.
For more information on how to achieve great CI, see here and here.

By mastering automated testing, the release cycle will be greatly shortened.  The dev team should be aiming towards a continuous delivery model where, in theory, any check-in could be released if the CI says it is green.  Now, this all sounds simple, but it is not.  In practice you need skilled developers to write good code and good tests.  But the better you are at it, the easier it will be to release, and the easier it will be to be truly agile.

Note: One of the reasons why the micro-services architectural style has become popular is because it offers an opportunity to avoid big bang releases and instead only release what needs to be. That's true.  However, most projects are not big enough or complex enough to need this approach.  Don't just jump on a bandwagon, justify every major decision.

Roles 

The most popular agile methodology, Scrum, defines only three roles:
  • Product Owner 
  • Scrum Master
  • Dev Team. 
That's it.  But wait a sec! No Tech Lead, no Architect, no QA, no Dev Manager - surely you need some of these on a project.
Of course you do.  But this can often be forgotten.  Scrum is a mechanism to help manage a project; it is not a mechanism to drive quality engineering, quality architecture and minimise technical debt.  Using Scrum is only one aspect of your process.  While your Scrum Master might govern process, they don't have to govern architecture or engineering.  They may not have a clue about such matters.  If all the techies are just doing their own stories, trying to complete them before the next show and tell, and no-one is looking at the big picture, the project will quickly turn into an unmanageable ball of spaghetti.

This is a classical mistake at the beginning of a project.  Because at the beginning there is no tech debt.  There are also no features and no bugs, and of course all of this is because there is no code!  But the key point is that there is no tech debt.  Everyone starts firing away at stories and there is an illusion of rapid progress, but if no-one is looking at the overall big picture - the architecture, the tech debt, the application of patterns or lack of them - after a few happy sprints the software entropy will very quickly explode.  Meaning that all that super high productivity in the first few weeks of the project will quickly disappear.

To mitigate this, I think someone technical has to back away from the coding (especially the critical path) and focus on the architecture, the technical leadership and enforcing code quality.  This will help ensure good design and architecture decisions are made and that the non-functional targets of the system are not just well defined but are met.  It is not always a glamorous job, but if it ain't done, complexity will creep in and soon make everything, from simple bug fixing to giving estimates to delivering new features, much harder than it should be.

In the Scrum world it is a fallacy to think every technical person must be doing a story and nothing else.  It is the equivalent of saying that everyone building a house has to be laying a brick, and then wondering why the house is never finished when the bricks never seem to line up.



Someone has to stand back ensure that people's individual work is all coming together, the tech debt is kept at acceptable levels and any technical risks are quickly mitigated.

Until the next time, take care of yourselves.