Liz Douglass

Archive for December 2009

Book Club: Working Effectively With Legacy Code – Chapters 17, 18 and 19 (Michael Feathers)

leave a comment »

This week at book club we continued the three-chapter theme, taking on chapters 17, 18 and 19 of Working Effectively With Legacy Code. Chapter 17 is the longest of the three and we spent almost all of our time discussing it.

Chapter 17: My Application Has No Structure

In this chapter Feathers discusses why code bases degrade. He says that “They (applications) might have started out with a well-thought-out architecture, but over the years, under schedule pressure, they can get to the point where nobody really understands their complete structure.” He gives three possible reasons why a team may be unaware of this happening:

  • The system is so complex that it takes a long time to get the big picture
  • The system can be so complex that there is no big picture
  • The team is in a very reactive mode, dealing with emergency after emergency so much that they lose sight of the big picture

Feathers details some suggestions for how to communicate the intent of a system including ‘Telling the Story of the System’, a process where “One person starts off by asking the other ‘What is the architecture of the system?’ Then the other person tries to explain the architecture of the system using only a few concepts.” This is useful because “Often when we force ourselves to communicate a very simple view of the system, we can find new abstractions.”

  • I think all of us have used this technique when first coming to a project to try and get an understanding of the system. I usually have this type of conversation with each pair for the first couple of weeks on a project.
  • Ahrum said that in his experience developers find it very difficult to explain a system simply because we generally go into too much detail. Details about logging and security are examples of noise that often creeps into the explanation.
  • Tom said that it’s important to speak with everyone on the project, not just the architect. He finds that architects generally focus too much on application layering.
  • I think this technique is related to Dan’s idea of having a project shaman.

Feathers also discussed Naked CRC. This is a lightweight version of CRC where there is no writing on the cards. “The person describing the system uses a set of blank index cards and lays them down on a table one by one. He or she can move the cards, point at them, or do whatever else is needed to convey the typical objects in the system and how they interact”. I’ve never used this approach before but Anita and I both said that we would like to try it because we think we’re visual learners. Tom suggested that combining the cards and a whiteboard could also be useful, especially to show interactions between components.

We also spent some time discussing other techniques for getting the big picture of an application, especially when encountering it for the first time:

  • Reading the code can be a good way to spot major concepts or the lack of them.
  • Raphael said that it’s useful to listen to the language that people use to describe the system and to compare that to the concepts in the code. Sarah and Mark both recently posted on this.
  • Tom said that he likes to make lots of mindmaps when he first goes onto a new project. He said he generally likes to observe for about a month before he attempts any large refactorings.

Written by lizdouglass

December 19, 2009 at 7:28 am

Posted in Uncategorized


Adding a Spring aspect

leave a comment »

Yesterday Tom and I added a Spring aspect to our front end project. My only previous encounter with aspects was when I read a Spring book about a year ago. So, what is an aspect? According to the documentation “Aspects enable the modularization of concerns such as transaction management that cut across multiple types and objects”.

Tom and I needed to redirect the user to an ‘eek’ page if they tried to do some transactions when their account was in a bad way. An aspect seemed like the right tool for the job. But what sort of aspect? As Tom explained to me, the Around advice is the only type that can be used to change the flow of execution:

Around advice: Advice that surrounds a join point such as a method invocation…. Around advice can perform custom behavior before and after the method invocation. It is also responsible for choosing whether to proceed to the join point or to shortcut the advised method execution by returning its own return value or throwing an exception.
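The “proceed or short-circuit” power that makes around advice special can be sketched without Spring at all. The following is a hypothetical illustration using a plain JDK dynamic proxy — all the names (Controller, advised) are invented, and Spring’s real machinery is far more general:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.function.Predicate;

// An invented stand-in for a controller interface that we want to "advise".
interface Controller {
    String handle(String account);
}

class AroundDemo {
    // Wraps target so the handler runs around every call: it can proceed,
    // or short-circuit and supply its own return value -- the essence of around advice.
    static Controller advised(Controller target, Predicate<String> permitted) {
        InvocationHandler handler = (proxy, method, args) -> {
            String account = (String) args[0];
            if (!permitted.test(account)) {
                return "transactions-not-permitted"; // short-circuit: the target is never called
            }
            return method.invoke(target, args);      // proceed to the advised method
        };
        return (Controller) Proxy.newProxyInstance(
                Controller.class.getClassLoader(), new Class<?>[]{Controller.class}, handler);
    }
}
```

Calling `advised(real, acct -> acct.equals("good")).handle("bad")` returns "transactions-not-permitted" without ever reaching the real controller, which is exactly the behaviour we wanted from the aspect.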

We created the new aspect in these steps:

1. Create a new method annotation. This annotation was applied to all the controller method join points. According to the Spring documentation this is what is known as an “@annotation pointcut designator – the subject of the join point matches only methods that have the given annotation.”

@Retention(RetentionPolicy.RUNTIME)  // retained at runtime so the pointcut can see it
@Target(ElementType.METHOD)
public @interface TransactionPermitted {
}

2. Create the aspect using TDD. Our aspect checks whether the user’s account is in a bad state and redirects to the “transactions-not-permitted” view if so. We used the example in the documentation to create our aspect. Tom pointed out a couple of things while we were doing this that I think are important to remember:

  • Aspects need a no argument constructor.
  • From the docs: “The value returned by the around advice will be the return value seen by the caller of the method”. We had to make sure that all the handler methods advised by this aspect also return a string (and not a ModelAndView for example).
public class TransactionPermittedAspect {

    private final Logger logger = Logger.getLogger(TransactionPermittedAspect.class);

    private FooRepository fooRepository;

    public void setFooRepository(FooRepository fooRepository) {
        this.fooRepository = fooRepository;
    }

    public Object intercept(ProceedingJoinPoint pjp) throws Throwable {
        logger.debug("Checking whether transactions are permitted");

        Widget widget = AspectUtils.getArgumentOfType(pjp.getArgs(), Widget.class);
        Bar bar = fooRepository.getBar(widget);

        if (bar.hasASomeBadCondition()) {
            return "transactions-not-permitted";
        }
        return pjp.proceed();
    }
}


Our aspect (above) uses AspectUtils. This class, created by Tom, abstracts the argument searching and casting:

public class AspectUtils {

    public static <T> T getArgumentOfType(Object[] args, Class<T> type) {
        for (Object arg : args) {
            if (type.isInstance(arg)) {
                return type.cast(arg);
            }
        }
        throw new IllegalArgumentException("Cannot find argument of " + type);
    }
}

3. Apply the annotation to the target handler methods. This step was trivial, but adding the annotations using TDD was interesting. Each class with a method annotated by our new annotation now has a test like this:

public void shouldCheckThatTransactionsArePermittedForFoo() {
    assertNotNull("Transaction permission check not found", myMethod().getAnnotation(TransactionPermitted.class));
}

private Method myMethod() {
    return MethodFinder.findRequestHandlerMethod(WibbleController.class, "myMethod");
}

Note that the MethodFinder looks for the target method with a particular name and a RequestMapping annotation:

import static org.junit.Assert.fail;
import org.springframework.web.bind.annotation.RequestMapping;

import java.lang.reflect.Method;

public class MethodFinder {

    public static Method findRequestHandlerMethod(Class<?> type, String methodName) {
        Method[] methods = type.getMethods();
        for (Method method : methods) {
            if (method.getName().equals(methodName) && (method.getAnnotation(RequestMapping.class) != null)) {
                return method;
            }
        }
        fail("Cannot find method named [" + methodName + "] with RequestMapping annotation in " + type.getName());
        return null;
    }
}


Adding this around aspect took us very little time and it suited our needs perfectly.

Written by lizdouglass

December 16, 2009 at 10:19 am

Posted in Uncategorized


leave a comment »

Today Tom and I were looking at our DefaultHttpWebClient class. This class wraps/adapts an Apache HttpClient. It implements our HttpWebClient interface:

public interface HttpWebClient {

    public HttpResponse post(String endpointURL, String requestBody);

    public HttpResponse get(String endpointURL, QueryString query);
}

Until today we didn’t have any tests for just this class, although there was some coverage from integration tests for other classes. We added tests that are very similar to some written by Cam and Silvio on another project (full credit to them). There are a couple of interesting things happening in our test class (below):

  • A Jetty server is created in the setup method. The DefaultHttpWebClient test instance makes a request to this server.
  • The findFreePort method uses a ServerSocket bound to port 0 to identify a free port for the Jetty server to run on.
  • Each test sets the handler for the test Jetty server to a new TestHandler. The TestHandler class extends the Jetty AbstractHandler class. It delegates to an overloaded handle method that is defined in each test method.
  • In the shouldPerformHttpPost test the handle method returns a response with an OK status code. It also records request information in a map called ‘inspector’, which is used in several assertions.

Here it is:

public class DefaultHttpWebClientIntegrationTest {

    private int freePort;
    private Server server;

    @Before
    public void setup() throws Exception {
        freePort = findFreePort();
        server = new Server(freePort);
        server.start();
    }

    @After
    public void teardown() throws Exception {
        server.stop();
    }

    private int findFreePort() throws Exception {
        ServerSocket socket = new ServerSocket(0);
        int port = socket.getLocalPort();
        socket.close();
        return port;
    }

    @Test
    public void shouldPerformHttpPost() throws Exception {
        final Map<String, String> inspector = Maps.create();

        server.setHandler(new TestHandler() {
            public void handle(HttpServletRequest request, HttpServletResponse response) throws IOException {
                inspector.put("method", request.getMethod());
                inspector.put("contentType", request.getContentType());
                inspector.put("body", IOUtils.toString(request.getInputStream()));

                response.getWriter().print("test response");
            }
        });

        DefaultHttpWebClient client = new DefaultHttpWebClient();
        HttpResponse response = client.post("http://localhost:" + freePort + "/test", "test payload");

        assertThat(inspector.get("method"), equalTo("POST"));
        assertThat(inspector.get("contentType"), equalTo("text/xml; charset=UTF-8"));
        assertThat(inspector.get("body"), equalTo("test payload"));

        assertThat(response.isOK(), equalTo(true));
        assertThat(response.getContentType(), startsWith("text/plain"));
        assertThat(response.getResponseBodyAsText(), equalTo("test response"));
    }

    @Test
    public void shouldPerformHttpGet() throws Exception {
        final Map<String, String> inspector = Maps.create();

        server.setHandler(new TestHandler() {
            public void handle(HttpServletRequest request, HttpServletResponse response) throws IOException {
                inspector.put("method", request.getMethod());
                inspector.put("foo", request.getParameter("foo"));

                response.getWriter().print("test response");
            }
        });

        QueryString queryString = new QueryString();
        queryString.add("foo", "bar");

        DefaultHttpWebClient client = new DefaultHttpWebClient();
        HttpResponse response = client.get("http://localhost:" + freePort + "/test", queryString);

        assertThat(inspector.get("method"), equalTo("GET"));
        assertThat(inspector.get("foo"), equalTo("bar"));

        assertThat(response.isOK(), equalTo(true));
        assertThat(response.getContentType(), startsWith("text/plain"));
        assertThat(response.getResponseBodyAsText(), equalTo("test response"));
    }

    private static abstract class TestHandler extends AbstractHandler {

        public void handle(String target, HttpServletRequest request, HttpServletResponse response, int dispatch)
                throws IOException, ServletException {
            handle(request, response);
            ((Request) request).setHandled(true);
        }

        public abstract void handle(HttpServletRequest request, HttpServletResponse response) throws IOException;
    }
}

Written by lizdouglass

December 14, 2009 at 10:19 am

Posted in Uncategorized

Mock Objects

with 2 comments

Over the course of the last year or so I’ve been learning to play tennis. I’ve definitely improved from where I started (although it’s not hard to improve from disastrously bad). Last night I was playing and noticed that my serve had deteriorated somewhat. The solution to getting it back on track was to go back to basics and practise the simplest and smallest movement that I could (i.e. not complicating the serve with excessive movements).

This morning I was reading Mock Roles, Not Objects and I was reminded of my tennis experiences. It refreshed my thinking on the purpose and practices of TDD and mocking. There were several quotes from the paper that stood out for me:

  • “Using TDD has many benefits but the most relevant is that it directs the programmer to think about the design of code from its intended use, rather from its implementation.”
  • “Mock Objects changes the focus of TDD from thinking about the changes in state of an object to thinking about its interactions with other objects.”
  • “Writing tests drives a framework to think about functionality. Mock Objects provides a framework for making assertions about those relationships and for simulating responses.”

Section 4 of the paper discusses Mock Objects in practice, detailing some guidelines for how to use them properly:

  • “Only mock types you own”. Matt mentioned this one at our book club the other day in the context of skinning and wrapping APIs.
  • “Don’t use getters”. Why? “Getters expose implementation, which increases the coupling between objects and allows responsibilities to be left in the wrong module. Avoiding getters forces an emphasis on object behaviour rather than state…”. I was taught not to use getters by Nick and I find myself discussing it often with each pair.
  • “Specify as little as possible in a test”…”One of the risks with TDD is that tests become ‘brittle’, that is they fail when a programmer makes unrelated changes to application code.” This is one of the most common criticisms of TDD that I have heard from other developers. The paper goes on to say that the cause is that “…(the tests) have been over-specified to check features that are an artefact of implementation, not an expression of some requirement in the object.” I’ve written tests that fall into this category and I think it can be quite challenging to avoid doing so. Steve Freeman, one of the authors of the paper, blogged about this situation happening in a Coding Dojo earlier this year.
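The “only mock types you own” guideline can be made concrete with a hand-rolled example — no mocking framework, and all the names here (Notifier, AccountService) are invented. The third-party notification API is hidden behind an interface we own, and the test substitutes a recording double for that interface:

```java
// The type we own: a thin interface skinning the third-party notification API.
interface Notifier {
    void notify(String recipient, String message);
}

// Production code depends only on the owned interface, never the vendor class.
class AccountService {
    private final Notifier notifier;

    AccountService(Notifier notifier) {
        this.notifier = notifier;
    }

    void close(String account) {
        // ... domain logic omitted ...
        notifier.notify(account, "account closed");
    }
}

// A hand-rolled mock that records the interaction, so a test can assert on
// behaviour (what message was sent to whom) rather than on internal state.
class RecordingNotifier implements Notifier {
    String lastRecipient;
    String lastMessage;

    public void notify(String recipient, String message) {
        lastRecipient = recipient;
        lastMessage = message;
    }
}
```

Because the test only knows about Notifier, swapping the vendor library later doesn’t break it — and there are no getters on AccountService to tempt a state-based assertion.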

From time to time it’s useful to revisit papers like Mock Roles, Not Objects in order to remind yourself of the basics of what you’re doing… (and to remember to watch the ball when playing tennis!)

Written by lizdouglass

December 12, 2009 at 10:31 am

Posted in Uncategorized


Book Club: Working Effectively With Legacy Code – Chapters 14, 15 and 16 (Michael Feathers)

leave a comment »

Mark has returned to the UK so I’m continuing in his tradition of posting about the Thursday technical book club 🙂

This week we continued going through Working Effectively With Legacy Code, tackling an unprecedented three chapters:

Chapter 14: Dependencies on Libraries Are Killing Me

In this chapter Feathers describes how an overuse of direct calls to libraries can cause pain. He suggests that wrapping the API can create a nice seam for testing, so long as there are no constraints imposed by the use of language features such as final/sealed. As Matt pointed out this ties into the following chapter where Feathers discusses when the skinning and wrapping approach is most appropriate.
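The seam that wrapping creates can be sketched in a few lines. This is a hypothetical example with invented names, using the JDK clock as a stand-in for a third-party library call:

```java
// The seam: production code depends on Clock, never on System.currentTimeMillis() directly.
interface Clock {
    long now();
}

// The real implementation is a one-line skin over the library call.
class SystemClock implements Clock {
    public long now() {
        return System.currentTimeMillis();
    }
}

// Any class that needs the time takes a Clock, so tests can substitute their own.
class SessionChecker {
    private final Clock clock;

    SessionChecker(Clock clock) {
        this.clock = clock;
    }

    boolean isExpired(long startedAtMillis, long timeoutMillis) {
        return clock.now() - startedAtMillis > timeoutMillis;
    }
}
```

A test can pass `() -> 10_000L` as the clock and pin down expiry behaviour exactly, without sleeping or depending on the real time — that substitution point is the seam Feathers is talking about.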

Chapter 15: My Application Is All API Calls

Feathers presents two approaches for improving the landscape when the application is all API calls:

1. Skin and Wrap the API

  • Feathers introduced this idea in chapter 10.
  • It is very useful for creating a separation layer from third party libraries.
  • As Ahrum pointed out this can sometimes be a difficult task if there is a proliferation of library calls in the API.
  • Matt also noted that this approach has more of a method-level granularity, whereas responsibility-based extraction has more of a class/object-level granularity.
  • We had some discussion about how far to take the skinning and wrapping – for example would you skin and wrap the I/O classes? I’m not entirely sure where I’d draw the line but this does seem like something that could be pushed to an unnecessary extreme.

2. Responsibility-Based Extraction

Feathers says that this approach is better suited to more complicated APIs. He writes:

“When we have a system that looks like nothing but API calls, it helps to imagine that it is one big object and then apply the responsibility-separation heuristics…. We might not be able to move toward a better design immediately, but just the act of identifying the responsibilities can make it easier to make better decisions as we move forward.”

Chapter 16: I Don’t Understand The Code Well Enough To Change It

In this chapter Feathers details some practices that can be used to get a better understanding of the code, including:

  • Sketching out important things that you find in the code and connecting the concepts as you understand them.
  • Effect sketching
  • Printing the code out and marking it up in different colours to indicate different responsibilities. I’ve not done this before but it may be useful when doing responsibility-based extraction.

Feathers also says that you should delete unused code. As Anita quite rightly pointed out, unused code extends beyond just commented-out code and can be hard to detect. For example, there may be if-condition blocks that will never be called in the application, but may be unit tested as though they are.
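A contrived example of the kind of dead branch Anita described — unreachable through the application, yet kept looking “used” by a unit test (the names are invented):

```java
class OrderLine {
    int total(int quantity, int unitPrice) {
        // Suppose upstream validation means no production caller ever passes a
        // negative quantity, so this branch is dead -- but a unit test that
        // exercises it makes the branch look alive to coverage tools.
        if (quantity < 0) {
            return 0;
        }
        return quantity * unitPrice;
    }
}
```

Nothing about the code itself marks the branch as dead; only knowledge of the callers does, which is exactly why this sort of unused code is hard to detect.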

Written by lizdouglass

December 10, 2009 at 11:51 am

Posted in Uncategorized


Nested Diagnostic Contexts

leave a comment »

On my current project we have what I think is a very well thought out logging strategy. We generate a number of logs that serve a variety of different purposes. One thing that is common across all of them is that they are configured to print information from Nested Diagnostic Contexts (NDCs). An “NDC is an object that Log4j manages per thread as a stack of contextual information.” There is a very clear and short example of using an NDC and the output it can produce here.
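To make the per-thread-stack idea concrete, here is a hypothetical, dependency-free sketch of an NDC-like class. Log4j’s real NDC differs internally, but it offers the same push/pop/clear/remove shape:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Iterator;

// Hypothetical sketch of the NDC idea: a stack of context strings per thread.
class MiniNdc {
    private static final ThreadLocal<Deque<String>> STACK =
            ThreadLocal.withInitial(ArrayDeque::new);

    static void push(String context) { STACK.get().push(context); }

    static String pop() { return STACK.get().pop(); }

    // The full stack, outermost context first -- roughly what a %x
    // conversion pattern would print for the current thread.
    static String get() {
        StringBuilder sb = new StringBuilder();
        for (Iterator<String> it = STACK.get().descendingIterator(); it.hasNext(); ) {
            sb.append(it.next());
            if (it.hasNext()) sb.append(' ');
        }
        return sb.toString();
    }

    static void clear() { STACK.get().clear(); }

    // Discard the per-thread storage entirely, so the memory can be reclaimed.
    static void remove() { STACK.remove(); }
}
```

The ThreadLocal is what makes the context safe in a servlet container: each request thread sees only the context it pushed, which is why the interceptor below can push in preHandle and clean up in afterCompletion without the threads interfering.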

In our application we use the NDC class in lots of places. For example we have a LoggingContextInterceptor that creates and destroys an NDC for our controller methods:

public class LoggingContextInterceptor extends HandlerInterceptorAdapter {

    private final UrlPathHelper helper = new UrlPathHelper();

    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler)
            throws Exception {
        NDC.push(request.getMethod() + " " + helper.getPathWithinApplication(request));
        return true;
    }

    public void afterCompletion(HttpServletRequest request, HttpServletResponse response, Object handler, Exception ex)
            throws Exception {
        NDC.clear();
        NDC.remove();
    }
}


Note that in the afterCompletion method above we are calling NDC.remove as well as NDC.clear. According to the API documentation the NDC clear method will empty any context information. The documentation also states that you need to call the remove method when exiting the thread because “this ensures that the memory used by the thread can be freed by the Java garbage collector”.

Inside our controllers we may push and pop additional information on and off the NDC, as well as log various events. A typical controller method in our app looks a bit like this:

    @RequestMapping(value = "/member/{memberNumber}/foostrategy", method = RequestMethod.PUT)
    public View updateFoo(@PathVariable("memberNumber") String memberNumber,
                          InputStream requestInputStream) throws IOException {

        // some processing omitted
        logger.info("Updating " + strategy);
        service.update(strategy, fundCode);

        return StatusCodeView.ok();
    }

The logged message from this controller includes the NDC info:
2009-12-02 11:00:50,806 INFO [15554445@qtp-10481519-2 PUT /member/03003522/foostrategy] [FooController] – Updating FooStrategy[memberNumber=1234….]

Where the configuration for the Appender is:
log4j.appender.myAppender.layout.ConversionPattern=%d{ISO8601} %-5p [%t %x] [%c{1}] – %m%n

Logging with NDCs has proven to be very useful for helping the team understand and reproduce errors very quickly.

Written by lizdouglass

December 9, 2009 at 10:29 am

Posted in Uncategorized


Passing by Name in Scala

with one comment

Last week I wrote a little Scala application on a sunny Sunday afternoon. My aim was to improve my beginner level Scala skills, including my knowledge of the language and the testing tools available.

I hit a snag on one unit test and it took the minds of Ben and Mark to help unravel the mystery…. This is the test:

import org.scalatest.matchers.ShouldMatchers
import org.scalatest.Spec

class RobotSpec extends Spec with ShouldMatchers {
  describe("A robot") {
    //other tests omitted
    it("should be able to turn to the right multiple times") {
      val robot = new Robot(1, 2, North)
      robot.turnRight()
      robot.toString should equal("1 2 E")
      robot.turnRight()
      robot.toString should equal("1 2 S")
    }
  }
}


class Robot(var xPosition: Int, var yPosition: Int, var direction: Direction) {
  def turnRight(): Unit = {
    direction = direction.getRightDirection
  }
  //other methods omitted
  override def toString(): String = {
    "%d %d %s".format(xPosition, yPosition, direction.abbreviation)
  }
}

sealed abstract case class Direction(abbreviation: Char, leftDirection: Direction, rightDirection: Direction) {
  def getLeftDirection: Direction = leftDirection
  def getRightDirection: Direction = rightDirection
}

case object North extends Direction('N', West, East)
case object West extends Direction('W', South, North)
case object East extends Direction('E', North, South)
case object South extends Direction('S', East, West)

Interestingly this test fails with a Null Pointer Exception when we try to turnRight the second time. Printing out some information about the robot’s direction variable inside the turnRight method gives some clues:

On the first call to turnRight:
“direction: North left: West right: East”

On the second call to turnRight:
“direction: East left: null right: null”

Why are the left and right directions for East null? You can see that each Direction case object has a dependency on two other Direction case objects, meaning that we have circular dependencies. In order to evaluate the left direction of East we need North, which in turn references East. What’s interesting for me is how null has been used to break the circular dependency and allow compilation to succeed.

So how did we get the test passing? The answer was to use pass by name (or call by name):

sealed abstract class Direction(abbreviation: Char, leftDirection: () => Direction, rightDirection: () => Direction) {
  def getAbbreviation: Char = abbreviation
  def getLeftDirection: Direction = leftDirection()
  def getRightDirection: Direction = rightDirection()
}

case object North extends Direction('N', () => West, () => East)
case object West extends Direction('W', () => South, () => North)
case object East extends Direction('E', () => North, () => South)
case object South extends Direction('S', () => East, () => West)

Now the left and right directions are evaluated only when they are used in the turnRight and turnLeft methods.
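The same fix translates outside Scala. Here is a hypothetical Java analogue (the class and method names are my own) that defers the circular references with Supplier thunks, so each neighbour is only looked up after all four instances exist:

```java
import java.util.function.Supplier;

// Hypothetical Java analogue of the Scala fix: each Direction holds Suppliers
// for its neighbours, so the circular references are only followed on use,
// after all four constants have been initialised.
class Direction {
    static final Direction NORTH = new Direction('N', () -> Direction.WEST,  () -> Direction.EAST);
    static final Direction WEST  = new Direction('W', () -> Direction.SOUTH, () -> Direction.NORTH);
    static final Direction EAST  = new Direction('E', () -> Direction.NORTH, () -> Direction.SOUTH);
    static final Direction SOUTH = new Direction('S', () -> Direction.EAST,  () -> Direction.WEST);

    private final char abbreviation;
    private final Supplier<Direction> left;
    private final Supplier<Direction> right;

    private Direction(char abbreviation, Supplier<Direction> left, Supplier<Direction> right) {
        this.abbreviation = abbreviation;
        this.left = left;
        this.right = right;
    }

    char abbreviation() { return abbreviation; }

    Direction turnLeft()  { return left.get(); }   // evaluated only now, never at construction
    Direction turnRight() { return right.get(); }
}
```

The lambdas capture nothing at construction time; the qualified field reads inside them only happen when turnLeft/turnRight is called, by which point class initialisation has completed — the same deferral the Scala thunks provide.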

Written by lizdouglass

December 8, 2009 at 10:38 am

Posted in Uncategorized
