Liz Douglass


Things that made me go hmmmm…


Over the last couple of months I’ve been working on a Scala project. There is nothing quite like immersing yourself in a language to increase your knowledge. Along the way I figure I must have made every possible syntax mistake. This post is about three gotchas – details of the language that have tripped me up.

Method invocation with parentheses

As a general rule, it’s not necessary to use parentheses when invoking Scala methods that take either zero or one argument. There are two cases where this does not apply, however:

1. Default arguments:
Soon after making the switch over from Scala 2.7.7 to 2.8.0 we made our first attempt at using a default parameter. It went something like this:

object MyObject {
  def something(someArg: String = "Fred"){ println(someArg) }
}

class Foo() {
  MyObject something
}

This results in this error:

error: missing arguments for method something in object MyObject;
follow this method with `_' if you want to treat it as a partially applied function
         MyObject something

As pointed out here “If a method definition includes parentheses, when you call it you must also use parentheses. Presumably this still holds if all the arguments to the method have default values.”
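Restoring the parentheses makes the call compile. A minimal sketch (the method returns its argument here, rather than printing, purely so the behaviour is checkable):

```scala
object MyObject {
  // the definition has a parameter list, so call sites need parentheses too
  def something(someArg: String = "Fred"): String = someArg
}
```

Both `MyObject.something()` and `MyObject.something("Barney")` compile; the bare `MyObject something` does not.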

2. Using println:

scala> println 10
<console>:1: error: ';' expected but integer literal found.
       println 10
scala> println(10)

According to Programming In Scala: “Note that (the no parentheses) syntax only works if you explicitly specify the receiver of the method call. You cannot write ‘println 10’, but you can write ‘Console println 10’.”
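Once the receiver is explicit, this operator (infix) notation works for any single-argument method, not just println; a small illustration:

```scala
object InfixDemo {
  def demo: Boolean = {
    Console println 10       // compiles: explicit receiver, single argument
    List(1, 2, 3) contains 2 // the same notation on an ordinary library method
  }
}
```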


In my opinion one of the best things about Scala is the higher-order methods that are defined on the TraversableOnce trait. I’d use a couple of them on any given day. I’ve often needed to perform an operation on each key-value pair in a Scala Map. On more than one occasion I’ve started off writing something like this:

val myMap = Map("hay" -> "stack", "killer" -> "whale")
myMap.filter((key, value) => key == "killer").map((key, value) => value)

Which results in this error:

error: wrong number of parameters; expected = 1

Doing this another way using a for expression gives a better result however:

scala> for((key,value) <- myMap if (key == "killer")) yield(value)
res19: scala.collection.immutable.Iterable[java.lang.String] = List(whale)

Programming In Scala explains that the Scala compiler translates all for expressions into combinations of higher order methods:
“A for expression of the form:
for (x <- expr1 if expr2) yield expr3
is translated to:
expr1 filter (x => expr2) map (x => expr3)”

and: “A for expression of the form:
for ((x1, ..., xn) <- expr1) yield expr2
translates to:
expr1.map { case (x1, ..., xn) => expr2 }”

So if I want to use higher order methods as I attempted to do first, I need:

scala> myMap.filter({case(key, value) => key == "killer"}).map({case(key, value) => value})
res29: scala.collection.immutable.Iterable[java.lang.String] = List(whale)

Or the equivalent shorter version:

scala> myMap.filter(_._1 == "killer").map(_._2)
res30: scala.collection.immutable.Iterable[java.lang.String] = List(whale)
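As an aside, the filter-then-map pair can be collapsed into a single traversal with `collect`, which takes a partial function and therefore accepts the case syntax directly:

```scala
object CollectExample {
  val myMap = Map("hay" -> "stack", "killer" -> "whale")

  // one pass: keep the matching entries and project out their values
  val values = myMap.collect { case (key, value) if key == "killer" => value }
}
```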


A java.lang.Boolean is not implicitly converted to a Scala Boolean. You need to unbox the java.lang.Boolean to a primitive boolean, using the booleanValue method, for the assignment to work:

scala> val javaBoolean: java.lang.Boolean = true
javaBoolean: java.lang.Boolean = true

scala> val scalaBoolean: Boolean = javaBoolean
<console>:13: error: type mismatch;
 found   : java.lang.Boolean
 required: scala.Boolean
       val scalaBoolean: Boolean = javaBoolean

scala> val scalaBoolean: Boolean = javaBoolean.booleanValue
scalaBoolean: Boolean = true
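If the booleanValue calls get noisy, one option is to bring your own implicit conversion into scope (the JavaBooleanSupport object and its method name below are made up for the sketch):

```scala
object JavaBooleanSupport {
  // with this implicit in scope, a java.lang.Boolean unboxes automatically
  implicit def javaToScalaBoolean(b: java.lang.Boolean): Boolean = b.booleanValue
}
```

After `import JavaBooleanSupport._` the original assignment `val scalaBoolean: Boolean = javaBoolean` compiles unchanged.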

Written by lizdouglass

December 15, 2010 at 9:25 am


Scala Multiple Constructors


A couple of weeks ago I created a new type of exception for use in our application. If I’d been using Java I would have written something simple like this:

public class MyException extends RuntimeException {

    public MyException(String message) {
        super(message);
    }

    public MyException(String message, Throwable throwable) {
        super(message, throwable);
    }
}
The Scala multiple constructor model is different to the Java one, as described in this article. In Scala, only the primary constructor can call a superclass constructor, meaning that it’s possible to invoke only one superclass constructor. The code above makes calls to two constructors on the RuntimeException class. So how can we write a Scala equivalent? Fortunately one of the answers posted to this question explains a pattern for doing this:

object MyException {
  private def create(message: String): RuntimeException = new RuntimeException(message)

  private def create(message: String, throwable: Throwable): RuntimeException = new RuntimeException(message, throwable)
}

class MyException private (exception: RuntimeException) {
  def this(message: String) = this(MyException.create(message))

  def this(message: String, throwable: Throwable) = this(MyException.create(message, throwable))
}

I think the pattern is really neat, but certainly a lot more complicated than the Java equivalent. I guess this is one of the cases where using Java classes in Scala can be tricky.
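For this particular case there is also a lighter option: because RuntimeException has a (message, cause) constructor that covers both Java overloads, a single Scala constructor with a defaulted cause does the job. A sketch, assuming a null cause is acceptable:

```scala
// the two Java constructors collapse into one primary constructor with a default
class MyException(message: String, throwable: Throwable = null)
  extends RuntimeException(message, throwable)
```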

Written by lizdouglass

December 15, 2010 at 9:24 am


Scala Singleton Objects


There are two types of singleton objects in Scala – companion objects and standalone objects. According to Programming in Scala:

“A singleton object definition looks like a class definition, except instead of the keyword class you use the keyword object… When a singleton object shares the same name with a class, it is called that class’s companion object. You must define both the class and its companion object in the same source file. The class is called the companion class of the singleton object. A class and its companion object can access each other’s private members… A singleton object that does not share the same name with a companion class is called a standalone object. You can use standalone objects for many purposes, including collecting related utility methods together, or defining an entry point to a Scala application”

Whilst I like the pattern of separating what would otherwise be static methods in Java into companion objects, I do find at times that they are a bit unintuitive, specifically in these two cases:

1. As shown in the brief example below it’s necessary to include the name of the companion object when invoking one of its methods, even from within the companion class. Coming from a Java background and therefore not being used to having static methods in a separate object, I’ve found that I’ve on occasion forgotten to do this.

class Foo {

  def bar {
    val name = Foo.createName // the object name is required, even inside the companion class
  }
}

object Foo {

  def createName: String = Math.random.toString
}

Programming in Scala has an example with a slightly neater way of handling this situation – including an import statement at the top of the companion class:

class Foo {
  import Foo._

  def bar {
    val name = createName // no qualifier needed once the object's members are imported
  }
}

object Foo {

  def createName: String = Math.random.toString
}


2. Along a similar line, I have also been caught out trying to invoke the methods of a companion object via an instance of the associated companion class. The code below shows how it’s still necessary to call the createName method on the Foo companion object, even though we have the someFoo Foo instance:

class Foo {

  def bar(name: String){
    println(name)
  }
}

object Foo {

  def createName: String = Math.random.toString
}

class Consumer {

  val someFoo = new Foo
  someFoo.bar(Foo.createName) // someFoo.createName would not compile
}

Written by lizdouglass

December 15, 2010 at 9:24 am


Overloaded methods in Scala


I recently discovered that it’s not possible to have default parameters on more than one overloaded method alternative in Scala. The methods that I was working on looked a bit like this:

class Bar() {
  def foo(data: Map[String, String], list: String, field: String = "username"): Boolean = {
      //do some processing
      true
  }

  def foo(data: String, list: String, field: String = "username"): Boolean = {
      //do some processing
      true
  }
}
Given that it’s possible to work out which alternative needs to be called from the type of the first argument, I incorrectly assumed that these declarations would be OK. As a work-around I created another pair of overloaded methods (see below). If you know of a better alternative then please let me know.

class Bar() {
  def foo(data: Map[String, String], list: String, field: String): Boolean = {
      //do some processing
      true
  }

  def foo(data: String, list: String, field: String): Boolean = {
      //do some processing
      true
  }

  def foo(data: Map[String, String], list: String): Boolean =
      foo(data, list, "username")

  def foo(data: String, list: String): Boolean =
      foo(data, list, "username")
}

Written by lizdouglass

December 15, 2010 at 9:23 am


Scalatra, Scalate and Scaml


A while ago I set about creating a webapp for monitoring the status of the various Quartz jobs that we use to keep our main application ticking. It was put together quickly using Simple Build Tool and Scalatra. I needed a templating engine and noticed that Scalatra has support for Scalate (although at the time the support was only experimental). I decided to give it a try. The first step was to add the dependencies for Scalate and Scalatra-Scalate into our build file:

val scalatraScalate = "org.scalatra" % "scalatra-scalate_2.8.0" % "2.0.0.M1"
val scalateCore = "org.fusesource.scalate" % "scalate-core" % "1.2"

I soon ran into the Scalatra issue with using Scalate and Logback and instead decided to use Scalate on its own. Referring to the Scalate Embedding Guide, I created an instance of the Scalate TemplateEngine class in a trait that is extended by my Scalatra servlet:

trait ScalateTemplateEngine {
   def render(templatePath: String, context: Map[String, Any], response: HttpServletResponse) = {
       val templateEngine = new TemplateEngine
       val template = templateEngine.load(templatePath)

       val buffer = new StringWriter()
       val renderContext = new DefaultRenderContext(templateEngine, new PrintWriter(buffer))

       context.foreach({case (key, value) => renderContext.attributes(key) = value})

       template.render(renderContext)              // render into the buffer
       response.getWriter.write(buffer.toString)   // write the rendered page to the response
   }
}

Then, referring to this excellent blog post, I created an index.scaml file. The scaml file (below) is concise and reasonably readable. The only hiccups I encountered in making it were:

  1. Figuring out the syntax and correct indenting of the for loop
  2. Realising that all the values in the map are automatically converted to Options (hence the many get calls).
-@ val title: String = "Scheduler"
-@ val header: String = "Scheduler"
-@ val quartzStatus: String = "No quartz status information available"
-@ val scheduledTasks: List[Map[String, String]] = Nil
-@ val quartzInformation: String = "No quartz information available"
!!! 5
    %title= title
      != header
      != quartzStatus

    %table{:border => 1}
        %td Task
        %td Status
        %td Quantity
        %td Last Attempt Start Time
        %td Last Attempt Finish Time
        %td Last Successful Run Start Time
        %td Last Successful Run Finish Time
        %td Last Error
        %td Action
      = for(scheduledTask <- scheduledTasks)
          %td= scheduledTask.get("name").get
          %td= scheduledTask.get("status").get
          %td= scheduledTask.get("quantity").get
          %td= scheduledTask.get("lastRunStart").get
          %td= scheduledTask.get("lastRunFinish").get
          %td= scheduledTask.get("lastSuccessStart").get
          %td= scheduledTask.get("lastSuccessFinish").get
          %td= scheduledTask.get("lastException").get
              %input(type="submit" name={scheduledTask.get("action").get} value="Run now")

Written by lizdouglass

December 15, 2010 at 9:22 am


Using Mockito in a Scala unit test


Our project has been using ScalaTest for unit and integration testing. For some of our unit tests we have been using the Mockito mocking library. Recently I was caught out on two occasions while writing a Scala unit test that had some mocked classes.

The first head scratching moment was caused by a verification statement like this one:

verify(myClass).createSomething(org.mockito.Matchers.eq(name), anyString)

I had added an import statement for the Matchers class and expected to be cooking with gas… but then this error appeared:

[error] MyClass.scala:47: type mismatch;
[error]  found   : Boolean
[error]  required: String
[error]     verify(myClass).createSomething(eq(name), anyString)
[error]                                      ^
[error] one error found

Why was the eq method returning a Boolean and not the type of the name variable (i.e. String)? I had written similar things in Java many times before. I realised that the eq method being used was the one defined on the Scala AnyRef class:

def eq(arg0: AnyRef): Boolean

This was simply fixed by explicitly calling the eq method from the Matchers class:

verify(outboundEmailService).createSomething(org.mockito.Matchers.eq(listName), anyString)

A little later on I ran into another problem with a verification. Once again, I thought there was not much to it:

verify(myClass).sendToGroup(org.mockito.Matchers.eq("foo"), startsWith(name), org.mockito.Matchers.eq(something))

But then….

Invalid use of argument matchers!
0 matchers expected, 3 recorded.
This exception may occur if matchers are combined with raw values:
someMethod(anyObject(), "raw String");
When using matchers, all arguments have to be provided by matchers.
For example:
someMethod(anyObject(), eq("String by matcher"));

I eventually realised that this meant that I had not provided matchers for all the parameters. In fact the method does have a fourth parameter, but it has a default value. I hadn’t expected to need to provide a matcher for the fourth parameter but as the docs state “If you are using argument matchers, all arguments have to be provided by matchers.”
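The reason is that Scala fills default arguments in at the call site, so the mocked method really is invoked with four arguments and Mockito therefore expects four matchers. A plain-Scala sketch of that mechanism (the Recorder class and its parameter names are hypothetical):

```scala
class Recorder {
  var lastArgs: Seq[Any] = Nil

  // the fourth parameter has a default, like the method under test
  def sendToGroup(group: String, name: String, something: String, retries: Int = 3): Unit = {
    lastArgs = Seq(group, name, something, retries)
  }
}
```

A call with three arguments still records four values, which is exactly what the mock sees.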

Written by lizdouglass

December 15, 2010 at 8:58 am


Migrating to Scala 2.8


A few months ago Scala 2.8.0 was released. We migrated our project from version 2.7.7 quite soon after the announcement. Making the switch was quite straightforward; in fact all we needed to do was change the build.scala.versions property in our Simple Build Tool file. Admittedly it took several hours to fix all the compilation errors, but once this was done we found we were able to make our code base more readable because of two shiny new things in particular:

1. Collections:

As I’ve written about before, we are using MongoDB for persistence. We are also using the MongoDB Java driver. This library makes heavy use of BasicDBObjects. These are the simplest implementation of the DBObject interface. This interface represents “A key-value map that can be saved to the database”. BasicDBObjects are used for more than just inserting data. In fact they are also used extensively in many of the methods defined on the DBCollection class. For example, this is the definition of the update method:

WriteResult update(DBObject q, DBObject o)

(Source: class com.mongodb.DBCollection)
(q is the query used to find the correct element and o is the update operation that should be performed)

We often use the BasicDBObject constructor that takes a Java map. When we were using Scala 2.7.7 we had a library to handle the conversion of maps from Scala to Java. We ended up with a lot of repository methods that were littered with calls to asJava, like the one below:

import org.scala_tools.javautils.Imports._
import com.mongodb._
import org.bson.types.ObjectId
import java.util.Date

def updateLastLogin(personId: String) {
   val criteria = new BasicDBObject(Map("_id" -> new ObjectId(personId)).asJava)
   val action = new BasicDBObject(Map("$set" -> new BasicDBObject(Map("lastLogin" -> new Date).asJava)).asJava)
   collection.update(criteria, action)
}

Now in Scala 2.8 there is a new mechanism for converting collection types from Scala to Java and thankfully all the asJava calls have disappeared from our codebase. Now we have cleaner functions like this one:

def markLogin(memberId: String) = set(new MemberId(memberId).asObjectId, "lastLogin" -> new Date)


def set(id: ObjectId, keyValues: (String, Any)*) = collection.update(Map("_id" -> id), $set (keyValues: _*), false, false)

Note that we are now also using the Casbah library. The update method above is defined on the MongoCollectionWrapper in the Casbah library.


We have a Scala backend API and a Django frontend. The backend serves JSON to our frontend. We are using the Lift Json library both for the serialising in the backend webapp and for the de-serialising in our integration tests. Some of our tests use the Lift Json Library “LINQ” style JSON parsing, while others use the “Xpath” style (both of which are described here):

LINQ style:

import net.liftweb.json.JsonParser._
import net.liftweb.json.JsonAST._

val (data, status) = ApiClient.get(myUri)
val json = parse(data)

val myValues = for{JField("something", JString(myValue)) <- (json \ "somethingElse")} yield myValue
myValues should have length (3)
myValues should (
    contain("tennis") and contain("phone")
)

Xpath style:

scenario("successfully view something") {
   val json = getMemberLiftJson(someId)
   assert((json \ "bar").extract[String] === "ExpectedString")
}

The Xpath style is readable and compact and we’ve retained quite a few of these types of tests in our code base. Personally I find the LINQ style quite confusing; I need to refer to the Lift project github site every time I try to use it. Scala 2.8 now has built-in JSON support and we have instead been using this for parsing in the tests. Specifically, we have been using the parseFull method like so:

scenario("bar") {
    val (content, status) = ApiClient.get(someUri)
    JSON.parseFull(content) match {
        case Some(data : List[Any]) => {
            data.length should be(4)
            data should not contain("Internet")
        }
        case json => fail("Couldn't find a list in response %s".format(json))
    }
}
Go Scala 2.8!

Written by lizdouglass

November 1, 2010 at 9:48 pm


Migrating from Maven to Simple Build Tool


A while ago I moved our Scala project build from Maven to Simple Build Tool (sbt).

Why sbt?

  • sbt is made for Scala projects. The buildfile is written in Scala and is as concise as the Buildr ones that I have worked with previously.
  • sbt has several project types including the basic and web project types. Each has multiple build tasks/actions defined and all of these can be customised.
  • sbt has support for dependencies to be declared in either Ivy or Maven configuration files. All our dependencies were already specified in pom files, so not having to migrate these (at least straight away) made the transition to sbt easier.
  • sbt compiles fast. Robert first added sbt into our project so that he could compile the code more quickly than is possible in Intellij.
  • sbt has support for ScalaTest – the framework that we use for all our unit and integration tests. When we were running our ScalaTest tests as part of our Maven build we found that we needed to include the word ‘Test’ somewhere in the class name. Forgetting the requirement had cost one of our project team members several hours on one occasion.
  • We can now run both our webapp and integration tests at the same time. We’d found that it wasn’t possible to configure Maven to do this. We had instead started up a Jetty server from within our integration test project in order to run the web project.
  • sbt promotes faster development. Continuous compilation, testing and redeployment means that our work cycle is faster. This is particularly noticeable when we are working on a feature that requires us to make changes in both our Django front end and Scala backend. We can make changes in the source code in both and have both projects automatically redeploy.

Creating the sbt buildfile:

At the time of the migration to sbt our Scala backend was divided into 3 subprojects:

  • Core : A Scala project with ScalaTest unit tests
  • IntegrationTests – ScalaTest integration tests
  • Webapp : Basic webapp

Part 1: Declare all the sub-projects in the buildfile:

The first step in creating our build file was to declare all three subprojects. The core and integration subprojects are sbt DefaultProjects and have tasks such as compile and test defined. The webapp is an sbt WebappProject and has additional tasks such as jetty-run. Both the webapp and integration projects depend on the core project:

lazy val core = project("my-api-core", "Core", info => new DefaultProject(info))
lazy val webapp = project(webappProjectPath, "Webapp", info => new WebappProject(info), core)
lazy val integration = project("integration", "IntegrationTests", info => new DefaultProject(info), core)

Part 2: Getting the webapp running from sbt:

Although the webapp was declared as a Webapp project, it wasn’t possible to run it without declaring some additional Jetty dependencies. These were specified as inline sbt dependencies in the WebappProject class. This class extends from the sbt DefaultWebProject (please see below). Note that the port and context path can also be specified.

class WebappProject(info: ProjectInfo, port: Int) extends DefaultWebProject(info) {

    val jetty7Webapp = "org.eclipse.jetty" % "jetty-webapp" % "7.0.2.RC0" % "test"
    val jetty7Server = "org.eclipse.jetty" % "jetty-server" % "7.0.2.RC0" % "test"

    override def jettyPort = 8069
    override def jettyContextPath = "/my-api"
}

Part 3: Getting the integration tests running from sbt:

As mentioned above we need to run our webapp project at the same time as our integration test project. This is so that our integration tests can make calls to the webapp project endpoints. In addition to this our integration tests need to have a MongoDB database populated with some test data.

1) Populating the Mongo database for testing:

All our integration test classes extend from a common trait (below). This trait populates the MongoDB database at the start of each test:

trait JsonIntegrationTest extends FeatureSpec with BeforeAndAfterEach with ShouldMatchers {
    implicit val formats = new Formats {
        val dateFormat = DefaultFormats.lossless.dateFormat
    }

    override def beforeEach = {
        IntegrationDB.init
    }
}
The init method populates the database. The name of the MongoDB database is taken from a configuration file:

object IntegrationDB extends Assertions {
    private val config = new ConfigurationFactory getConfiguration("my-api")
    val dbName = config.getStringProperty("mongo.db")
    val db = new Mongo().getDB(dbName)

    def init = {
        val testDataFile: java.lang.String = "mongodb/IntegrationBaseData.js"
        val testDataFileInputStream = getClass.getClassLoader.getResourceAsStream(testDataFile)
        if (testDataFileInputStream != null) {
            eval(scala.io.Source.fromInputStream(testDataFileInputStream).mkString)
        } else {
            fail("a message")
        }
    }

    def eval(js: String) = {
        db.eval(js) // run the test-data script against the database
    }
}

We are using The Guardian Configuration project to manage the config of all of our Scala sub-projects. One of the features of this library is that it enables you to read properties from a number of sources. All of the properties for our projects are Service Domain Properties, meaning that they will be loaded from files on the classpath. The Configuration project library loads the properties from whichever correctly named file it encounters first on the classpath.

As mentioned above, our IntegrationTests project has a dependency on the Core project and therefore the config files of both projects will appear in the classpath. Both projects had a properties file with the same name that specified the mongo.db property. The intention was for the integration test properties to override those in the Core project. This did not work as planned because the ordering of the two config files in the classpath could not be guaranteed. Sbt does allow you to add additional items to the classpath of your project using the +++ method. I did try to promote the integration test properties using this method (below). Unfortunately this did not guarantee classpath order either.

class IntegrationProject(info: ProjectInfo) extends DefaultProject(info) {
    val pathFinder: PathFinder = Path.lazyPathFinder(mainResourcesPath :: Nil)
    override def testClasspath = pathFinder +++ super.testClasspath
}

The work-around was to set a system property called int.service.domain. This is used by the Guardian Configuration project to define the name of the properties file that should be loaded from the classpath. Our integration test project now has a properties file with a different name to the one in the core project. The test action in the IntegrationTests project calls a method to switch to the integration test properties before the tests are run:

Now in the integration tests:

val useIntegrationConfig = true

override def testAction = super.testAction dependsOn (task {setSystemProperty(useIntegrationConfig); None})

private def setSystemProperty(integrationConfiguration: Boolean) = {
    if (integrationConfiguration) System.setProperty("int.service.domain", integrationTestConfigDomain)
    else System.setProperty("int.service.domain", defaultConfigDomain)
}

2) Starting the main web application from the Integration test project:

As I mentioned above, we’d found it hadn’t been possible to start up the webapp as well as run the integration tests using Maven. With sbt it is possible to do this. Another webapp project was declared inside the definition of the IntegrationTests project. This sbt project has a separate output path and jetty port to the main webapp. This enables us to keep the main webapp running and run the integration tests at the same time.

class IntegrationProject(info: ProjectInfo) extends DefaultProject(info) {
    val useIntegrationConfig = true

    lazy val localJettyPort = 8071
    lazy val localWebappOutputPath: Path = "target" / "localWebappTarget"
    lazy val localWebapp = project(webappProjectPath, "IntegrationTestWebapp",
        info => new WebappProject(info, localJettyPort, localWebappOutputPath, useIntegrationConfig), core)

    override def testAction = super.testAction dependsOn (task {setSystemProperty(useIntegrationConfig); None}, localWebapp.jettyRestart)

    lazy val startLocalWebapp = localWebapp.jettyRestart
    lazy val stopLocalWebapp = localWebapp.jettyStop
}

class WebappProject(info: ProjectInfo, port: Int, targetOutputDir: Path, useIntegrationTestConfiguration: Boolean) extends DefaultWebProject(info) {
    override def outputPath = if (targetOutputDir != "default") targetOutputDir else super.outputPath

    val jetty7Webapp = "org.eclipse.jetty" % "jetty-webapp" % "7.0.2.RC0" % "test"
    val jetty7Server = "org.eclipse.jetty" % "jetty-server" % "7.0.2.RC0" % "test"

    override def jettyPort = port
    override def jettyContextPath = "/my-api"
    override def jettyRunAction = super.jettyRunAction dependsOn (task {setSystemProperty(useIntegrationTestConfiguration); None})
}

Note that the testAction for the IntegrationTests project starts up the webapp but does not shut it down after the tests have finished. I did try several techniques for getting this to happen including the one below.  This attempt started jetty, stopped jetty then tried to run the tests. Please let me know if you have any better ideas 🙂

def startJettyAndTestAction = super.testAction dependsOn (localWebapp.jettyRun)
override def testAction = task{None} dependsOn (startJettyAndTestAction, localWebapp.jettyStop)

Written by lizdouglass

November 1, 2010 at 9:47 pm


Passing by Name in Scala


Last week I wrote a little Scala application on a sunny Sunday afternoon. My aim was to improve my beginner level Scala skills, including my knowledge of the language and the testing tools available.

I hit a snag on one unit test and it took the minds of Ben and Mark to help unravel the mystery… This is the test:

import org.scalatest.matchers.ShouldMatchers
import org.scalatest.Spec

class RobotSpec extends Spec with ShouldMatchers {
  describe("A robot") {
    //other tests omitted
    it("should be able to turn to the right multiple times") {
      val robot = new Robot(1, 2, North)
      robot.turnRight()
      robot.toString should equal("1 2 E")
      robot.turnRight()
      robot.toString should equal("1 2 S")
    }
  }
}

class Robot(var xPosition: Int, var yPosition: Int, var direction: Direction) {
  def turnRight(): Unit = {
    direction = direction.getRightDirection
  }

  //other methods omitted

  override def toString(): String = {
    "%d %d %s".format(xPosition, yPosition, direction.abbreviation)
  }
}

sealed abstract case class Direction(abbreviation: Char, leftDirection: Direction, rightDirection: Direction) {
  def getLeftDirection: Direction = leftDirection
  def getRightDirection: Direction = rightDirection
}

case object North extends Direction('N', West, East)
case object West extends Direction('W', South, North)
case object East extends Direction('E', North, South)
case object South extends Direction('S', East, West)

Interestingly this test fails with a NullPointerException when we try to turnRight the second time. Printing out some information about the robot’s direction variable inside the turnRight method gives some clues:

On the first call to turnRight:
“direction: North left: West right: East”

On the second call to turnRight:
“direction: East left: null right: null”

Why are the left and right directions for East null? You can see that each Direction case object has a dependency on two other Direction case objects, meaning that we have circular dependencies. In order to evaluate the left direction of East we need North, which in turn references East. What’s interesting for me is how null has been used to break the circular dependency and allow compilation to succeed.

So how did we get the test passing? The answer was to use pass by name (or call by name):

sealed abstract class Direction(abbreviation: Char, leftDirection: () => Direction, rightDirection: () => Direction) {
  def getAbbreviation: Char = abbreviation
  def getLeftDirection: Direction = leftDirection()
  def getRightDirection: Direction = rightDirection()
}

case object North extends Direction('N', () => West, () => East)
case object West extends Direction('W', () => South, () => North)
case object East extends Direction('E', () => North, () => South)
case object South extends Direction('S', () => East, () => West)

Now the left and right directions are evaluated only when they are used in the turnRight and turnLeft methods.
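Strictly speaking the fix above uses explicit function values rather than by-name parameters. Scala also has true by-name parameters (`=> Direction`), which express the same laziness without the `() =>` noise; a sketch of that variant, wrapped in an object only to keep it self-contained:

```scala
object Compass {
  sealed abstract class Direction(val abbreviation: Char, left: => Direction, right: => Direction) {
    // the by-name arguments are evaluated only when these methods are called
    def getLeftDirection: Direction = left
    def getRightDirection: Direction = right
  }

  case object North extends Direction('N', West, East)
  case object West extends Direction('W', South, North)
  case object East extends Direction('E', North, South)
  case object South extends Direction('S', East, West)
}
```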

Written by lizdouglass

December 8, 2009 at 10:38 am


Frenzy Fixtures in Scala


Following on from my post about Friday Feedback Frenzies, Mark, Ben and I looked at how to work out frenzy pairs. The idea with the frenzies is that you exchange feedback with two people each week, and that you rotate the people that you do this with. When I have organised these previously I have generally just picked people at random. Mark, Ben and I decided to generate a frenzy fixture (if you will).

Mark came up with the idea of using something like a football/tennis game scheduling algorithm. We found a Round Robin cyclic algorithm to get us started. Our first step was to get this algorithm working in Scala:

      val numberOfPlayers = 12
      for (round <- 1 to 11) {
        for (court <- 1 to 6) {
          print((if(court == 1) 1 else (round + court - 2) % (numberOfPlayers - 1) + 2,
                (numberOfPlayers - 1 + round - court) % (numberOfPlayers - 1) + 2))
        }
        println()
      }

The output from this is below. Each row represents a round and each bracketed pair in a row represents a match-up of two players on a court.


Next we moved from everything being 1 indexed to being 0 indexed. We also changed the fixed player, who always plays on court 0, to be the last one in the player list. This was done only because it made the loops more readable:

      val numberOfPlayers = 12
      for (round <- 0 to 10) {
        for (court <- 0 to numberOfPlayers/2 - 1) {
          print((if(court == 0) numberOfPlayers - 1 else (court + round) % (numberOfPlayers - 1),
                ((numberOfPlayers - 1) - court + round) % (numberOfPlayers - 1)))
        }
        println()
      }

Then the output was:


Our next step was to switch domains from sports to feedback frenzies:

val participants = 12
val weeks = 10
for (frenzy <- 0 until weeks) {
  for (pair <- 0 until participants/2) {
    print((if(pair == 0) participants - 1 else (pair + frenzy) % (participants - 1),
          ((participants - 1) - pair + frenzy) % (participants - 1)))
  }
  println()
}

Then we changed to creating a collection of all the pairs:

      val participants = 12
      val weeks = 10
      val frenzyPairs = for {
        frenzy <- 0 until weeks
        pair <- 0 until participants/2
      } yield (((if(pair == 0) participants - 1 else (pair + frenzy) % (participants - 1))),
                (((participants - 1) - pair + frenzy) % (participants - 1)))

And the output…

RangeM((11,0), (1,10), (2,9), (3,8), (4,7), (5,6))
RangeM((11,1), (2,0), (3,10), (4,9), (5,8), (6,7))
RangeM((11,2), (3,1), (4,0), (5,10), (6,9), (7,8))
RangeM((11,3), (4,2), (5,1), (6,0), (7,10), (8,9))
RangeM((11,4), (5,3), (6,2), (7,1), (8,0), (9,10))
RangeM((11,5), (6,4), (7,3), (8,2), (9,1), (10,0))
RangeM((11,6), (7,5), (8,4), (9,3), (10,2), (0,1))
RangeM((11,7), (8,6), (9,5), (10,4), (0,3), (1,2))
RangeM((11,8), (9,7), (10,6), (0,5), (1,4), (2,3))
RangeM((11,9), (10,8), (0,7), (1,6), (2,5), (3,4))

As Ben quite rightly pointed out, if we take two rows at a time then each person will exchange feedback with two people in the same frenzy. Our final step was to group each pair of successive rows. To do this we first moved to having two nested for loops. We thought that this would make it easier to do the grouping, but it ended up being redundant. Here is our final version of the frenzy fixture generator:

      val participants = 12
      val rounds = 10
      val frenzyPairs = for (frenzy <- 0 until rounds)
                      yield for(pair <- 0 until participants/2)
                            yield (((if(pair == 0) participants - 1 else (pair + frenzy) % (participants - 1))),
                                  (((participants - 1) - pair + frenzy) % (participants - 1)))

      val (evenOnes,oddOnes) = frenzyPairs.toList.zipWithIndex.partition(_._2%2 == 0)

Our final output was:

((RangeM((11,0), (1,10), (2,9), (3,8), (4,7), (5,6)),0),(RangeM((11,1), (2,0), (3,10), (4,9), (5,8), (6,7)),1))
((RangeM((11,2), (3,1), (4,0), (5,10), (6,9), (7,8)),2),(RangeM((11,3), (4,2), (5,1), (6,0), (7,10), (8,9)),3))
((RangeM((11,4), (5,3), (6,2), (7,1), (8,0), (9,10)),4),(RangeM((11,5), (6,4), (7,3), (8,2), (9,1), (10,0)),5))
((RangeM((11,6), (7,5), (8,4), (9,3), (10,2), (0,1)),6),(RangeM((11,7), (8,6), (9,5), (10,4), (0,3), (1,2)),7))
((RangeM((11,8), (9,7), (10,6), (0,5), (1,4), (2,3)),8),(RangeM((11,9), (10,8), (0,7), (1,6), (2,5), (3,4)),9))

The end result could be tidied up a little but it does provide us with a frenzy fixture for a given number of participants. Given more time we would have used names for participants rather than just numbers.
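For the record, the grouping of successive rounds can also be done directly with `grouped(2)`, which avoids the zipWithIndex/partition step; a sketch using the same fixture expression:

```scala
object FrenzyFixture {
  val participants = 12
  val rounds = 10

  val frenzyPairs = for (frenzy <- 0 until rounds)
    yield for (pair <- 0 until participants / 2)
      yield ((if (pair == 0) participants - 1 else (pair + frenzy) % (participants - 1)),
             ((participants - 1) - pair + frenzy) % (participants - 1))

  // every two consecutive rounds form one feedback frenzy
  val frenzies = frenzyPairs.toList.grouped(2).toList
}
```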

Written by lizdouglass

November 18, 2009 at 11:52 am
