First-class testing code

As programmers, we often hear that test code should be a first-class citizen of the project, meaning that it is developed to the same standards, using the same patterns & practices as your production code. Treating your test code this way should make it easier to use and maintain in the long run. So why does test code get so little attention from developers?

I can’t speak for everyone, but it seems to me that one of the common pain points of software testing is establishing the context in which the tests are run. Before you can verify that the system behaves a certain way in a given set of circumstances, you have to create that set of circumstances. More often than not, setting up the context for a test requires more code than the test itself, and sometimes more than the code being tested. This testing code tends to be somewhat dull, and not very rewarding for the developer. It’s grunt-work, and we don’t like it.

I’ve worked in many projects with radically different approaches to building test contexts, and it seems to me that the more “clever” we try to be, the more it comes back to hurt us later on. Inheritance-based approaches give us a high degree of code reuse, but at the expense of clarity. From any given test, it can be hard to understand the context or “world” in which the test takes place because so much of it is hidden in multiple layers of base classes.

The more successful testing approaches I’ve used have all had one thing in common… simplicity. With that in mind, I’ve been trying to find an approach that provides a high degree of code reuse, but makes it easy to see and understand the context. My current approach attacks these two problems via separate, but complementary, techniques.

Before getting into the details, I’d like to review the evolution of a typical testing framework. In most projects, the tests start off simply enough, with each test being responsible for its own context. As redundancies start to emerge, they are extracted out to methods which get shared by multiple tests (e.g. CreateAccount). When it becomes clear that some of these methods are needed by tests in other classes, they are typically pushed into either a base class (e.g. EntityTestBase) or some kind of helper class (e.g. EntityTestHelper). Both of these solutions have a tendency to degenerate into unmaintainable “God classes”, and so the Create methods eventually get divided off into their own classes (e.g. AccountTestHelper), or exposed as part of the test class for each individual entity (e.g. AccountTests.CreateAccount). These factory methods begin to sprout a lot of arguments to allow them to be used by a multitude of tests, each with slightly different requirements (e.g. CreateAccount(bool includeAddress, bool includeOrders, bool includeLineItems)). As the number of these arguments increases, they may be combined together as properties of some “options” class to make them easier to deal with (e.g. CreateAccount(CreateAccountOptions options)). This last step is about as sophisticated as test code typically gets out there in the wild, and is representative of the majority of the testing code I’ve seen.

For most cases, this is perfectly adequate, and is the approach I’ve seen used on many projects. The static FooTests.CreateAccount method is available for use by any test that happens to need an Account, and the CreateAccountOptions class makes it obvious what choices are available. Nesting options classes allows us to specify properties of children, grandchildren, etc.
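As a rough sketch of that last step (the class and property names here are hypothetical, not from any real project), nested options classes might look like this:

```csharp
using System.Collections.Generic;

// Hypothetical options classes; the names are illustrative only.
public class CreateAddressOptions
{
    public bool IncludeAddress2 { get; set; }
}

public class CreateOrderOptions
{
    public int LineItemCount { get; set; }
}

public class CreateAccountOptions
{
    // Nesting lets a single test describe children (orders)
    // and grandchildren (line items) in one place.
    public CreateAddressOptions Address { get; set; }
    public List<CreateOrderOptions> Orders { get; set; }
}
```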

What we have at this point is a method, CreateFoo, and its associated parameters, CreateFooOptions. There is a standard software design pattern that fits this functionality almost perfectly: the “Command” pattern. A typical command implementation consists of a set of parameters, and code which uses them to call a method that is usually defined elsewhere. Less “pure” implementations sometimes include the code for the method directly within the command itself, and it is this approach that I will use here.

The Command pattern can also support multi-level undo, which is particularly useful when it comes to cleaning up after integration tests. In most cases, you can simply roll back database transactions to cover your tracks. Sometimes you can’t, though, such as when entities were created by calling remote services that do not support the concept of transactions, or when the system under test creates files. In these cases, having an Undo method which can remember and delete its own test data will be very useful. Undo isn’t always needed, but it’s nice to have around sometimes.

Here is an example command for creating Address entities. I won’t go into the details of the CreateCommand base class here, but the sample project and supporting classes are available on GitHub here.

public class CreateAddressCommand : CreateCommand<Address>
{
    private static int _id = 1;
    public int AddressId { get; set; }
    public string Address1 { get; set; }
    public string Address2 { get; set; }
    public string City { get; set; }
    public string State { get; set; }
    public string Zip { get; set; }

    public override void Execute()
    {
        if (Result == null)
        {
            Result = new Address
            {
                AddressId = AddressId,
                Address1 = Address1,
                Address2 = Address2,
                City = City,
                State = State,
                Zip = Zip
            };

            // Code here to write Address to database
            // (e.g. AddressRepository.Add(Result);)
        }
    }
}
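For reference, here is roughly what a CreateCommand<T> base class could look like. This is only a sketch to make the examples readable; the actual implementation is in the GitHub project, and details like the GetResult and GetResults helpers may differ there.

```csharp
using System.Collections.Generic;

// Illustrative sketch of the CreateCommand<T> base class; see the
// GitHub project for the real implementation.
public abstract class CreateCommand<T> where T : class
{
    // The created entity. Pre-populating it makes Execute a no-op.
    public T Result { get; set; }

    public abstract void Execute();

    // Derived commands override this to clean up whatever Execute created.
    public virtual void Undo()
    {
    }

    // Execute a child command (if present) and return its product.
    protected static TChild GetResult<TChild>(CreateCommand<TChild> command)
        where TChild : class
    {
        if (command == null) return null;
        command.Execute();
        return command.Result;
    }

    // Execute a list of child commands and collect their products.
    protected static List<TChild> GetResults<TChild>(
        IEnumerable<CreateCommand<TChild>> commands) where TChild : class
    {
        var results = new List<TChild>();
        if (commands == null) return results;
        foreach (var command in commands)
        {
            results.Add(GetResult(command));
        }
        return results;
    }
}
```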

To use CreateAddressCommand, a test would create a new instance of the command, fill in the properties the resulting Address object should have, execute the command, and extract the result. Instead of creating an Address, we’ve just created a command to create the Address. So far, this command doesn’t really do anything we couldn’t have done ourselves. In fact, all it has done is add a level of abstraction, and contrary to popular wisdom, that alone hasn’t solved anything. Stay with me, because we’re not done with it yet.
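In code, that sequence might look like this (the values are arbitrary):

```csharp
// Describe the Address we want by filling in the command's properties.
var command = new CreateAddressCommand
{
    AddressId = 42,
    Address1 = "123 Main St.",
    City = "Columbus",
    State = "OH",
    Zip = "43215",
};

// Nothing is created until we execute the command...
command.Execute();

// ...and then the entity is available as the command's Result.
var address = command.Result;
```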

Next, we’ll build a factory to create pre-defined instances of this command. By adding simple static factory methods to the CreateAddressCommand, we can define any number of pre-fabricated commands of various descriptions. You could define as many of these methods for as many scenarios as you like; just make sure to give them descriptive names. For instance, if you were building a system on top of the venerable Northwind database, you might define a CreateCustomerCommand with a factory method called “AlfredsFutterkiste”, which would return a pre-defined Customer object with example orders, line items, and address information that more or less duplicates a subset of the real database data. Here, I’ve defined a factory method that returns a “valid” Address by filling in the fields so as to pass object validation.

public static CreateAddressCommand Valid()
{
    var result = new CreateAddressCommand
    {
        AddressId = _id++,
        Address1 = GetRandom.String(1, 30),
        Address2 = GetRandom.String(1, 30),
        City = GetRandom.String(1, 20),
        State = GetRandom.String(2, 2),
        Zip = GetRandom.String(10, 10),
    };

    return result;
}

These factory methods could return a command for a single object, or for a customer complete with address, order history, and billing information. Each command can leverage other commands to create a usable test context. This is particularly valuable in an Agile environment, where the definition of “valid” may change many times as the project matures. By centralizing the code which creates objects in various states, we should be better able to adapt to changing rules by updating a single factory method instead of a lot of individual unit tests.

Address is a pretty simple “leaf” object. It doesn’t have any children, and is completely unaware of its own parents. Let’s examine a more complex example. This is what a CreateCustomerCommand might look like.

public class CreateCustomerCommand : CreateCommand<Customer>
{
    private static int _id = 1;
    public CreateAddressCommand CreateAddressCommand { get; set; }
    public List<CreateOrderCommand> CreateOrderCommands { get; set; }
    public int CustomerId { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }

    public override void Execute()
    {
        if (Result == null)
        {
            Result = new Customer
            {
                Address = GetResult(CreateAddressCommand),
                CustomerId = CustomerId,
                FirstName = FirstName,
                LastName = LastName,
                Orders = GetResults(CreateOrderCommands),
            };

            // Code here to write Customer to database
            // (e.g. CustomerRepository.Add(Result);)
        }
    }

    public override void Undo()
    {
        base.Undo();

        // Code here to erase Customer from database
        // (e.g. CustomerRepository.Erase(Result.CustomerId);)
        Result = null;
    }

    public static CreateCustomerCommand New()
    {
        return new CreateCustomerCommand();
    }

    public static CreateCustomerCommand None()
    {
        return null;
    }

    public static CreateCustomerCommand NoOrders()
    {
        var result = New();

        result.CreateAddressCommand = CreateAddressCommand.Valid();

        return result;
    }

    public static CreateCustomerCommand Valid()
    {
        var result = New();

        result.CustomerId = _id++;
        result.CreateAddressCommand = CreateAddressCommand.Valid();
        result.CreateOrderCommands = new List<CreateOrderCommand>
        {
            CreateOrderCommand.Valid(),
        };

        return result;
    }
}

There are a few new items to discuss here. The CreateAddressCommand and the CreateOrderCommands collection allow tests to describe various child entities of the parent. The New and None factory methods are added by convention for the sake of clarity and consistency. Each command can expose as many static factory methods as needed to return command instances in a variety of pre-determined configurations such as “New”, “Valid”, or even “WithOpenOrders”. Factory methods can be defined for any situation which would see enough reuse to justify it.

Notice again that the command doesn’t actually create anything until Execute is called. The command hierarchy represents the intent to create objects, and not the objects themselves. As a result, tests have the chance to further manipulate the command hierarchy and make changes before executing it. Also, up to this point, all of the work has happened quickly, and in memory. For unit tests, this is not as much of a concern, but for integration tests this could result in significant savings by allowing tests to “prune” unneeded command branches before they are executed.
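As an illustration of that pruning, an integration test that only cares about the customer record itself could drop the order branch before executing (assuming, as a sketch, that a null child-command list simply produces no children):

```csharp
// Start from the standard "valid" context...
var command = CreateCustomerCommand.Valid();

// ...then prune the order branch so no orders are ever written.
command.CreateOrderCommands = null;

command.Execute();
var customer = command.Result;
```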

This chance to modify the plan also allows us to easily create multiple similar contexts by starting from a common point (such as “Valid”) and adding, removing, or changing the commands that describe it. The addition of a few more methods can make this customization even simpler. Here are some instance methods that modify an existing command prior to execution.

public CreateCustomerCommand WithAddress(Address value)
{
    CreateAddressCommand = new CreateAddressCommand { Result = value };
    return this;
}

public CreateCustomerCommand WithAddress(CreateAddressCommand command)
{
    CreateAddressCommand = command;
    return this;
}

public CreateCustomerCommand WithCustomerId(int value)
{
    CustomerId = value;
    return this;
}

public CreateCustomerCommand WithFirstName(string value)
{
    FirstName = value;
    return this;
}

public CreateCustomerCommand WithLastName(string value)
{
    LastName = value;
    return this;
}

public CreateCustomerCommand WithOrders(IEnumerable<Order> orders)
{
    CreateOrderCommands = new List<CreateOrderCommand>(orders.Count());
    foreach (var order in orders)
    {
        CreateOrderCommands.Add(new CreateOrderCommand { Result = order });
    }
    return this;
}

public CreateCustomerCommand WithOrders(IEnumerable<CreateOrderCommand> commands)
{
    CreateOrderCommands = new List<CreateOrderCommand>(commands);
    return this;
}

These methods make it easy to describe the desired object hierarchy in simple terms like CreateCustomerCommand.Valid().WithAddress(CreateAddressCommand.None()). It’s not English, but it is expressive and clear: I want to start with a valid customer, but make sure the address isn’t filled in. Again, you can define as many of these helper methods as you want.

The Command pattern takes care of the reuse problem, but hasn’t done a lot to increase the readability of our tests. Fortunately, that problem is even easier to solve. There are several BDD-style frameworks out there that seek to remedy the readability problem by enforcing the standard Given/When/Then structure, but none of them do so in a way that I’ve been entirely comfortable with. I wanted something simpler and more streamlined; something that uses the C# language in the ways it was originally designed to be used, rather than forcing a fluent syntax where it doesn’t fit. I maintain that if you ever find yourself defining a class called “It”, you’ve probably made a wrong turn somewhere.

My current solution to the readability problem is to mercilessly apply the concept of “self-documenting code”. My test methods consist of nothing but calls to other methods, whose names all begin with “Given”, “When”, or “Then”. The resulting tests look similar to this:

[TestMethod]
public void CreateAddress_returns_a_valid_Address_by_default()
{
    Given_a_valid_Address();
    When_IsValid_is_called();
    Then_IsValid_is_true();
}

The Given_a_valid_Address method encapsulates the creation and execution of the CreateAddressCommand, and saves the result to an appropriate backing variable. When_IsValid_is_called exercises the code we want to test, in this case the Address validation, and assigns the result to another backing variable. Finally, Then_IsValid_is_true performs the actual testing of the results of the first two methods.

There’s not really any more code to show here. It’s just a simple idea: factor each step out to its own method with an intelligent name. You can move these methods to a base class if you want to share them across multiple test classes, but you won’t lose your way trying to remember exactly what the context is, because it’s explicitly listed at the beginning of each test.
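For completeness, the backing methods behind that test might look something like this (a sketch; Assert comes from your test framework of choice, and IsValid stands in for whatever validation the Address entity actually exposes):

```csharp
private Address _address;
private bool _isValid;

private void Given_a_valid_Address()
{
    // Build and execute the pre-fabricated "valid" command.
    var command = CreateAddressCommand.Valid();
    command.Execute();
    _address = command.Result;
}

private void When_IsValid_is_called()
{
    // Exercise the code under test and capture the outcome.
    _isValid = _address.IsValid();
}

private void Then_IsValid_is_true()
{
    // The only actual assertion lives here.
    Assert.IsTrue(_isValid);
}
```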

The sample project and supporting classes are available on GitHub here.

Posted in Computers and Internet | 1 Comment

CodeMash, Can you dig it?

Can you code, suckas?

I say, the future is ours… if you can code! Now, look what we have here before us. We’ve got the Rubyists sitting next to the Java Boys. We’ve got the BDDs right by the TDDs.
Nobody is hatin’ nobody. That… is a miracle. And miracles is the way things ought to be.

You’re standing right now with delegates from a hundred companies. And there’s over a hundred more. That’s 20,000 hardcore members. 40,000, counting consultancies, and 20,000 more, not organized, but ready to code: 60,000 developers! Now, there ain’t but 20,000 verticals in the whole town. Can you dig it?
Can you dig it?
Can. You. Dig it?

The problem in the past has been the vendors turning us against one another.
We have been unable to see the truth, because we have been fighting for ten square feet of framework, our turf, our little piece of turf. That’s crap, brothers! The turf is ours by right, because it’s our turn. All we have to do is keep up the general truce.

We take over one industry at a time. Secure our territory… secure our turf… because it’s all our turf!

Apologies to Sol Yurick (see http://en.wikipedia.org/wiki/The_Warriors_(film)),
Meta-apologies to Xenophon (see http://en.wikipedia.org/wiki/Anabasis_(Xenophon))


A matter of perception

As a consultant, I have the opportunity to work at a wide range of clients. I get a lot of variety this way, and it keeps things interesting. It also gives me a chance to see patterns across different companies, and occasionally one of them strikes me as particularly interesting. I’ve noticed that the allocation of new computers is nearly always a huge political mess, in which IT has to decide who gets the new gear. There’s lots of whining and posturing, and a lot of hurt feelings as those who feel they “deserve” the shiny new computer fight over it.

Occasionally, I get to work at a client that gets it right, but the vast majority of the time, I walk in to find the computer they’ve assigned me to work on is underpowered for the task. Sometimes I get a “standard” computer… whatever it is that they happen to buy in bulk from Dell. There’s actually a good reason for this, since it’s error-prone and inefficient for their IT department to maintain a random assortment of different machines. Sometimes, though, I walk in and sit down at the hand-me-down that no-one wanted any more.

My current client isn’t like this at all; they order fresh new machines for the new developers coming onto the project. A new one came in yesterday, and I started setting it up with the tools we need to do our jobs. It made me think about the situation at other clients, though, and I decided that it’s a matter of perception.

When Joe, the full-time employee who’s been with the client since the dawn of time, hears that new machines are coming in, this is how he sees things:

Joe’s current computer: a Yugo/Fiat 126.
The new computers: a Maserati Gran Turismo.

To a certain extent, this is fair. The newer computers are faster. They are shinier. They generally have fewer problems. There’s more to it than that, though. Let me show you how a software developer sees things. When I look at a typical user’s machine, I don’t see a rusting, broken-down Fiat 126. I see a Ford Taurus, specifically the station wagon version… white, with no pinstripes.

Joe’s current computer: a Ford Taurus station wagon.

It’s not broken down, but it’s certainly not sexy, either. It’s functional, and everything your average user needs for hauling the kids back and forth to soccer practice or a weekend camping trip. It is, in every important respect, perfectly adequate for the average user’s needs. Now here’s what I see when I look at the new quad-core machines with 8 gigs of RAM and dual 20” widescreen, flat-panel monitors.

The new computers: a backhoe.

Is it shiny? Sure it is; it’s brand new, right? Is it sexy? Uh… no, not really. Here’s the important part of the metaphor, though: I might need the backhoe to build a house, but I don’t need one to live in it. I’m a consultant, and people bring me into their business to build things, not use them. Developers have a completely different set of requirements.

If we developers do our jobs well, the users double-click on an icon for the thing we’ve been building, the program loads up, and it runs. As developers, we don’t spend our days clicking on icons for finished things, though. We click on a “run” button, and the whole program and all of its supporting parts get rebuilt from scratch. This happens every single time we run the program. If we’re doing things correctly, we also run an exhaustive set of unit tests several times a day. Furthermore, if we’re following a test or behavior-driven design methodology, we run the tests even more often than we run the actual program. The longer the build and test process takes, the longer we are literally being paid to watch an hourglass.

A co-worker of mine, Jon Kruger, did some math which illustrates the cost savings of upgrading developer machines. You can read it here: Why your company should buy you a new dev machine today. That’s not really the point of my post today, though. My point is that too often, the allocation of hardware is not based on who actually needs it, or who could make the best use of it. Allocation is more often than not driven by politics, or by some arbitrary perception of who “deserves” the new computer. I think changing how we look at the situation could make a huge difference. Maybe it’s just a matter of phrasing.

The new machine is not a Maserati, it’s a backhoe.


The real-world value of high test coverage

I’d like to share a real-world example of where having good test coverage has paid off for me personally. First, a little background.

I’ve explained unit testing to many different clients, and on many different projects. Some of these have been TDD or even BDD projects, and others have been more, shall we say, traditional. Regardless of the methodology, all these approaches should leave behind some fairly comprehensive tests when you’re all done, and to me, that’s one of their major benefits. We’ve had discussions on my teams before about the value of high test coverage, with some developers taking the stance that coverage numbers are meaningless. This is partially true. 100% test coverage still doesn’t mean that your code actually works the way you intended, only that it works to the satisfaction of the test suite. Of course, if you’re a T/BDD developer, then presumably you wrote your tests with the end goal clearly in mind, and the fact that the tests are passing means that you’ve achieved that goal. If this is the case, then I suppose you can be more certain than most that your code is actually correct.

Another benefit of high code coverage that doesn’t get nearly as much press as TDD’s “emergent design” benefits is catching regression bugs. Regardless of how they were created, your tests become documentation of how your code behaved at a certain point in time. The higher your coverage, the better your odds are of noticing if you inadvertently change something down the line. In addition, your tests can detect the ripple effects that changes in component “A” might have on component “B”. Both of these benefits were demonstrated quite clearly for me this last week.

A while back, I was part of a team developing a line-of-business application for a client. We had successfully delivered the first phase of the project, and were busily working on an additional round of phase two features when suddenly, and without much warning, the client’s priorities shifted, and phase two was put on indefinite hold. At the time we had completed a pretty decent amount of the phase two features, but without all the features, the completed ones weren’t going to do much good. It was a kind of all-or-nothing release. Our in-process work was shelved, and we all moved on to other assignments.

Now, nearly two years later, the client is ready to pick up where they left off, and we’ve been called back to finish phase two. The trouble is that things are not as we left them. There’s been a lot of internal development during the interim, including some not-insignificant architectural changes. In addition, these changes were not made on the phase two branch where we had previously been working, but on another branch off of phase one. My task for the last week has been to try to pull forward as many of the finished phase two features as possible.

Fortunately, we left behind some pretty decent test coverage for both phase one and two. As I have spent the last week pulling feature after feature back from the abandoned phase two branch, I have done so with a high degree of confidence because I know the tests will alert me if I break anything in the process. Each time I pull a feature forward, and all the existing tests continue to pass, I am more convinced that we did the right thing implementing the tests that we did. On top of that, as I pull the phase two tests forward, I can see that the old features still work under the new architecture. I’ve been able to salvage months of past work in a fraction of the time because I can merge the changes in with confidence that the features are working as intended, and not breaking anything else in the process.

This isn’t to say that all the tests have passed on the first try. Code that was affected by architecture differences between the two branches will still give me an initial failure which I have to hunt down and fix, but it’s taking a lot less time than it would without the tests. The tests are dutifully drawing my attention to anything that behaves differently than it did the first time around, and for that I am truly grateful.


Upcoming PEX Presentation

On Wednesday, March 17th, I will be giving a lunchtime presentation on Microsoft Labs’ PEX (Program Explorations) project. PEX is an automated unit testing tool, and although you may not know it yet, it’s your new best friend (if you’re a .NET developer, that is).
 
More information about the event can be found at http://tinyurl.com/pex-rate.
More information about the PEX project can be found at http://tinyurl.com/pex-project.
 

CodeMash is coming!

With only two weeks and change left, it’s about time I said…
 
 
It’s the first holiday of the year, you know?

Code Generation Presentation

Here is a screencast of my recent Code Generation talk at Quick Solutions. This is our first recorded screencast, so we’re a little rough at first, but I think it’s pretty good for a first attempt. For the curious, I was connected to a network projector while Alexei Govorine shadowed the screen with his laptop, running Camtasia Studio to do the capture. This worked out nicely in that it left my laptop free to run the slides and demos without having to be concerned with the recording duties. I tried capturing the screen myself earlier in the week, and found that Camtasia slowed down the demos slightly and made some PowerPoint fades and transitions a bit choppy. Anyway, enjoy, and if you’re one of my non-programming friends and have no idea what I’m talking about, that’s okay… you don’t have to watch the whole thing… muggle.

  http://content.screencast.com/users/AlexeiGovorin/folders/Default/media/c3455250-0dbd-4c4a-be64-6e39a0ad4c5a/flvplayer.swf


Tech Night, August 12th

I’m giving a presentation on August 12th at QSI.  I think it’s a pretty cool one if I must say so myself.  Come learn about T4 templating and what it can do for you.  What it’s doing for me at the moment is pretty flippin’ sweet, so I want to share the love, so to speak.  Don’t forget to RSVP so as to ensure a decent pizza supply.

Title: Practical Code Generation with T4

Description: T4, Visual Studio’s built-in, template-based code generation system was introduced in VS2005 and achieved limited public acceptance in VS2008. Now, with VS2010, it is set to become a first-class citizen in many .NET solutions. T4 templates can be used to automate many repetitive coding tasks, such as boilerplate framework code or proxy generation. In this session, we’ll start with the basics of code generation and advance through using T4 templates and reflection to automate the creation of customizable artifacts at virtually every tier of a typical solution.

When: Wednesday, August 12, 2009
5:30 – 7:00 p.m.

RSVP:  Anji Morey @ amorey@quicksolutions.com by noon on Tuesday, August 11!

Where: QSI Training Center
440 Polaris Parkway, Suite 500

*Friends, clients, candidates and co-workers are welcome!
*Food & beverages will be served at 5:30 p.m.


Who needs RowTest?

I’ve heard a lot of developers bash one unit testing framework or another over the lack of the RowTest attribute, which I believe originated in MbUnit, at least as far as .NET developers are concerned. If it came over from the Java world, well… that’s outside my current scope and I don’t care. What bothers me is that some developers will use the presence or absence of the RowTest attribute as criteria for accepting or rejecting an entire testing framework, when in my opinion the attribute is totally unnecessary in the first place. You can achieve the same (or better) results using code that will work in ANY unit testing framework (barring the usual attribute naming differences, of course).

Let me start by saying that if you want to run TestA with three different sets of input, I presume it’s because those three sets of input are each supposed to provoke a slightly different response from the code under test; otherwise, why bother? Perhaps you are explicitly testing the edge cases to make sure that everything works as expected on either side of a boundary without blowing up. Let’s assume that we want to test some method that takes a lower and an upper bound, and lets you know whether they are valid. We write a test that looks something like the following, which takes a lower bound, an upper bound, and a Boolean indicating whether validation should succeed or fail.

[RowTest]
[Row(0, 1, true)]
[Row(1, 1, true)]
[Row(1, 0, false)]
public void RangeValidatorTest(int lower, int upper, bool expected)
{
    var target = new RangeValidator();

    Assert.AreEqual(expected, target.Validate(lower, upper));
}

 

The problem here is that nothing about the input data tells me why it’s special. What is different about row 1 as opposed to row 2? It’s a contrived example, sure, but stay with me. Now do the same thing without the RowTest attribute. I hear people complaining already: “But I don’t wanna write the same test three times”. <arnold>Stop whining!</arnold>

I’m not talking about wholesale copy and paste here. You just factor the guts into a non-test helper method.  It’s easy, you’ll see.

public void RangeValidatorTestHelper(int lower, int upper, bool expected)
{
    var target = new RangeValidator();
    Assert.AreEqual(expected, target.Validate(lower, upper));
}

 

[Test]
public void RangeValidator_accepts_lower_less_than_upper()
{ RangeValidatorTestHelper(0, 1, true); }



[Test]
public void RangeValidator_accepts_lower_equals_upper()
{ RangeValidatorTestHelper(1, 1, true); }

 

[Test]
public void RangeValidator_rejects_lower_greater_than_upper()
{ RangeValidatorTestHelper(1, 0, false); }

It’s a little wordier, sure, but I like wordiness, personally. Notice how the actual tests each have their own intelligent names now. I can read the test results without having to mentally “parse in” the parameters to figure out which case is going wrong.

But hey, that’s just me.


IoC Auto-registration

I’ve used IoC containers on several projects now, and quite frankly never want to live without them again.  They make so many things easier for me that they have become part of my way of designing.  I like deciding in one place how a class will be instantiated.  I like the possibility of making a change in one place and having all the instances of a class become singletons, even though that kind of change is rare.  The one thing I don’t like is maintaining a “registry” class, and adding configuration for each and every class to it when I have a whole category of similar classes that should all behave in the same way.  Take, for example, service proxies.  Regardless of how their endpoints are configured, the code to register them with the IoC container is going to look exactly the same for each one.

Container.RegisterType<IFooService, FooServiceProxy>();
Container.RegisterType<IBarService, BarServiceProxy>();
Container.RegisterType<IBazService, BazServiceProxy>();
...

I’ve done some reflection-based “registration” of classes before for things like validators and workflow classes, and it saved me a lot of time.  The idea is simple.  Reflect over an assembly, looking for all the classes that fit a certain pattern, and add them to a list of some kind.  This is usually done at application startup, so whether you’re afraid of some fictional “performance hit” associated with reflection or not, it’s only going to happen once per run anyway, and it saves a lot of tedious typing.  It also automates the addition of new classes to the pattern so you don’t forget and leave anyone behind.

I’ve now written something similar to handle registration of similar classes with an IoC container.  This requires a base interface and a base type.  We then reflect over the assembly containing the base type, and register all classes that implement an interface derived from the provided base interface.  In other words, given IService and ServiceBase, automatically register IFooService/FooServiceProxy.  The method looks like this:

private static void RegisterTypes<TBaseContract, TBaseClass>()
    where TBaseClass : TBaseContract
{
    var baseContractType = typeof(TBaseContract);
    var baseType = typeof(TBaseClass);
    var implementationTypes = baseType.Assembly.GetTypes()
        .Where(t => t.IsSubclassOf(baseType) && !t.IsAbstract);

    foreach (var implementationType in implementationTypes)
    {
        var contractType = implementationType.GetInterfaces()
            .Where(i => (i != baseContractType)
                && baseContractType.IsAssignableFrom(i)
                && !i.IsGenericType)
            .SingleOrDefault();

        if (contractType != null)
            Container.RegisterType(contractType, implementationType);
    }
}

Using it is as simple as passing the two parameters in.  Here is the static constructor from an example ServiceRegistry class which registers all the service proxies for a client application.  This example happens to be using a Unity container, but you could do the same thing with StructureMap, or my own Itty Bitty IoC if you want.  I’ve been playing with this idea of using static constructors on an XyzRegistry class for a while, and so far it’s treating me just fine.

static ServiceRegistry()
{
    Container = new UnityContainer();
    RegisterTypes<IService, ServiceBase>();
}

That’s it.  All my service proxies get found and registered at startup, and I don’t have to worry about adding them by hand anymore.
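From there, consuming code just asks the container for a contract (this assumes the registry exposes its container publicly; how you surface it is a design choice):

```csharp
// Unity hands back whichever proxy was auto-registered for the contract.
var fooService = ServiceRegistry.Container.Resolve<IFooService>();
```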
