The case for MSBuild

I was talking to a fellow developer on Facebook (I know, that’s not twitter, wtf!) and he was asking about build and deployment issues – specifically some frustrations around MSBuild. I suggested some things to help with his issues and to utilize MSBuild better. That kinda got me thinking about how it seems many people don’t know how to use MSBuild in-depth (not that I’m an expert by any means). It seems to be dismissed very early and easily as a build and deployment solution. So, I’m taking the road less trodden, the dusty path, going where no man has gone before…and blogging in favor of MSBuild…in 2011!

(Note: This post isn’t actually how to use MSBuild itself. It’s me talking about why it’s worth taking the time to actually consider/learn it.)

The Case Against

<XML />

Man, do people hate XML! Nothing more to really say about this – you either hate it or you don’t care. I don’t think anyone (well, human at least) loves XML. At least it’s not EDI, right? However – remember that writing your own MSBuild task can be as simple as inheriting from a base class and overriding Execute(). So if you want to do your whole build/deploy with a .NET language, you can, with a minimal amount of XML.
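To show just how small that is, here’s a minimal sketch of a custom task (the class and property names are made up for illustration; inheriting from Task and overriding Execute() are the actual MSBuild extension points):

```csharp
using Microsoft.Build.Framework;
using Microsoft.Build.Utilities;

// Hypothetical example: a task that fails the build when a required
// property comes through empty. Inherit from Task, override Execute() -
// that's the whole contract.
public class RequireValueTask : Task
{
    // [Required] makes MSBuild error out if this attribute is missing
    // from the task element in the build file.
    [Required]
    public string Value { get; set; }

    public override bool Execute()
    {
        if (string.IsNullOrWhiteSpace(Value))
        {
            Log.LogError("Value must not be empty.");
            return false; // returning false fails the build
        }

        return true;
    }
}
```

You’d then point a &lt;UsingTask&gt; element at the compiled assembly in your build file and call something like &lt;RequireValueTask Value="$(Configuration)" /&gt; from any target.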

It’s not sexy

Okay, well, no one has said those words, I don’t think, but that’s what people’s faces say whenever you mention MSBuild. It’s not “the new hotness” by any means, and it has a learning curve. Any other new, sexy build technology has a learning curve too, but for whatever reason, developers (myself included) are much less likely to want to learn something when it’s an old product.

Rob wrote a post saying you should consider it

If I say it, it must already be wrong! I’m totally dooming myself to internet unpopularity! Zomgz! And anyone else who agrees with me even in the slightest will share my dark corner of internet shame. Next I’ll be posting about how awesome VB.NET is, how MSTest is soooo much more awesomer than nUnit, and how WebForms really got the web right! <CharlieTheUnicornsFriends>SHUUUUUUN!!!!</CharlieTheUnicornsFriends>*

*Oh god, I’m using XML even in my blog posts now!

It’s not updated for user friendliness…or I don’t think it is…hell, I don’t know!

There doesn’t seem to be a real community around MSBuild. If new updates are happening to it, I’m unaware of them (and I follow several MS blogs…maybe it’s the wrong ones?). Either way, I don’t believe it gets updates that would make it easier to consume or have more things “baked in”, while some of the newer build tools are more likely to have that user-friendliness, and an open-source community guiding the direction of the product.


The Case For

*Man, I’m making lots of “notes” in this one! Anyways, none of the “case for” points are meant in a “competing-product-x-can’t-do-this” fashion. I’m simply saying that MSBuild CAN do these things – a lot of people seem to think it can’t, or that it’s a big hassle to do these things with MSBuild, which is not the case.

You, and your grandma, already have it installed

“Laziness” isn’t a good criterion for choosing your toolset, but simplicity is. As long as you’re building on a Windows machine, you already have all the tools you need*. You just run it from the command prompt (or better yet, a batch file).
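For instance, a one-line batch file covers the whole thing (the framework path below assumes .NET 4.0, and the solution name is made up – adjust both for your setup):

```bat
REM build.bat - full rebuild from the command prompt
"%WINDIR%\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe" MySolution.sln /t:Rebuild /p:Configuration=Release
```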

*Note that the normal .NET installation does not include the web-related targets by default, however, which docks 5 internet points from MSBuild. What I’ve done is make a self-extracting exe that you can call from a custom MSBuild target. You run it once on a given machine, and it’s set from there on out. You could probably do something similar for installing needed software for other build tools, too.

You’re already using it

Like it or not, you’re using MSBuild! Until I started writing my own custom targets, I never really grokked what was going on in a *.sln or *.proj file. This helped me learn more about how .NET solutions and projects are arranged, and how things are coordinated “under the hood”. You can always read a book or a blog about these things, but I always find you can’t really understand something until you have to deal with it by hand. Knowing more about what you’re dealing with can help a lot when you start having issues in your project/solution files, or want to add/edit pre/post build steps.
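As a tiny example of what’s sitting in those files already: a Visual Studio-generated *.csproj contains commented-out BeforeBuild/AfterBuild targets near the bottom – uncomment one and you’ve got a custom build step (the Message line here is just an illustration):

```xml
<!-- Inside your .csproj, near the bottom -->
<Target Name="AfterBuild">
  <Message Text="Finished building $(AssemblyName) ($(Configuration))" Importance="high" />
</Target>
```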

There’s a nice existing library of community addons

Chances are, if you’re thinking about doing some lower-level task (like, for example, reading XML out of a config, or starting/stopping Windows services), there’s probably already a task out there. Granted, other frameworks have this too, and the MSBuild Community Tasks haven’t been updated in forever (remember, it’s old! Time to move on!), but the point is, there’s a library out there that’s extremely useful and tackles a whole lot of common cases – there’s more than just what’s built-in that’s easy to get. A lot of people don’t seem to know this.
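Here’s a sketch of what using them looks like (the service name and file paths are made up; check the exact task parameters against the Community Tasks documentation for your version):

```xml
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <!-- Import the community tasks so their task names are available -->
  <Import Project="$(MSBuildExtensionsPath)\MSBuildCommunityTasks\MSBuild.Community.Tasks.Targets" />

  <Target Name="PrepareDeploy">
    <!-- Read a value out of a config file -->
    <XmlRead XmlFileName="web.config" XPath="//appSettings/add[@key='Environment']/@value">
      <Output TaskParameter="Value" PropertyName="TargetEnvironment" />
    </XmlRead>

    <!-- Stop a Windows service before overwriting its files -->
    <ServiceController ServiceName="MyAppService" Action="Stop" />
  </Target>
</Project>
```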

Yes, you can do that with it

Whatever “it” is, you can accomplish it with MSBuild. Again, not saying that other frameworks/methods don’t have this either – but MSBuild is extremely powerful. A lot of people seem to equate “XML = Useless”, and I just want people to understand that’s not the case. The fact that you can write your own C# library and have MSBuild run commands in it literally means “can do anything”.

Highly reusable

It’s extremely easy to factor-out common targets to reuse them across multiple projects or even multiple solutions. On a previous project we had 3 websites being deployed to 5 machines, all using the exact same target, with no repetition. You can usually factor out things into more targets and chain them or composite them together as needed in higher-level build files. Yet again – not a unique feature, just saying MSBuild supports it, and it’s easy.
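The mechanics boil down to the &lt;Import&gt; element: each site’s build file sets its own properties, then pulls in the shared targets (the file and property names here are made up for illustration):

```xml
<!-- SiteOne.build - per-site file; the shared Deploy target lives in Deploy.targets -->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" DefaultTargets="Deploy">
  <PropertyGroup>
    <DeployPath>\\webserver01\sites\SiteOne</DeployPath>
  </PropertyGroup>
  <!-- Every site imports the same file, so the deploy logic is written once -->
  <Import Project="..\Build\Deploy.targets" />
</Project>
```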


What Really Matters

It’s good to remember to come back and focus on what’s important – delivering value. Build and deployment is a very serious part of any project, and unfortunately most of the time it gets cast aside, only to be rushed through at the last minute. Good agile software techniques are teaching us, and the industry as a whole, to consider these things up front and to make them painless. The fact that we’re even discussing it at all is a great thing! No one really recommends “deploy from Visual Studio to prod” or “copy these files, then hand-edit that config, then don’t forget to insert these rows in the database…” as a serious technique anymore. What’s even better is that there ARE options out there!

I just hope that after this people keep an open mind about MSBuild, and realize that it’s a powerful solution that can work great on .NET solutions. It’s as “correct” a solution as alternatives such as Albacore, Rake, PowerShell, etc. The only real differences are likely to be your opinion for what fits your style/team/situation better.

So what it boils down to, again, is delivering that value.  Pick something that your team is interested in, that your company can easily support and maintain, and that efficiently works with your processes.

Focus on the team – not the product

I'm going to state something you probably already know

Agile is about the team - not about the products.

It feels like a very obvious truth to me, but I didn't realize it until recently. It was one of those things that as soon as I said it, I realized that it was extremely true. I hadn't heard it before (or it didn't stick - I'm pretty dumb, after all).

Why haven't I heard this before? Why does this even matter? This is probably due, at least in part, to always working on small teams. On the rare occasions where I've been on a larger team (6-7 developers), we had several products. Usually to the point where each developer worked on a product by themselves, or mostly by themselves.

Now I'm going to talk about more things you probably already know!


How things typically seem to go:

Traditional (typically rooted in an attempt to do Waterfall) organization in regards to software development seems to be centered around the products. There's a request for some new software. It gets added to a Product Manager's "list of products I'm PM for". The PM makes a request to the development group. The development group allocates some resources to the effort. This is done by taking one or more developers away from something else they're doing - or worse, they're asked to "split time" between multiple products. Each way has its own major problems.

If a developer is asked to split their time, they now have conflicting priorities. They can ask whoever represents the Product group which product feature is the most important across multiple products, and will get different answers. Everyone will always want whichever product they're closest to (or whichever is the squeakiest wheel) done first. Getting a single clear direction that doesn't change on a daily (or worse, hourly!) basis will be nigh-impossible.

If a developer is dedicated to this new product, then while at least they won't have conflicting priorities, other issues arise. That developer will likely be working in a silo. No ability to do peer reviews (other than simple adherence to coding standards, possibly). Higher stress due to not being able to take sick time. Developing in a black box. It's harder to bounce ideas off of peers, since no one else will know the domain. Chances are, if someone else has to join in on the product later, that person will be very lost, and will likely end up needing either a lot of time to learn the existing codebase, or to re-write the product.

In both cases, any existing products are damaged, as their ability to do planning will be interrupted. If they're attempting to do Scrum, for example, their velocity will now be incorrect and will have to be recalculated. Again, if things are run by product, and not by team, and especially if a developer is spread across products, getting any consistency will be extremely difficult, if not impossible.


How organizing around the team fixes things:

When you organize around the team, in a proper Agile environment, one of the first things you'll get is a single Product Owner. Without getting into the full definition of the PO role in Agile terms, the key part that is important in relation to this post is the fact that the PO will set the priority of work for the team. This gives a single person the responsibility of knowing what's important to each product the team supports and the relative priority among the products. This is critical.

The next thing you'll get is predictability. With the developer/QA resources (boy do I hate using that word, but stick with me) being constant, the amount of work the team can get done will be consistent - regardless of the product.

You'll also avoid silos this way. Since a team grooms stories together, everyone will be knowledgeable about the products. This gives people freedom to take time off work as needed, without being "the one guy" that knows how something works. The Company typically likes this aspect when phrased as "If we do this, then if [So-and-so] gets hit by a bus, the product isn't doomed!". If you're practicing some good policies like peer reviews, that will also help spread knowledge about the products throughout the team, as well as let everyone feel that they know all the products on that team.


Avoiding possible pitfalls:

What if it gets to where you have lots of products (typically, several smaller ones and a larger one or two)? Then you have another problem - but it's not solved by organizing around the product. Not that I'd even really want to call that a problem. If one product consumes 80% of the team's time, it may be viable to split the team at that point, so that a larger portion is dedicated to that product, and a smaller team is formed to handle the other products. Be prepared to keep the teams consistent after that point though, and don't start "borrowing" resources from other teams, or you'll end up right where you started.

Splitting the team also means getting another PO - one for each team. It might, in theory, be possible to have a single PO over multiple teams, but more than likely it'll end up with that person not being able to focus on each team as they should.

Many businesses want "on demand" time spent on their product. They want that product right then and there, and later on, interest in that product might slow down (especially once it's been launched, and initial revenue gained). That leads them to want to shuffle development around as needed. Agile is great at providing visibility into the effect these decisions have. By switching the focus from the product to the team, it allows businesses to see how it actually hurts other products, instead of creating the illusion that development is an unlimited resource. It's up to the PO to communicate back to the Powers That Be that the new product can be delivered, but that existing products will have their timelines affected. It's going to happen anyways - but when focusing on the product, there's the false sense that other products are unaffected.

Often the most important thing when selling software (or selling anything, honestly), is managing expectations. By organizing around the team, you will get consistent, predictable results for your clients. Your developers will be able to focus on the work laid out for them, instead of trying to juggle priorities and emails. If you can deliver what you say, your clients will be happier. You'll have happier developers this way too, which leads to less turnover - which will again help with more predictability, feeding back into happy clients.

There are other things I feel are beneficial as well, but this post is long enough! Suffice it to say, focusing on the team helps solve many problems, and will prevent other issues from arising. Again, this is based on my experience, which is limited to tiny teams, or a larger team with more products than they can handle.

Decorating a generic interface with StructureMap

While at The Day Job, I noticed we had some services that all shared the same cross-cutting concerns. In particular, validating a request object. You may think of some other ones, like logging, auditing, standard error-handling, etc. Due to other architectural concerns of our project, it made sense to decorate a common interface we had as a way to achieve the validation. First, here’s our generic interface that we want to decorate:

public interface IServiceOperation<in TRequest, out TResponse> where TResponse : ServiceResult, new()
{
    TResponse PerformService(TRequest validatedRequest);
}

Simply put, it just says given a request object, I’ll return a standard response object. To show it “in action”, here’s a sample implementation. Note that the interface has already assumed at this point that the request object has been validated.

public class SignUpService : IServiceOperation<SignUpRequest, SignUpResult>
{
    private readonly IUserRepository _userRepo;

    public SignUpService(IUserRepository userRepo)
    {
        _userRepo = userRepo;
    }

    public SignUpResult PerformService(SignUpRequest validatedRequest)
    {
        var user = Mapper.Map<User>(validatedRequest);

        //Domain work and the save happen inside the transaction
        using (var transaction = _userRepo.BeginTransaction())
        {
            _userRepo.Save(user);
            transaction.Commit();
        }

        return new SignUpResult();
    }
}

Pretty standard stuff, nothing special. Just a domain service that takes in a request, performs some domain operations, then saves changes to a repository.

Now for our decorator. If you’re unfamiliar with the decorator pattern, here’s the simple version – a decorator is a class that implements an interface, and also consumes an instance of that interface in its constructor. This allows you to “intercept” any calls against the passed-in instance to add extra behavior, while wrapping the consumed instance. Your consuming application doesn’t have to change its code at all – your IoC container will do the magic for you. You just code against the interface like normal, and everything is happy. You can even have multiple decorators (each one consuming another!).

public class ValidateServiceDecorator<TRequest, TResponse> : IServiceOperation<TRequest, TResponse> where TResponse : ServiceResult, new()
{
    private readonly IServiceOperation<TRequest, TResponse> _serviceOperation;
    private readonly IValidationService _validationService;

    public ValidateServiceDecorator(IServiceOperation<TRequest, TResponse> serviceOperation,
        IValidationService validationService)
    {
        _serviceOperation = serviceOperation;
        _validationService = validationService;
    }

    public TResponse PerformService(TRequest request)
    {
        var response = new TResponse();
        var validationResult = _validationService.Validate(request);

        if (!validationResult.IsValid)
        {
            response.ValidationErrors = validationResult.ValidationErrors;
            return response;
        }

        return _serviceOperation.PerformService(request);
    }
}
Easy peasy! It takes in IServiceOperation<TRequest, TResponse>, and implements it as well. So, again – the magic needs to be handled by our IoC container, in our case, StructureMap. StructureMap already has great support for auto-registering open generic types with the “ConnectImplementationsToTypesClosing” function, and decorator support with the “EnrichWith” function. However, combining the two can be problematic, as EnrichWith expects your type, if generic, to be a closed generic type (meaning you know the type parameters explicitly). Obviously, we don’t know that – well, we could, but that’d mean every time we added a new request/response pairing for our IServiceOperation, we’d have to modify our container. This would lead to a lot of copy/pasted code in our container, be something everyone would forget to do, and be another spot affected when renaming objects. Not exactly ideal.

It took quite a bit of digging – I even posted the question to StackOverflow, but did not get a response. Eventually, I was able to find a solution – a custom IRegistrationConvention! If you’re not familiar with this (as I wasn’t), this is a hook StructureMap provides. By implementing this interface from StructureMap, you get a function that is called for every type StructureMap finds when starting up (depending on what assemblies you tell it to scan). From here, I was able to inspect each type and see what interfaces it implemented. If the implemented interface was IServiceOperation, then through reflection I could get which two types (the request and response) were used to close the implementation, then tell StructureMap that for that closed type, it should return my decorator using those same two types (with Activator.CreateInstance).

Here’s my final StructureMap code, showing it all hooked together. I hope this helps other people!

public class StructureMapServiceScanner : Registry
{
    public StructureMapServiceScanner()
    {
        Scan(scanner =>
        {
            //Tells StructureMap to scan all types in the assembly where the interface is defined
            scanner.AssemblyContainingType(typeof (IServiceOperation<,>));
            //Tells StructureMap that for all requests of IServiceOperation to automatically return
            //the concrete type that closes the interface with the same parameters as requested
            scanner.ConnectImplementationsToTypesClosing(typeof (IServiceOperation<,>));
            //And finally, we add our new registration convention
            scanner.With(new ServiceRegistrationConvention());
        });
    }
}

public class ServiceRegistrationConvention : IRegistrationConvention
{
    public void Process(Type type, Registry registry)
    {
        var interfacesImplemented = type.GetInterfaces();

        foreach (var interfaceImplemented in interfacesImplemented)
        {
            if (interfaceImplemented.IsGenericType && interfaceImplemented.GetGenericTypeDefinition() == typeof(IServiceOperation<,>))
            {
                var genericParameters = interfaceImplemented.GetGenericArguments();
                var closedValidatorType = typeof(ValidateServiceDecorator<,>).MakeGenericType(genericParameters);

                //For each closed IServiceOperation, wrap the resolved instance
                //in the decorator closed over the same two types
                registry.For(interfaceImplemented)
                    .EnrichWith((context, original) => Activator.CreateInstance(closedValidatorType, original,
                        context.GetInstance<IValidationService>()));
            }
        }
    }
}

And of course, if you know of another/better way to do this, please respond!

Increasing Testability With The Adapter Pattern

(Edit: I accidentally originally fat-clicked the publish button in the middle of creating this post. Sorry for the spam in your reader!)

So, you have some code that you can’t change for some reason. Maybe it’s an external library. Maybe it’s a crucial bit of code that too many things are coupled to, and changing it would be a big headache. Regardless of why, it does the work you need to do (for the most part!) but it’s not exposed in a way you like, or a way that fits 100% what you need to do. So how can you still use it? Also, how can you test that you’re using it correctly? Enter: The Adapter Pattern!


The basic idea, and then some

So, with the adapter pattern, the general goal is changing one interface to a desired interface. On top of that, this pattern will also help bring testability to your code. If you’re not familiar with TDD/BDD, or just generally unfamiliar with testing your .NET code, here’s some of the things I’ll be using in this post:

  • nUnit – Unit testing framework. You can also use MSTest (comes with Visual Studio).
  • Moq – Mocking framework. Another popular one is RhinoMocks.
  • NuGet – While not shown here, I did use this tool to add NUnit and Moq to the project. If you were thinking of going and downloading nUnit, Moq, etc manually, don’t! Use NuGet.
  • We’ll also briefly touch on the concept of Dependency Injection, although I won’t be using it here with an IoC container (like Ninject or StructureMap).


The Problem

For us, we had a base class (from an external library that we couldn’t change) that we had to inherit to make it “plug in” to another system. This base class had 2 main problems for us:

  1. It requires a parameter-less constructor (so goodbye clean dependency injection)
  2. It has lots of methods on the base class that we want to use (although their parameters weren’t exactly what we wanted at times)

So, to give an example – the class (in the external library) looks approximately like this:

public abstract class UnchangeableBaseClass
{
    public void PerformOperation(string generalOperationName)
    {
        //Magic stuff happens
    }

    public abstract void RunMe();
}

We have a base class we can’t modify. We have to implement RunMe() to tie it into the system. And on this base class, is a function, “PerformOperation”, that we want to take advantage of, but we don’t like passing in a string.


Our Code

So, we go to implement our class, so here’s what we end up with:

public class YourClass : UnchangeableBaseClass
{
    public override void RunMe()
    {
        PerformOperation("KeepingItReal");
    }
}

So, what problems are here? Well, it’s untestable, for one. Second, we really don’t like passing in that string.  Let’s try to write a test (and you’ll see how far we don’t get!) (Note: In good TDD fashion, you’d start with trying to write your test, which would help drive your design to testable code. But the goal of this post is to explain the pattern, which I think will be better served in this order. I’m probably wrong.)

[Test]
public void WhenRunning_ItShouldPerformOperation()
{
    var sut = new YourClass();

    sut.RunMe();

    //Assert....uh...what?
}

Wow. Didn’t get far there at all! It’s time for…THE ADAPTER PATTERN!


Patterns to the Rescue!

Let’s start with what we WANT the code to look like. And by that, I mean let’s define the interface we wish we were consuming. 

public interface IPerformOperation
{
    void PerformOperation(Operations operation);
}

public enum Operations
{
    KeepingItReal,
    KeepingItMostlyReal,
    MauryPovich
}

Pretty simple! Now, lets write our actual adapter, with comments to help explain!

// Our Adapter implements the interface we want to work against.
// Also, we name our class with "Adapter". When using patterns, this helps
// the team to work with a common terminology, and to explain complex ideas simply
public class UnchangeableAdapter : IPerformOperation
{
    private readonly UnchangeableBaseClass _baseClass;

    // Since we're translating one interface to another, we're going to have to actually
    // have a reference to the class that implements the original interface
    public UnchangeableAdapter(UnchangeableBaseClass baseClass)
    {
        _baseClass = baseClass;
    }

    // This is our implementation of our new interface. It simply
    // acts as a pass-through to the original version, along with translating our parameters
    public void PerformOperation(Operations operation)
    {
        _baseClass.PerformOperation(operation.ToString());
    }
}

Alright, now we’re getting somewhere! Lets get back to that test – I bet we can get somewhere now, with the help of Moq!

[Test]
public void WhenRunning_ItShouldPerformOperation()
{
    // Moq simply asks for what interface we wished our object used
    var operationPerformerMock = new Mock<IPerformOperation>();

    // Now, due to the parameter-less constructor constraint, we'll have to do something
    // interesting in our implementation, but we need this for testing. There's multiple
    // ways we could deal with this problem, but the one I use involves multiple constructors
    var sut = new YourClass(operationPerformerMock.Object);

    // Oh, and "sut" means "System Under Test" :)
    sut.RunMe();

    // This Moq command verifies that our class (YourClass) calls the PerformOperation method on the interface
    // properly. We don't care about the parameter at this point.
    operationPerformerMock.Verify(o => o.PerformOperation(It.IsAny<Operations>()));
}

Sweet! Now we have a failing test (and not compiling, actually – we don’t have that constructor!). So what now?

Finishing our class

We’re almost there! Home stretch!

public class YourClass : UnchangeableBaseClass
{
    private readonly IPerformOperation _operationPerformer;

    // Our parameter-less constructor, as needed by the other system.
    public YourClass()
    {
        // Here, we get our concrete adapter. This example is trivial,
        // so nothing more is required, but if you're interested in better
        // software design, I'd recommend checking out the Factory pattern
        // and Inversion of Control libraries, like Ninject or StructureMap!
        _operationPerformer = new UnchangeableAdapter(this);
    }

    // We expose this constructor to allow for testing!
    public YourClass(IPerformOperation operationPerformer)
    {
        _operationPerformer = operationPerformer;
    }

    public override void RunMe()
    {
        // Let's simply call the function on our interface!
        _operationPerformer.PerformOperation(Operations.KeepingItReal);
    }
}

Nice! Let’s run our test…and it passed! Thanks to the adapter pattern, we can now interact with an interface that matches our needs, and gives us the testability we desire in our software.


Interested in more?

Want to check out this sample yourself? Head over to GitHub!

If you want more information on design patterns, I cannot recommend the Head First Design Patterns book enough. It’s the best down-to-earth book, to really help you learn not only the pattern, but help identify when to apply them. It’s like “Design Patterns for Dummies” – hell, if I can understand it, anyone can!
