When Scrum doesn’t seem to work

I consider myself an Agile prophet. By this, I just mean that I preach about the benefits of Agile software development over older approaches like Waterfall. There are various Agile implementations, and my favorite is Scrum. Granted, I haven’t tried any other Agile implementation in depth, but Scrum has transformed my work life from a misery into a blessing.

I’ve been at my current job for a year, doing Scrum, and the team has kicked ass. The software is high quality. It ships on time. Best of all? We accomplish both without any significant overtime! Ideally you’d have no overtime at all, but what I’m referring to here is maybe 20 hours of overtime over the entire past year. I know many devs who work that much overtime every week, and few who put in less than that in a month. A process that delivers that kind of predictability within a typical 40-hour work week is just amazing.

However, the past month has seen some changes to the team and to the work we’re doing. And we’re wrapping up our second sprint in a row where we’ve had to pull stories (not counting those road-blocked by external forces). The initial gut reaction is “Oh man, see, Scrum doesn’t work in this case!” but further introspection shows that we’ve simply lost sight of how to handle things. So I’m going to list some things that can suddenly knock a high-performing Scrum team off track, as well as some other things I’ve seen stop Scrum from being successful elsewhere.

 

You have to have top-to-bottom buy-in

This is, by far, the most common reason I’ve seen Scrum attempts fail. The company I’m at has top-to-bottom buy-in. The developers, QA, SCM, managers, product owners, executive team – EVERYONE is on board. If you do not have complete and total buy-in, your Scrum attempt is severely at risk. A big part of Agile is getting the client (whether that’s an actual client or an internal product owner) involved in the software process. That visibility is crucial: it brings everyone involved in the software’s creation face to face with the realities of what it takes to deliver it. If your managers want to back into dates, instead of deriving them from release buckets of prioritized features, you’re at risk of being driven off track, cutting corners on quality, and working overtime. If your client wants to hand you a 500-page document of barely-thought-out requirements and then not talk to you for six months, you’re at risk of giving them garbage – or, more likely, never having a clue what the true requirements are, even though you’ve already agreed to a due date.

Simply put – everyone involved in the software has to take part, and has to stick with it.

 

You have to keep asking questions until you can’t ask more questions (don’t make assumptions)

Our team changed from working on a well-known area to something new. High confidence from the previous year’s work, combined with the excitement of something new, unfortunately led us all to make assumptions. The end result? We point-sized based on how something similar would have run in the previous area. Granted – you use past experience to point-size (obviously). That’s part of it. But we could have done a more in-depth analysis of the new work. Skipping that extra analysis led to the constant discovery of new “uh oh”s throughout the story work.

So ask questions. Can’t get answers? Don’t point-size. Need to do more of your own analysis first? Don’t point-size. There’s a happy balance here – you can’t spend a solid week investigating a story you could have implemented in that time. Each story has to be considered case by case, and if it’s truly unknown, create a Spike for it.

The easiest way for me to accomplish this is to pretend I’m starting on the story right then. Where would I start? What would I do first? Then what? Chances are, you’ll find yourself with questions pretty quickly. And don’t trust assumptions! Ask your product owner. After all, he or she is right there in the grooming session with you.

The goal here is to drive as much unknown out of a story as is reasonable before starting work on it. If a story’s point size is based more on unknowns than on the amount of work, that story becomes a high-risk ticking time bomb, ready to explode on you mid-sprint. And when you pull in two or three like this? Then you’re really in trouble. You can’t give your product owner reliability or predictability, and your reputation goes down with it.

 

Break the story down. Then break it down again. Then? Yet again!

Our previous team and work had become a well-oiled machine. We could take in a couple of 8s and a 13 in one sprint and knock them out. But those higher point sizes were driven by work to do – not by unknowns. Now we have new team members doing new work, and we have to establish a new baseline velocity. Large stories do not help you establish a reliable velocity. If you pull an 8-point story, chances are you’ll end the sprint with 3-5 points’ worth of it done. You’ll finish it next sprint, sure – but now you have a large discrepancy between sprint 1’s and sprint 2’s velocity. Say you planned 20 points and the 8 carried over: you’d report 12 points one sprint and 28 the next. Yes, it washes out in the end, but it makes for a rougher start.

So, break down stories. It’ll feel mind-numbing. It’ll feel tedious. But high confidence, and the ability to work larger stories reliably, comes as a reward for doing things more in-depth earlier. If you can find a clear vertical slice that breaks a story down further, do so. Make it simpler. Drive the unknowns and risk out of the story. The smaller the piece you have to implement to succeed, the higher your chance of success. You’ll get the 3 or 5 points of what was once a bigger story done, even if its “sister” story with the other 3-5 points gets pulled. You’ve still delivered more to your product owner. You have less to think about next sprint. And you’re closer to honing in on your team’s velocity.

 

tl;dr

Upon review, all this stuff is so basic. But you get used to things flowing well, and sometimes you lose sight of the bigger picture. So, in summary:

  • Everyone must be involved and committed to the process
  • Drive out requirements as thoroughly as is reasonable
  • Break down stories into small pieces – 1-3 pointers are optimal, some 5s are okay. Stay away from 8s and 13s.

TeamCity, dedicated build server, and MVC2

Just a quick “gotcha” I ran into. For whatever reason, the standalone .NET 4.0 installation does not contain the MVC bits. You only get them by installing Visual Studio 2010 itself. Why Microsoft continues to do this is beyond me (they also do it with certain web copy/publish MSBuild targets). I’m about to explode with nerd-rage at this nonsense, so I’ll quickly move on to the solution.

Since you aren’t likely to install Visual Studio on your web server, the easy fix there is to set your reference to System.Web.Mvc to Copy Local. Easy enough work-around. On your build server, however, that alone won’t do any good, because the DLL isn’t in source control. You’ll have to either copy the DLL to your build server manually, at the path referenced in your project, so the build server can find it; or copy it into your solution as an external reference, add it to source control, and reference it in your project from there. I took the second option, so we can add more build agents in the future without having to remember an “oh yeah” step for each machine.
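
If you go the source-control route, the reference in your .csproj ends up looking something like this (the lib folder is just my convention – point the HintPath at wherever you committed the DLL):

    <!-- Reference the copy of System.Web.Mvc checked into source control -->
    <Reference Include="System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35">
      <HintPath>..\lib\System.Web.Mvc.dll</HintPath>
      <!-- Copy Local, so the DLL lands in \bin on every machine, build agents included -->
      <Private>True</Private>
    </Reference>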

Seriously, what the hell, Microsoft? Quit making shit look like it’s part of the f’in framework when it isn’t.

Exceptions? Errors? In my app_offline.htm? (Gotchas and tips!)

So, app_offline.htm isn’t as foolproof as we’d like. But first, to catch up a few people who might be lost already…

What is it? How does it work?

The general idea is that you make a plain-jane HTML file, name it app_offline.htm, and drop it in the root directory of your ASP.NET website in IIS. IIS automagically knows that if it sees this file, it should serve it for ALL requests. That way, when you’re doing maintenance, visitors see a nice pretty message instead of an application in flux. Great idea, right? Yes, it is! Too bad it doesn’t work quite that straightforwardly.

My current understanding (as in, probably wrong) is that this file is only checked for when serving up ASP.NET content. Which I guess means that IIS isn’t actually doing it at all – some ASP.NET framework gobbledegook is, at some point. This has some pretty serious consequences, the two I’ve run into being that it won’t happen for static content requests, and that ASP.NET has to actually be able to initialize before this file can be served.

Why I’m running into it

I’ve been working on automated build and deployment scripts for some website applications at work. I have them plugged into our CI server, and we do a lot of QA and UA on the destination environments (note: UA at this stage is done by our internal Product Owner, not an outside client). Since those environments will obviously get messed with whenever we check in code changes, showing a nice “under maintenance” message to let people know what’s up is a good idea. They know how these servers work, but at least this way they don’t think they’ve uncovered some bug that caused the site to start throwing errors.

We cranked out a quick app_offline.htm, and I worked it into the deployment script, which ran in these steps:

  1. Copy over the app_offline.htm
  2. Delete everything else in the virtual directory (this is done so old pages/usercontrols/css/images/etc. don’t stay around)
  3. Copy over the new build
  4. Delete app_offline.htm

However, when trying to access the site during this, most of the time you’d just get exceptions. WOW THAT’S WORKING GREAT!

How to avoid issues

First off – since app_offline.htm is handled by the ASP.NET framework, the application still has to be able to complete initialization. What this means is that if you have a global.asax defined in your web application (and you probably do), it still has to be able to load properly. And if you’re deleting your /bin/ folder before global.asax, then global.asax can’t finish loading. BAM! Exceptions. The same applies when copying items back over. So, I modified my scripts to delete global.asax FIRST, then everything else. When copying the new build over, I copy /bin/ first, then everything else.

But there’s more!

You’re probably also using a default document, and Default.aspx is probably the one actually in your project. If your site’s home page isn’t reached through a specific page name (like 99% of websites), users could be sitting on http://mysite.com/ and get a “Directory Listing Denied” error. Why? Because you’ve deleted Default.aspx, so the ASP.NET processing never happens, and the user gets the directory listing error instead. You could add app_offline.htm to the default document rule, but that’s kind of crappy. The real solution here is to not delete Default.aspx at all. Instead, just copy over it.

My final deployment process

So, now it looks like this:

  • Copy over the app_offline.htm
  • Delete global.asax
  • Delete everything but Default.aspx and web.config (I know those files will ALWAYS exist – so I’m still cleaning out old garbage, but not getting nasty errors)
  • Copy over new /bin/ contents
  • Copy over the rest of the new build
  • Delete app_offline.htm

And now I’ve greatly reduced the chance of a user getting nasty errors. Why only greatly reduced? Because if they were trying to get to static content, they’ll STILL get a 404 (if the old file hasn’t been copied over yet), since the ASP.NET framework isn’t invoked for static requests. The solution for that? IIS7 and the integrated pipeline. You’ll want to add this to your web.config:

    <?xml version="1.0"?>
    <configuration>
      <system.webServer>
        <modules runAllManagedModulesForAllRequests="true" />
      </system.webServer>
    </configuration>

Which is just yet another reason to only copy over your web.config, and never delete it. This setting tells IIS that ANY request in this virtual directory should fire the ASP.NET pipeline. However, our servers are IIS6, so we can’t make use of this feature. It’s low risk for us, so it’s not a big deal, but for those of you on IIS7 this is a nice thing to add – especially if you’re using MVC, since none of your URLs directly point to an .aspx page anyway.
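
For what it’s worth, here’s a rough sketch of the full deploy order as an MSBuild target – $(SiteRoot) and $(BuildOutput) are placeholder properties, and a real script would want the error handling this leaves out:

    <!-- Hypothetical MSBuild target mirroring the deploy order above.
         $(SiteRoot) = the IIS virtual directory, $(BuildOutput) = the new build. -->
    <Target Name="DeploySite">
      <Copy SourceFiles="$(BuildOutput)\app_offline.htm" DestinationFolder="$(SiteRoot)" />
      <Delete Files="$(SiteRoot)\global.asax" />
      <ItemGroup>
        <OldFiles Include="$(SiteRoot)\**\*"
                  Exclude="$(SiteRoot)\Default.aspx;$(SiteRoot)\web.config;$(SiteRoot)\app_offline.htm" />
        <NewBin Include="$(BuildOutput)\bin\**\*" />
        <NewRest Include="$(BuildOutput)\**\*" Exclude="$(BuildOutput)\bin\**\*;$(BuildOutput)\app_offline.htm" />
      </ItemGroup>
      <Delete Files="@(OldFiles)" />
      <Copy SourceFiles="@(NewBin)" DestinationFiles="@(NewBin->'$(SiteRoot)\bin\%(RecursiveDir)%(Filename)%(Extension)')" />
      <Copy SourceFiles="@(NewRest)" DestinationFiles="@(NewRest->'$(SiteRoot)\%(RecursiveDir)%(Filename)%(Extension)')" />
      <Delete Files="$(SiteRoot)\app_offline.htm" />
    </Target>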

Hope you find this information useful!

Accessing custom data interfaces in ASP.NET WebForms

Yeah, this is a pain.

So, you want to use “out of the box” controls like a ListView or GridView. You have a custom interface for getting your data – let’s say something like:

    public interface IGetData
    {
        IEnumerable<MyData> GetAll(int pageSize, int page, string sort);
    }

    public partial class MyPage : Page
    {
        public IGetData MyData { get; set; }
    }

That’s a pretty basic interface. You have a large database, and you need very specific control over how paging and sorting are done (no returning everything and sorting in memory, especially). Unfortunately, if you want to display that information with the standard WebForm controls, you will run into MANY issues. All your sort/paging commands will be jacked up, and attempts at manually setting what page number you’re on will leave you smashing your head into a wall. A wall with spikes (hooray for read-only properties…).

(To be honest here, I tackled this nonsense a few months back, and don’t remember the specifics of the trouble I ran into, so feel free to try it anyway!)

The way ASP.NET wants you to do it is to have your control (let’s say a ListView) point to an ObjectDataSource. After fighting these things, I can say that ObjectDataSources are complete garbage. But here we are anyway. Here’s what you’ll need to do to get it to work correctly.

You’ll create your ListView and your ObjectDataSource, and set your ListView’s DataSourceID to the name of your ObjectDataSource. Then you’ll set various properties on the datasource telling it what functions to call for certain events. The overall way the ObjectDataSource works? It creates its own f’in instance of your control/page, and calls those functions on that. Yes. You heard me correctly. That means any controls/data on your actual page are USELESS – since the ObjectDataSource has its own instance, those values won’t exist. Awesome, right? So how do we set this up anyway? Like so…

On your ObjectDataSource, you’ll want to set the following properties (there’s a markup sketch after the list):

  • MaximumRowsParameterName, SortParameterName, StartRowIndexParameterName – These are optional. If set, when the ObjectDataSource uses reflection to search for its select function, it’ll look for parameters with those names.
  • SelectMethod – The name of a function in your codebehind whose parameters match whatever you’ve set (in markup or codebehind) as your SelectParameters, plus the parameters listed above.
  • SelectCountMethod – A function with the same parameters as SelectMethod, minus the paging and sorting parameters.
  • TypeName – The fully qualified type name of your page or control.
  • EnablePaging – Set to true if paging is enabled. Obvious, at least.
  • OnObjectCreated – This is the bastard one. It’s an event you’ll want to wire into (more on it below).
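
Wired up, the markup ends up looking something like this – dsItems, lvItems, and MyApp.MyPage are placeholder names, and the parameter names match the functions shown below:

    <asp:ListView ID="lvItems" runat="server" DataSourceID="dsItems">
        <%-- your layout/item templates here --%>
    </asp:ListView>

    <asp:ObjectDataSource ID="dsItems" runat="server"
        TypeName="MyApp.MyPage"
        EnablePaging="true"
        SelectMethod="dsItems_Select"
        SelectCountMethod="dsItems_SelectCount"
        MaximumRowsParameterName="pageSize"
        StartRowIndexParameterName="startRow"
        SortParameterName="sortColumn"
        OnObjectCreated="dsItems_ObjectCreated" />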

So, we’ve set up our ListView, and set up our ObjectDataSource with a bunch of functions that don’t do anything yet. The first thing you’ll notice is that Select and SelectCount are separate functions – and we don’t want to actually have to search our database twice. The good news is that Select is called first, so I made a tiny custom object to hold the records we’re going to show, as well as the total count, and modified my interface to return this object instead:

    public class DataControlResults<T>
    {
        public IEnumerable<T> Data { get; set; }
        public int TotalRecords { get; set; }
    }
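
And the interface from earlier becomes:

    public interface IGetData
    {
        DataControlResults<MyData> GetAll(int pageSize, int page, string sort);
    }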

Now we can make a private field on our page (let’s define it as private int _totalRecords), and when our Select method is called, we’ll store the total there:

    public List<MyData> dsItems_Select(int pageSize, int startRow, string sortColumn)
    {
        // The ObjectDataSource hands us a starting row; our interface wants a page number
        var results = MyData.GetAll(pageSize, startRow / pageSize + 1, sortColumn);

        var listData = new List<MyData>();
        listData.AddRange(results.Data);

        // Stash the total so SelectCount can answer without a second database hit
        _totalRecords = results.TotalRecords;

        return listData;
    }

Again, unfortunately, we have to do a little wonkiness with the paging. I prefer page and page size, but the data source hands us starting row and page size. Oh well.

Now, our SelectCount just returns _totalRecords – easy enough.
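
For completeness, a minimal sketch (the name matches the SelectCountMethod set above):

    public int dsItems_SelectCount()
    {
        // Select runs first, so _totalRecords has already been populated
        return _totalRecords;
    }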

However, if you, like most people, are using a DI/IoC framework, your concrete implementation of that interface will be null! Most frameworks, in the ASP.NET world, instantiate those properties in your global.asax, or perhaps in a custom Page base class, etc. But because the ObjectDataSource creates its copy of your page/control through Activator.CreateInstance, all of that is gone! SWEET! (That’s sarcasm, by the way.) So, that’s where the OnObjectCreated event is used.

    protected void dsItems_ObjectCreated(object sender, ObjectDataSourceEventArgs e)
    {
        // Hand the ObjectDataSource's fresh instance the dependency our real page already has
        var newObj = e.ObjectInstance as MyPage;

        newObj.MyData = MyData;
    }

There ya go – now it’ll actually have a facade to use. Sigh.

At the end of the day, it works simply enough, but I originally just wanted to throw up a ListView, capture the paging/sorting events from it (triggered by Commands from the objects displayed in the ListView), manually call my facade with the proper paging/sorting/search parameters as needed, and then just bind the results to the ListView.DataSource property and be set. But that really just isn’t how they intend for you to use these controls. They really want you to use some DataSource control, which unfortunately adds nothing but more noise to your markup and code-behind. If you want Updates and such as well, it’s the same pattern again.
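
For contrast, the manual approach I wanted would have looked roughly like this – PagePropertiesChanging is the ListView’s paging event, and _currentSort is a hypothetical field you’d maintain from its sorting events. This is the version the framework fights you on:

    protected void lvItems_PagePropertiesChanging(object sender, PagePropertiesChangingEventArgs e)
    {
        // Fetch just the requested page from the facade and bind it directly
        var results = MyData.GetAll(e.MaximumRows, e.StartRowIndex / e.MaximumRows + 1, _currentSort);
        lvItems.DataSource = results.Data;
        lvItems.DataBind();
    }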

Hopefully this approach can help someone else who wants control over their data access in their WebForms project! Thank god our next project is MVC!

Story points only for things that provide business value

This was an idea that recently came up during training. One of the large values of using Scrum is the visibility it provides to the rest of the business organization – clients, product owners, managers, etc. It plainly lays out what the team can handle, and identifies the issues that are preventing features from getting done.

As a team gets its velocity defined, it may be beneficial not to assign story points to stories related to working on legacy systems, production issues, long-standing (or recurring) bugs/defects, or things that generally derive from some form of “technical debt”. Many managers (or product owners, etc.), especially the more teams they have to track, usually won’t take the time to really investigate why a team is having trouble; making the issues more obvious (such as through a low velocity) may help clearly signal that there are problems that need to be addressed. Once the team is given the “green light” to tackle some technical debt, future iterations would see an increased velocity.

On the other hand, you’d probably still need to story-point those “non-business-value” stories, because you still want to track when they’ll be resolved and to be able to include them in releases. Perhaps you’d run multiple iterations in parallel – one for business value, one for everything else – and track velocity independently for each. Of course, that has the overhead of managing two parallel iterations, which might be problematic depending on your Scrum tracking methods.

I think it’s an interesting idea with some merit, but it still has some kinks. I like the principles behind it; I’m just not sure how best to proceed with it.

Does the .NET community need an attitude change?

Maybe it’s just me. Maybe my perceptions are skewed for some reason or another. It’d be nice. And no, this is not an “Evil Rob” post.

It seems to me that the .NET community is becoming more and more of a popularity contest. A person’s ability to “play around” with new technology, and blog/twitter/whatever a lot about it, seems to be more important than providing actual working solutions. I really hope that’s not the case. Developers should be able to get a job based on their ability to provide business value to the client, in whatever form that may be – which is unlikely to be by making small demos or “widgets” of functionality.

It seems that unless you’re working with the latest and greatest technology, any work you’re doing is useless and you’re a subpar developer. Cliques and an elitist mentality are emerging – which is unavoidable – but there’s a chance they’re becoming the prominent (or at least prominent enough) mentality in our community. Instead of being a community focused on sharing knowledge, it’s more of a one-upmanship contest.

The reason all this is coming to a head is that I’d like to blog about the work I’m doing. I found myself thinking “Man, no one cares about this,” and “Eh, it’s WebForms, I don’t want to catch flack for it.” Then it occurred to me that there’s no way I’m the only developer out there still dealing with a legacy WebForms app, who will face the same challenges I have. Maintenance is a large part of what we do, and something we should always plan for. So it’s always good to know how to make those old apps easier to work with, or how to fix that random bug that pops up in software no one has looked at in a year.

By the same token, we definitely need people blogging about and trying out the new tech. I know that for myself, and surely many others, a lot of the enjoyment comes from facing challenges and finding new, efficient, and interesting ways to overcome them. Without checking out the new stuff, we’d be stuck in a rut.

So it all matters. New stuff matters. Old stuff matters. And it’s important that we keep a good balance with everything, so we can all grow together as a community, and not become a circle-jerk of assholes.

I should also take this time to say that this is NOT an attack on any one person (or group of people). I’m not saying that “popular” people in the community are not (or cannot be) competent developers producing fully functional software. This has just been my perception of how things have trended over the past couple of years. I really, really, REALLY hope I’m wrong.

With all this said, I plan on blogging more about what I’m actually doing. I’m also very lazy, so if it doesn’t happen, it’s not because I became shy again.

So please leave me some comments! Am I way off my rocker here? Am I insecure because my mommy didn’t hold me when I was a child and my vagina is full of sand? Or did I hit on a few nuggets of truth?
