How My Team Does Agile, 2014 Edition

I’ve spent a lot of time and energy over the past few years trying to get my team doing agile software development in a way that feels good to me. We’ve come a long way, and we’re getting close to where I want us to be.

Where We Started

When I first joined the team, I was really unhappy with our agile practices. We were running two-week sprints. Before a sprint started, we’d have two meetings: pre-planning and sprint planning. In pre-planning, we’d have 12 developers on their laptops and phones as we went through their assignments person by person. Nobody was invested in anything that anybody else was working on, so they didn’t bother to pay attention. Everybody would leave with their individual assignments, come up with the tasks they’d work on, and email them to the team lead before the sprint planning meeting.

Sprint planning was even worse. We would literally watch the team lead copy/paste tasks from emails into a spreadsheet to be inserted into TFS. There’d be no input or feedback from the team on the tasks, and everybody would just sit and wait for their turn to read their tasks as they were being entered into the spreadsheet. It sounds bad, but it got worse. The cell phone use and not-paying-attention led to a ban on cell phones and laptops, so you’d just have to sit there and try not to fall asleep.

Coming out of sprint planning, you’d have a list of tasks that you came up with that nobody paid any attention to. There was no accountability. You could probably submit the same list of tasks two sprints in a row without being questioned. But that’s not even the worst part!

The biggest problem that I saw was what I describe as a “culture of failure.” Nobody was completing their tasks in the sprint, and nobody cared. At the end of the sprint, we’d just close ’em out and make new ones, no questions asked. To this day, I can’t wrap my head around how an entire team of developers can come up with their own tasks and their own estimates, face no scrutiny, and still never complete them all. EVER! (Deep breaths, self… Writing about the past is conjuring some bad juju, and I’m getting angry.)

Where We Are Now

So, yeah. That was where we were, and I’m happy to say that we’ve come a long way. I believe we’re experiencing a lot of success today primarily because of a few key changes. We transformed a large team of INDIVIDUALS into lean execution TEAMS, we shortened our sprints from two weeks to just one, we started to focus on our backlog, and we stopped accepting work into sprints unless we believed it could be completed.

Converting our single large team into three smaller execution teams was a big challenge. We had to look at our developers and identify who might and might not work well together. I think we did a pretty good job with that since it’s been about a year, and we’ve only made one or two “trades” between the teams. To build the team mentality, we assign work to the team instead of to individuals. The teams are responsible for determining how work is divided, and we really don’t care how it gets done as long as it gets done. Each of our three teams operates a little differently, and each of them is more functional than the big glob we had before.

But the small teams weren’t enough. We were still having problems planning enough work to fill a sprint. The result was that halfway through, we’d have a lot of items that were blocked or no longer needed. This was mostly because we were stretching to scrape up enough work to fill the sprint, so a lot of what made it in wasn’t ready. That meant a lot of time spent working on things we didn’t plan for, or possibly not working on anything at all! Additionally, we’d have distractions coming up constantly that couldn’t wait for the next sprint, so that’s more items being pushed out or not worked on. Shortening sprints to one week addressed a lot of those issues. We don’t need as big of a backlog since we only need a week’s worth of work at a time. Distractions are less of a problem because we’re never more than a week away from the next sprint; it’s much easier to tell a customer that you can’t do something for a few days than for a few weeks.

With shorter sprints implemented, we could focus on our backlog and ensure that we had enough work ready to go for each sprint. This was a huge shift. Instead of asking developers what they were working on, we were giving them assignments based on project needs and priorities. If there was any question about whether an item could actually be completed, we’d pull it out of the sprint and either add a task to get it ready or replace it with something else entirely.

So let’s review what we’ve got now: teams that are invested in what their members are working on and short sprints filled with items that can actually be completed. We’re still not completing 100% of our sprint work each week, but we’re having more success than we’ve ever had before.

What Comes Next

The team’s in a good place, but we’ve still got a lot to improve on. We don’t do a great job of story-writing. Our backlog has a lot of “do X” stories that don’t provide much context. Why are we doing that? What else do we need to get where we’re going? Because of this approach, we have a lot of new work that pops up at the end of the sprint as we realize that we now have to “do Y” and “do Z” before we’re done with a certain feature.

So my next focus will be on making sure we write quality stories. Let’s have non-functional stories to create the system functionality needed to complete bigger functional stories. Let’s make sure our stories have valid descriptions and clear completion criteria. Let’s scope stories so we can confidently fit them into a single sprint. Let’s identify the functional stories needed to complete a project so we can have a clear picture of what “done” means before we begin, sharpening our focus on what we’re trying to accomplish while simultaneously building a strong backlog. Yes, the future will be good!

Type Cover 2 Makes Everything Better

I hopped on the Surface RT train when it was first released. I had a hard time choosing between the Touch Cover and Type Cover, but I ultimately ended up going with the Touch Cover. It was getting decent reviews and seemed like a great idea. I didn’t love it right out of the gate, but I didn’t hate it, either. I tried my best to use it and stick with it. “I just need to put in the hours and practice,” I thought to myself. “I’ll like it more as I get better.”

Well, it didn’t really get better for me. I was able to type at a decent speed, but it wasn’t close to what I could do with a normal keyboard. There were a lot of typos, and it was particularly annoying when entering complex passwords with special characters. Did I hit shift for that letter correctly? Guess I’ll find out… Nope.

My number one pet peeve was the lack of F-keys. Or rather, the lack of labels for the F-keys. When I got it, I had to google how to use the F-keys. Not labeling them was a bad decision, and I just don’t get why they did it. The F-keys were there, and you could count them out or use the regular number keys as a guide, but it was more thinking than I should have to do to hit an F-key.

And so I was excited to learn about Microsoft’s release of the Type Cover TWO. What? They made a second-generation Type Cover? What could they possibly have changed? Well, they didn’t change much, but it was enough to convince me to give it a shot. After using it for just one day, I’m thrilled. My typing speed is WAY up, and it feels like I’m using an actual, real keyboard. Further, as I’m typing and editing this article, I’m noticing that using the arrow and navigation keys to jump around is way better, too! Yay for Type Cover!!

As far as what was actually changed, they added backlit keys and labels to the F-keys. Small changes with big impact. I love the feel of this keyboard, I’m happy that I can use it in the dark, and I’m thrilled that they labeled the damn F-keys!

If you were like me and bought a Touch Cover, and you’re only lukewarm about it and on the fence about pulling the trigger on a Type Cover, my advice is to do it. This is a definite game-changer for me. The Touch Cover is a fun idea, but I’m tellin’ you: Type Cover is where it’s at!

Unit Test Sending Email with SmtpClient

I have a workflow activity that sends email (the code for this activity can be found here), and I wanted to write integration tests using SpecFlow. This creates an interesting problem. I don’t want to simply mock everything out, but I also don’t want to require a valid SMTP server and email addresses. I also want the test to pass or fail without having to check an email inbox.

Luckily, the SmtpClient class supports configuration options that cause email messages to be written to files instead of being sent over the network. This is accomplished by adding some simple code to your application configuration file. (Source here.)

<system.net>
    <mailSettings>
        <smtp deliveryMethod="SpecifiedPickupDirectory">
            <specifiedPickupDirectory pickupDirectoryLocation="C:\TempMail" />
        </smtp>
    </mailSettings>
</system.net>
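
Just to illustrate (this snippet isn’t from my activity; the host and addresses are placeholders), with this configuration in place an ordinary send call writes a .eml file to C:\TempMail instead of contacting a server:

using System.Net.Mail;

// the deliveryMethod override means no SMTP connection is made;
// the message is written to C:\TempMail as a .eml file
using (var client = new SmtpClient("localhost"))
{
    client.Send("from@example.com", "to@example.com", "Test subject", "Test body");
}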

This solution is easy and it works, but it creates another problem: I want my test to run automatically on other machines. I don’t want to hardcode a path into the config file because I could run into problems with user permissions or directory structure. I found this blog post that demonstrates how to change the directory programmatically. The only thing I didn’t like about that solution is that it requires the app.config change shown above. I modified the posted solution slightly so that the configuration file section is not needed. Here’s the result:

// requires: using System.Net.Mail; using System.Reflection;
// GetTempPath() is a helper in my test code that creates and
// returns a temporary directory
var path = GetTempPath();

// get the internal mail configuration object via reflection
var bindingFlags = BindingFlags.Static | BindingFlags.NonPublic;
var propertyInfo = typeof(SmtpClient)
    .GetProperty("MailConfiguration", bindingFlags);
var mailConfiguration = propertyInfo.GetValue(null, null);

// update smtp delivery method
bindingFlags = BindingFlags.Instance | BindingFlags.NonPublic;
propertyInfo = mailConfiguration.GetType()
    .GetProperty("Smtp", bindingFlags);
var smtp = propertyInfo.GetValue(mailConfiguration, null);
var fieldInfo = smtp.GetType()
    .GetField("deliveryMethod", bindingFlags);
fieldInfo.SetValue(smtp, SmtpDeliveryMethod.SpecifiedPickupDirectory);

// update pickup directory
propertyInfo = smtp.GetType()
    .GetProperty("SpecifiedPickupDirectory", bindingFlags);
var specifiedPickupDirectory = propertyInfo.GetValue(smtp, null);
fieldInfo = specifiedPickupDirectory.GetType()
    .GetField("pickupDirectoryLocation", bindingFlags);
fieldInfo.SetValue(specifiedPickupDirectory, path);

Using this code, I’m able to change the email delivery method and specify the output path programmatically. In my SpecFlow test, I create a temporary directory, process and verify the email files created by my workflow, and clean up. It works like a charm!
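
For the curious, the verification step might look something like this rough sketch (the step definition, the _tempMailPath field, and the MSTest assertions are illustrative assumptions, not my actual SpecFlow code):

using System.IO;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using TechTalk.SpecFlow;

[Then(@"an email should be sent to ""(.*)""")]
public void ThenAnEmailShouldBeSentTo(string expectedRecipient)
{
    // SpecifiedPickupDirectory writes each message as a .eml file
    var files = Directory.GetFiles(_tempMailPath, "*.eml");
    Assert.AreEqual(1, files.Length);

    // .eml files are plain text, so a simple substring check works
    var contents = File.ReadAllText(files[0]);
    Assert.IsTrue(contents.Contains("To: " + expectedRecipient));
}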

Testing Code Paths vs. Testing Behavior

I have a colleague who’s my equal in terms of unit testing enthusiasm, but we have very different philosophies. He tends to write methods first, then test the hell out of them to ensure that all code paths have been covered and that there are no holes. I tend to code using more of a TDD workflow, writing tests for each behavior that I expect from a method and not worrying about anything else that may or may not be going on.

Both approaches are valid. As we code, we both think about things that could go wrong with our code, and we both account for those things and make sure they’re tested. At the end of the day, we both end up with relatively bug-free solutions that work well. Both methods produce high levels of code coverage, although focusing test writing on code paths will likely result in slightly higher coverage, since the tests are derived directly from the code’s branches.

Yes, there’s a lot that’s similar about these two different approaches, but the differences are very important. The TDD mantra is “red, green, refactor.” The idea is that you write a failing test, add code to make the test pass, and then refactor the solution to clean up and optimize. This workflow is made for behavior-based testing. You expect a certain result from the method being tested. Once it’s producing that result, it shouldn’t stop producing it due to refactoring or optimizations.
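
To make that concrete, here’s a hypothetical behavior-based test written before the code exists (PriceCalculator and its discount rule are invented for the example, not real code from my project):

// written first, this test fails (red) until GetTotal is implemented
// (green); it keeps passing no matter how GetTotal is refactored
[TestMethod]
public void GetTotal_AppliesTenPercentDiscount_ForOrdersOver100()
{
    var calculator = new PriceCalculator();

    var total = calculator.GetTotal(200m);

    // 200 less a 10% discount is 180
    Assert.AreEqual(180m, total);
}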

The same statement can be made for tests written based on code paths: an expected result should continue to be produced after code is optimized. I’m venturing to say that optimizations are less likely to occur with the code-first approach, though. When you write code first, you don’t write tests until you’re done. And, since you’re writing tests based on the “finished” code, it’s less likely that you’ll discover flaws. Refactoring also seems less likely for the same reason. If refactoring does occur–which it should–then there’s a different problem: code paths that were once different may now be the same. You may have unknowingly written duplicate tests! (That’s not to say that the duplicate or redundant tests are bad, but you’ll have spent time writing code that, in the end, didn’t need to be written.)

Every developer I’ve ever met learned to code before they learned to write unit tests. Unit tests are generally written in code, so it’s hard to imagine learning them in any other order. Because we learn these two things in that order, we generally learn to write unit tests by following code paths. If you’re one of those code-first-and-write-tests-later types, I urge you to step out of your comfort zone and start writing behavior-based tests FIRST. You’ll code with purpose and write meaningful tests. You’ll be able to refactor with confidence, knowing that your code’s behavior has been unaffected by your changes. Like any skill, it takes some time to get used to, but I strongly believe you’ll produce higher quality code more efficiently once you become proficient.
