Writing software involves figuring out something in such incredibly precise detail that you can tell a computer how to do it.

Taken from a blog post by Dan Milstein: Coding, Fast and Slow: Developers and the Psychology of Overconfidence

Movie Rentals on Surface RT

I haven’t done much travelling since getting my Surface RT, but this weekend I had a pair of good, long flights. I decided to occupy myself with movie rentals from the Surface/Xbox video app. I can’t really articulate why, but I’ve been hesitant to rent movies to watch on devices in the past. Maybe it’s just because I didn’t have a good device for viewing. Regardless, I decided to give it a shot, and I’m pretty happy with the result.

What I Liked

My actual viewing experience on the plane was awesome. The picture was great, and I had an okay pair of noise-cancelling headphones that did an admirable job of drowning out the sounds around me. (Other co-workers on the flight were complaining about a screaming kid that I was completely unaware of.)

I rented two movies, one for $4.99 and one for $5.49. Five bucks seems like a reasonable price for relatively new releases, so I was happy with that, too. I also liked that I was given a rental period of 5 days or 24/48 hours after viewing began. That allowed me to download the movies to my device well ahead of time so they could be ready and available when I needed them on my flights.

What I Didn’t Like

My only complaints are ones not actually related to the device. When you go to rent a movie through the Xbox video store, you’re presented with 4 options and 4 costs: HD streaming, HD download, SD streaming, and SD download. I’d rather have just two choices (HD and SD) and be given the option to stream or download based on my situational needs, which may change. For example, maybe I’d like to re-watch, via streaming, a previously downloaded movie that’s since been deleted. Regardless, I needed one of the download options since I was planning on watching from a plane, and I opted for HD. At some point, I’ll probably try an SD download just to see what the quality difference looks like. I suspect I’d be happy with SD but didn’t want to regret not spending the extra $1.

I have the 32 GB Surface with approximately 10 GB of free space available. Downloading an HD rental for offline viewing takes up about 7 GB, so I can really only have one movie at a time. So HD vs. SD is more about the download time and disk space than saving money. I should probably just pick up a MicroSD card, though.

Support the Fight Against Epilepsy!


I will be walking in the 2013 Summer Stroll for Epilepsy this May in memory of my niece, Kearra, who we lost earlier this year. This is the first time I’ve participated in a fundraising event like this, and I have a very modest personal goal that I’m trying to meet. Please consider donating to support this wonderful event.

Click here to visit my personal donation page. Any donations, big or small, will be greatly appreciated.

Read more about the event here.

Read more about the Epilepsy Foundation of Michigan here.

Read more about epilepsy here.
Six Ways to Parse and Reformat Using Regular Expressions

The other day, I was consulted by a colleague on a regular expression. For those of you that know me, this is one of my favorite consultations, so I was thrilled to help him. He was doing a simple parse-and-reformat. It warmed my insides to know that he identified this as a perfect regular expression scenario and implemented it that way. It was a functional solution, but I felt that it could be simplified and made more maintainable.

I’ll venture to say that, for a developer who’s not familiar with regular expressions (You call yourself a developer..!?), the most straightforward way to do a regular expression parse-and-reformat is to create a Match object and reformat its groups.

1. Using a Match object

var date = "4/18/2013";
var regex = new Regex(@"^(\d+)/(\d+)/(\d+)$");

var match = regex.Match(date);
var result = string.Format("{0}-{1}-{2}", 
	match.Groups[3], 
	match.Groups[1], 
	match.Groups[2]);

Console.WriteLine(result);

You can accomplish the same task without creating a Match object by using the Replace method. There is a version that accepts a MatchEvaluator–which can be a lambda expression–so you can basically take the previous solution and plug it in.

2. Using a MatchEvaluator

var date = "4/18/2013";
var regex = new Regex(@"^(\d+)/(\d+)/(\d+)$");

var result = regex.Replace(date, 
	m => string.Format("{0}-{1}-{2}", 
		m.Groups[3], 
		m.Groups[1], 
		m.Groups[2]));

Console.WriteLine(result);

That’s a little bit better, but it’s still a little verbose. There’s another overload of the Replace method that accepts a replacement string. This allows you to skip the Match object altogether, and it results in a nice, tidy solution.

3. Using a replacement string

var date = "4/18/2013";
var regex = new Regex(@"^(\d+)/(\d+)/(\d+)$");

var result = regex.Replace(date, "${3}-${1}-${2}");

Console.WriteLine(result);

I have two problems with all three of these solutions, though. First, they use hard-coded indexes to access the capture groups. If another developer comes along and modifies the regular expression by adding another capture group, it could unintentionally affect the reformatting logic. The second issue I have is that it’s hard to understand the intent of the code. I have to read and process the regular expression and its capture groups in order to determine what the code is trying to do. These two issues add up to poor maintainability.

Don’t worry, though. Regular expressions have a built-in mechanism for naming capture groups. By modifying the regular expression, you can now reference the capture groups by name instead of index. It makes the regular expression itself a little noisier, but the rest of the code becomes much more readable and maintainable. Way better!

4. Using a Match object with named capture groups

var date = "4/18/2013";
var regex = new Regex(
	@"^(?<month>\d+)/(?<day>\d+)/(?<year>\d+)$");

var match = regex.Match(date);
var result = string.Format("{0}-{1}-{2}", 
	match.Groups["year"], 
	match.Groups["month"], 
	match.Groups["day"]);

Console.WriteLine(result);

5. Using a MatchEvaluator with named capture groups

var date = "4/18/2013";
var regex = new Regex(
	@"^(?<month>\d+)/(?<day>\d+)/(?<year>\d+)$");

var result = regex.Replace(date, 
	m => string.Format("{0}-{1}-{2}", 
		m.Groups["year"], 
		m.Groups["month"], 
		m.Groups["day"]));

Console.WriteLine(result);

6. Using a replacement string with named capture groups

var date = "4/18/2013";
var regex = new Regex(
	@"^(?<month>\d+)/(?<day>\d+)/(?<year>\d+)$");

var result = regex.Replace(date, "${year}-${month}-${day}");

Console.WriteLine(result);

TDD Helps You Write Software That Makes Sense

This week, I was working with a co-worker on a project that has very few automated tests. There’s a SpecFlow project that does some integration testing, but it’s really clunky for fine-tuning and bug fixing. The problem with this project is that it’s a classic example of software built without a proper design and without any consideration for how it would be tested. The result is a huge mess of code with no intuitive structure.

Think About How You’d Expect the Software to Work

One of the things I’m very good at is writing software that works the way I expect it to. If I have some settings that are stored in an XML file, I think to myself, “iTunes wouldn’t require me to edit an XML file to change these settings. They’d give me a way to configure it through the UI.” And so, I build a way to configure my settings through the UI because that’s what I would expect a good application to do.

I apply the same logic to objects that I create in class libraries. If there’s a piece of functionality that the class needs to provide, I provide a simple way to access it. I don’t expose a bunch of methods that need to be called in a particular order when it’s not necessary.

I wanted to simplify the messy project described earlier to make testing easier and more efficient. As I was thinking about how to accomplish this, I found myself asking the question, “How would I expect this to work?” And then, I would answer myself, “Well, I want to pass in <foo> and get back <bar>.” With that thought, a light came on: that’s a test–a test that should have been written before anything else was coded.

Focus On What You’re Trying to Accomplish

If the original developer had taken a minute to think about how the software should work and written tests accordingly, this project would be in much better shape. In its current state, it’s mostly functional, but there is no way to hammer out the last few bugs other than by performing manual, end-to-end testing. As items are fixed, there is nothing in place to ensure nothing else was broken. There is also no way to ensure that fixing future issues will not re-break the item that was just fixed.

A lot of developers struggle with writing tests first, and I think it’s because we learn to write tests by writing code first. “I can’t write the test because the methods don’t exist.” You’re thinking too implementation-y about the test. You’re writing a method. It should do one thing. What is that thing, and how can you test it? That’s the test you write. “But the method does way more than that!” Well, it’s probably a bad method. Whoever wrote it should have taken a minute upfront to think about what they were trying to accomplish with that method before creating a huge, unmanageable mess!

Make It Intuitive

I like “clean” and “simple” as characteristics of good software, but above all else it needs to be intuitive. It’s not intuitive to browse the hard disk to find an XML configuration file to change configuration settings. It’s (generally) not intuitive to instantiate a class and call five different methods in order to accomplish a single task. The simplest thing you can do when creating a new class is to list out what you need from the class. Don’t worry about all the methods you’ll need to implement that functionality, just focus on the functionality.

Let’s say you have a requirement that the class needs to do X. So, how about creating a method called DoX? Now, you’re ready to write a test: DoX_DoesX. It’s easy to get straight to the meat for testing purposes, and you’ve exposed a clean, intuitive interface. What does DoX do? It does X.
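That naming pattern can be sketched as follows. Widget, DoX, and the trivial behavior I chose for X are all hypothetical, and a plain exception stands in for a real test framework’s assertion:

```csharp
using System;

// Hypothetical class: its single requirement is X.
// Here, X is trivially "reverse the input" so the test is concrete.
public class Widget
{
    public string DoX(string input)
    {
        char[] chars = input.ToCharArray();
        Array.Reverse(chars);
        return new string(chars);
    }
}

public static class WidgetTests
{
    // The test that can be written before DoX is implemented:
    // its name states the requirement, nothing more.
    public static void DoX_DoesX()
    {
        var widget = new Widget();
        if (widget.DoX("abc") != "cba")
            throw new Exception("DoX did not do X");
    }

    public static void Main()
    {
        DoX_DoesX();
        Console.WriteLine("DoX_DoesX passed");
    }
}
```

The point isn’t the reversal logic; it’s that the test could have been written first, straight from the requirement.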

As you build out functionality, continue to take time to think about what the object you’re creating should do. Maybe DoX needs some settings from the registry, but reading settings from the registry is not part of X. Don’t just shrug and put a bunch of registry stuff in your method! That adds unnecessary overhead for testing and, at the end of the day, it has nothing to do with the functionality that you’re interested in achieving with that method. Instead, create an ISettingsProvider interface and implement a RegistrySettingsProvider class to be used by your object, or simply pass settings retrieved elsewhere in the application into the object’s constructor.
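Here’s a rough sketch of that seam. The names beyond ISettingsProvider and RegistrySettingsProvider are hypothetical, and the registry-backed implementation is stubbed out since only the shape matters:

```csharp
using System;
using System.Collections.Generic;

// The seam: the object depends on this interface, not on the registry.
public interface ISettingsProvider
{
    string GetSetting(string name);
}

// In production this would wrap registry access; details omitted here.
public class RegistrySettingsProvider : ISettingsProvider
{
    public string GetSetting(string name)
    {
        throw new NotImplementedException("reads from the registry");
    }
}

// In tests, a simple in-memory fake keeps the registry out of the picture.
public class InMemorySettingsProvider : ISettingsProvider
{
    private readonly Dictionary<string, string> _settings;

    public InMemorySettingsProvider(Dictionary<string, string> settings)
    {
        _settings = settings;
    }

    public string GetSetting(string name)
    {
        return _settings[name];
    }
}

public class Widget
{
    private readonly ISettingsProvider _settings;

    // The object is handed its settings source; it never asks
    // where the values actually came from.
    public Widget(ISettingsProvider settings)
    {
        _settings = settings;
    }

    public string DoX()
    {
        return "mode=" + _settings.GetSetting("mode");
    }
}
```

A test can now construct Widget with an InMemorySettingsProvider and exercise DoX without ever touching the registry.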

Remember, hard to test usually means hard to use. Writing tests first helps ensure that your code can be consumed easily and intuitively. If you find that your code is difficult to test, there’s probably a way to simplify it that will make it easier to test and improve the design.

Extra McNugget Sauce, Scope Control, and Expectations Management

One of the biggest challenges that my team faces when working with customers on a software development project is controlling scope. These projects begin with a contract followed by a formal requirements document that must be signed by the customer prior to beginning development work. We’re realistic about this process; we don’t expect that every requirement will be correctly identified upfront, and we’re willing to work with the customer throughout the development process to ensure that their needs are met.

Occasionally, we’ll find ourselves working with customers that keep pushing scope, little by little, until the project has been stretched so far beyond the original requirements that we’re not sure how we got there. Kudos to that customer for getting some serious bang for their buck, but at some point we, the development team, need to draw the line. The problem in a lot of these scenarios is that we’ve given and given and given with little or no resistance. We’ve set the expectation that if they ask for something, we’ll give it to them. We can find ourselves with a customer that’s unhappy about being cut off, despite having received a lot more than was originally bargained for.

This scenario has two major flaws. There’s obviously the issue of scope control, but expectations management for the customer is equally problematic.

I like to use a McNugget sauce analogy here. If you go to McDonald’s five times and get an extra sauce with your McNuggets for free, you’re happy. But then, on the sixth visit, maybe you get charged for the extra sauce because it’s the restaurant’s policy. This would upset a lot of people. “This is an outrage! I come here every Tuesday, and every time I get an extra sauce. I have never been charged for it before.” Rather than being happy about getting the extra sauce for free the first five times, they’re upset about not getting it for free the sixth time. However, if the McDonald’s employee were to let you know, “Hey, we’re supposed to charge for extra sauce, but I’ll let you have it for free this time,” then you’re less likely to feel like you’re being unjustly charged when you’re eventually asked to pay.

The same philosophy can be applied to our software development projects. When the customer makes that first seemingly innocuous, out-of-scope request, let them know that you’re doing them a little favor. “This request is out of scope, but I can see where it would be valuable. I’ll discuss this with the team to see if we can fit it in.” If you decide to do it, be sure to let them know that you’re making an exception this time. Finally, document that you gave them some “extra sauce” so that when/if you need to push back on a request, you can show them everything they’ve already gotten for “free.”

Getting back to the McNugget sauce analogy, some folks would still probably be upset about being made to pay even when they’ve been notified that they should be charged for the sauce they’re getting for free. “I know it’s not supposed to be free, but this is the first time I’ve ever been asked to pay. Get me the manager!” In response to that, I’d say an equally valid takeaway from this article is, “Do not give away [too much|any] extra sauce for free.”

Change Code Analysis Rule Sets in Visual Studio 2012

Code Analysis is a great way to identify problems with your code and enforce standards. By default, Code Analysis in Visual Studio 2012 uses a rule set called Microsoft Managed Recommended Rules. It’s a set of rules that Microsoft has deemed most important, and they suggest following these rules in all projects. Here’s their description of the rule set:

You can use the Microsoft Managed Recommended Rules rule set to focus on the most critical problems in your managed code, including potential security holes, application crashes, and other important logic and design errors. You should include this rule set in any custom rule set that you create for your projects.

I like this rule set, but it’s not enough for me. I suggest using Microsoft’s Basic Design Guideline Rules rule set. The MSDN description suggests, “You should include this rule set if your project includes library code or if you want to enforce best practices for code that is easy to maintain.” Best practices for code that is easy to maintain? Who doesn’t want that!?

I can tell I’ve sold you on this rule set, and now you’re wondering how to actually use it. Don’t worry, it’s easy!

  • Open the Code Analysis window in Visual Studio (View > Other Windows > Code Analysis)
  • Click the ‘Settings’ button
  • Click the rule set value next to the project(s) that you want to change

That’s it! Now, when you run Code Analysis, it will use the selected rule set.
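If you prefer editing the project file directly, the rule set choice is stored in the CodeAnalysisRuleSet MSBuild property. A sketch of what that looks like (the exact .ruleset file name shipped with your Visual Studio installation may differ):

```xml
<PropertyGroup>
  <!-- Run Code Analysis as part of the build -->
  <RunCodeAnalysis>true</RunCodeAnalysis>
  <!-- Rule set to apply; file name assumed for illustration -->
  <CodeAnalysisRuleSet>BasicDesignGuidelineRules.ruleset</CodeAnalysisRuleSet>
</PropertyGroup>
```

Either route gets you the same result; the Code Analysis window is just a friendlier editor for these properties.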

Tips for Better User Stories

One of the things that my development teams have struggled with for the better part of a decade is story writing. The commonly accepted format for a “good” story is As a <role>, I need <feature> in order to <benefit>. When you’re new to story writing, this format feels forced and awkward. Additionally, as a developer I think it’s really difficult to not think about features in terms of implementation details, and I think that’s been the primary source of countless poorly-written stories over the years. I think I’m finally starting to get it and feel confident about stories that I produce, and I figure that’s probably a good cue to share my experiences. So, here we go!

Commit

Like many things, I think step one is to simply commit to doing it. Accept that it’s going to feel forced and that you will probably fail many times. Very few people get it right on the first whack. You’ll get better over time with practice, so you just need to stick with it. Try to be self-aware and learn from your mistakes. Identify what made a story good or bad once you’ve completed it, and use that information to improve over time.

Feature-Complete Requirements

A good story communicates two things: a feature or business requirement and a benefit. You need to have those two things. If you don’t have a requirement, what are you trying to accomplish? If you don’t have a benefit, why are you doing it? If we look back at the user story template, there’s a third element: role. This is who you’re doing it for. This is important because it lets you know who the target audience is.

As an example of why role is important, consider this story for a client-server application: “When an error occurs, information about the error should be logged.” Seems simple enough, but who’s going to be looking at the log? What information do they need? The implementation details for “As a system administrator…” will be very different from “As an end-user…” Additionally, if you need to verify the correctness or completeness of a solution, the two audiences may have different opinions. Knowing which role a feature is intended for helps you determine which opinion should carry more weight.

The final characteristic of a good story that I’d like to discuss is feature completeness. I’ve worked on many projects where we build all of the business objects, then build all of the data access, and then build all of the translators needed to output data in a third-party format. There are a few problems here. It’s generally too big to do all of that in a single iteration. I think of the process as a workflow that moves through the layers from left to right, and the different layers seem like good seams at which to break it into smaller pieces. So, a lot of developers will break this workflow into stories horizontally: a story for creating business objects, a story for building the data access, and a story for translation. The danger with that approach is that you provide no value until the final layer is complete, and issues with layers developed early in the process may not manifest themselves until much later.

Instead, it is better to break the workflow up vertically. For example, create simplified business objects with minimal details, a smaller data access component to populate the simplified objects, and a translator that outputs the available data. At this point, you’re able to demonstrate the full end-to-end process despite not being close to finished with development. And that takes us to the next point…

Get Feedback

The sooner in the development process a problem is identified, the cheaper it is to fix. Think of the example project above. If I build my entire collection of business objects, all of the required data access, and massive translators before getting feedback from my users, there is a lot of risk. Maybe I’m not providing enough data, or–perhaps worse–maybe I’m providing too much. Maybe there was a misunderstanding about what triggers the workflow and the whole process is flawed. It’s also possible that I got it right, but it’s more likely that I missed the mark by at least a little bit.

Looking at the flip-side, if I were to get a barebones, end-to-end process in place and demo it to the customer, we can immediately verify that the workflow is correct. “Look, user, when I do X, it produces Y. Is that correct? It is? Great. Now let’s talk about what Y should look like.” In the event that I have done it incorrectly, I’ve only lost the time it took me to complete that first feature. I haven’t lost time building out code on the assumption that my earlier assumptions were correct. (Assumptions=risk.)