Write Tests First–But Not ALL Tests First

I’ve been preaching hard about test-driven development and the importance of writing tests first. I can feel the culture beginning to shift as people slowly start to buy in, but I made an interesting discovery yesterday.

I was invited to a meeting by some developers who wanted me to walk through how I would’ve used test-driven development to write tests and develop a new project that they had recently started. It was essentially a data validation project that retrieved data from the database, checked it against a set of rules, and recorded any violations. We were reviewing a Controller class that was responsible for orchestrating the operation.

“Okay,” I said, “What is this thing supposed to do?”

The developers told me it retrieves records, validates each record, and saves the validation results. So, without knowing anything more than that, I figured there were at least two external dependencies: an IDataAccess responsible for retrieving records and saving the results, and an IValidator that does the data validation/rule-checking. I drew a diagram on the whiteboard to show the relationships between the Controller and these two components.
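In code, the whiteboard diagram might translate to a pair of interfaces like these. To be clear, the member names and types here (Record, ValidationResult, GetRecords, and so on) are just guesses at this point, not the team’s actual code:

using System.Collections.Generic;

// Placeholder types standing in for whatever the real project uses
public class Record { }
public class ValidationResult { }

// Retrieves records and saves validation results
public interface IDataAccess
{
    IList<Record> GetRecords();
    void SaveResults(IList<ValidationResult> results);
}

// Checks a single record against the rule set
public interface IValidator
{
    ValidationResult Validate(Record record);
}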

I explained that since we know the dependencies and how we expect them to be used, we can begin to write tests. We also need to know how our application should react when the dependencies are missing. I started to rattle off some tests:

  • ProcessRecords_NullDataAccess_ThrowsArgumentNullException
  • ProcessRecords_NullValidator_ThrowsArgumentNullException
  • ProcessRecords_DataAccessReturnsNonEmptyList_ValidatesEachRecord
  • Etc.
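Here’s roughly what the first and third of those tests might look like, assuming NUnit and Moq (any test and mocking frameworks would do) and a Controller that takes its two dependencies through its constructor:

using System;
using System.Collections.Generic;
using Moq;
using NUnit.Framework;

[TestFixture]
public class ControllerTests
{
    [Test]
    public void ProcessRecords_NullDataAccess_ThrowsArgumentNullException()
    {
        var validator = new Mock<IValidator>();

        // Red: fails until the Controller's guard clause exists
        Assert.Throws<ArgumentNullException>(
            () => new Controller(null, validator.Object));
    }

    [Test]
    public void ProcessRecords_DataAccessReturnsNonEmptyList_ValidatesEachRecord()
    {
        var dataAccess = new Mock<IDataAccess>();
        dataAccess.Setup(d => d.GetRecords())
                  .Returns(new List<Record> { new Record(), new Record() });
        var validator = new Mock<IValidator>();
        var controller = new Controller(dataAccess.Object, validator.Object);

        controller.ProcessRecords();

        // Every record handed back by the data access layer gets validated
        validator.Verify(v => v.Validate(It.IsAny<Record>()), Times.Exactly(2));
    }
}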

The group was with me, but they quickly shifted focus to what tests were needed for the DataAccess class. And the tests for its dependencies. And everything else.

“Whoa, whoa, WHOA. None of that matters for this. All we care about is this method,” I said.

“Well, yes, but we want to do test-driven development. We thought the goal was to have all of our tests written first so we can go back and implement them.”

That’s when I had my epiphany. When I’m telling people to write tests first, they think I mean write ALL tests first. This is not the case! It would be virtually impossible to think about every code decision and execution path for an entire method/class/application upfront, and I think that’s where there’s been a disconnect. I can look at the finished code and come up with all the tests, but there is no way I could’ve come up with every single test for every single method before ever writing any code.

I went to another small team of developers and asked them if they also thought I meant “all tests first.” They did. It’s disappointing to know that I was sending the wrong message, but I’m glad I have something to address that will hopefully result in more passengers on the TDD train.

When you’re getting started with test-driven development, don’t try to write every single test first. Don’t even try to write as many tests as you can think of. You just want to write tests as you go. What does this method need to do next? Write a failing test, then write the implementation to make it pass. Red, green, refactor, baby! I’m also exchanging my “tests first” mantra for a new one: “test as you go!”
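To make “red, green, refactor” concrete with the Controller example from earlier: once a failing test exists, you write just enough implementation to make it pass. Something like this (again, the names are illustrative, not real project code):

using System;
using System.Linq;

public class Controller
{
    private readonly IDataAccess _dataAccess;
    private readonly IValidator _validator;

    public Controller(IDataAccess dataAccess, IValidator validator)
    {
        // Green for the two null-argument tests
        if (dataAccess == null) throw new ArgumentNullException("dataAccess");
        if (validator == null) throw new ArgumentNullException("validator");
        _dataAccess = dataAccess;
        _validator = validator;
    }

    public void ProcessRecords()
    {
        // Green for the validate-each-record test: validate everything
        // retrieved, then save the results
        var results = _dataAccess.GetRecords()
                                 .Select(r => _validator.Validate(r))
                                 .ToList();
        _dataAccess.SaveResults(results);
    }
}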

The Pitter-Patter of Little Feet in the Office

Our yearly batch of summer interns arrived last week. I always struggle with how to get a large group of fresh faces up to speed quickly. The challenge is coming up with an agenda that keeps everybody busy with educational, engaging activities without overloading them. This year, I did my best to plan a mix of product demos and classroom-style programming lectures. It went more smoothly than in years past, but I’d say it was still only marginally successful.

We have a suite of products that I wanted to expose the group to. So, each day, I scheduled a 1-hour demo with a different application. The interns liked seeing the different products but felt that the demos went too deep and provided too much redundant information. Fair enough. I’m usually hesitant to do too many product demos with new hires because it’s so much information that very little is retained. Instead, it might be better to give the demos at a much higher level. So that’s lesson number one: talk about the purpose of the applications and their primary modules instead of going through each module and all of its sub-modules.

The other half of my “curriculum” revolved around our development tools, processes, and practices. I covered a different topic each day: Monday was an overview of tools (Visual Studio, SQL Server Management Studio, etc.); Tuesday we talked about object-oriented programming; Wednesday was an introduction to unit testing; Thursday was test-driven development and unit test mocks; and Friday was agile development. I wasn’t sure how high- or low-level to go with a lot of these because I didn’t know how much exposure the interns had to any of these topics.

The tools overview was probably the least useful session. I know that most colleges aren’t working with .NET and Visual Studio, so I wanted to go through the IDE and cover how to do basic tasks as well as highlight some of the less obvious features that I like. The problem here is that the basic functionality is self-explanatory and the advanced features can’t be appreciated until you’ve used the software for a bit. We use SQL Server, too, so I wanted to breeze through SQL Server Management Studio. I had the same problem there–people who knew what I was talking about already knew what I was showing them, and people who didn’t know SQL were nodding along but probably not processing. So at the end of the day, that was somewhat of a bust.

The other sessions went better. When I covered object-oriented programming, I just tried to stress SOLID design principles. I also discussed the importance of usability over reusability and quickly touched on code smells. I think the message was clear: SOLID + low odor = good. The introduction to unit testing was good, too. I was surprised to see that more than half of our interns had done unit testing as part of their school work. I’m glad to see this sneaking into college courses. The test-driven development session that followed also went well, and it turned into a demonstration of mocks. None of the interns had seen mocks before, and I think they were genuinely excited by what they saw. The final session of the week was about agile. I shared my thoughts on what makes a good story and how the stories drive work from sprint to sprint. The agile stuff went okay–lots of nodding–but the conversation turned into more of a Q&A for me. The interns were interested in how I learned all that I know, and I was happy to share my experiences and insights that have helped me enjoy a successful career so far.

Overall, I’d say it was a pretty good week. There was still a bit too much time where the interns weren’t sure what they should be working on, but it was a more organized first week than we’ve had in the past. I’m satisfied with how it went, but I have plenty of ideas on how to make it better next year.

I’m interested to know what the first week for interns looks like at other companies. What sorts of activities do you plan? How do you introduce them to your product(s)? What do you do to get them up to speed and productive? And what do you do to keep it fun?

Guide Me, O Phone Gods

It’s been about a year and a half since I switched from Windows Phone 7 to Android. I was happy with Windows Phone, but I felt like I was missing out on a big part of the smartphone experience: the apps. WP7 was so new that there weren’t a lot of apps. The biggest and most popular apps generally came out for iOS first, followed by Android, and then, sometimes, they’d make their way into the Windows Phone store. I switched to Android, and I felt like I was joining the rest of the world in terms of apps.

In addition to the apps, it was the ability to “unlock” features like mobile hotspot by installing custom ROMs that drew me to Android. The free mobile hotspot is the main reason I’m considering sticking with Android, too. I know that other carriers give you free mobile hotspot with a metered data plan, but I’m sticking with Sprint’s unlimited data for the foreseeable future.

After making my switch, I settled on a very stable, very good Gingerbread ROM and ran it for over a year. It started to feel stale, so I upgraded to Jelly Bean. I love the updated look and feel of JB, but I’ve had unreliable GPS, poor battery life, and other assorted problems as I’ve hopped from ROM to ROM in search of stability. It’s a tough spot to be in. On one hand, I’m free to upgrade as quickly and frequently as I like. On the other hand, there are always defects, and the quality is ultimately at the mercy of the development community for my specific phone. My phone’s not getting any younger, either, so the community that I depend on is shrinking each day. Getting back to a stock ROM isn’t an option. The phone–a Galaxy SII–is too old, so there won’t be any updates coming from Sprint, and I can’t go back to Gingerbread or even Ice Cream Sandwich after getting a taste of Jelly Bean. And there’s no way I’m going to exchange my mobile hotspot for a bunch of Sprint bloat.

Windows Phone and iPhone are looking like better and better options. I’ve been really happy with my Surface, and I liked my Windows Phone 7. But will I again be dissatisfied with the number of apps available to me? My wife has an iPhone, and it always seems to “just work.” There aren’t a lot of people I know who don’t like their iPhones, but what if iPhone has peaked? Is joining in the post-Jobs era a bad move?

My friend who originally convinced me to move to Android tells me that I just need a new phone, and maybe that’s the case. And, to his credit, I’d be pretty happy if everything always worked on my Jelly Bean phone. If I stick with Android, I’ll probably keep it stock–I’m just not interested in keeping up with custom ROMs and the defects that come with them. I’m worried that I’ll be happy out of the gate but grow frustrated with the lack of updates over time.

I’ve still got a few more months before I’m eligible for a new phone, so I have time to sort it out. I’m confused, vulnerable, and directionless. Maybe I’ll just get a BlackBerry.

Wireless Connection Lost

One of the drawbacks of Surface RT is that you can’t install applications that aren’t in the Windows App Store. I primarily use my Surface at home and work, though, where I always have access to other PCs. So, when I need access to one of those non-Windows-Store apps, I just remote into a different workstation.

This technique led me to a rather frustrating issue when connecting to my laptop from my Surface at work over Remote Desktop. I usually keep the wireless adapter on my laptop enabled, which I don’t think is unusual or wrong. I also keep my laptop docked at work, meaning I have two active network connections. The problem was that my wireless connection would be lost upon connecting to my laptop using the laptop’s name (i.e., the host name resolved to the wireless adapter’s IP). This would typically result in a frozen Welcome screen, and I would be unable to reconnect.

It wasn’t a connectivity, firewall, or DNS issue, because Remote Desktop would resolve and connect right away. But then I’d be stuck, unable to reconnect except through the wired adapter’s IP address, which isn’t a reliable workaround since that address is dynamic. I thought I had fixed this previously by connecting through the wired adapter’s IP to re-enable the wireless connection, but that appears to be only a temporary solution.

So, to recap the issue:

  • If I connected using the wired adapter’s IP address, I would have no problem.
  • If I connected using the laptop’s name (i.e., the wireless adapter’s IP address), the wireless connection would go away.
  • While connected through the wired adapter, I could reconnect the wireless connection, disconnect from the wired connection, reconnect through the wireless adapter, and everything would work.

Today, I think I found a good long-term solution. The secret for me was to save the credentials for the wireless connection. To do this, click the wireless connection from the Network and Sharing Center. When the properties window appears, click Advanced Settings on the Security tab. Then click Save Credentials in the advanced settings window.

[Screenshot: Wireless Network Properties > Security > Advanced settings > Save Credentials]

Once I did that and saved my credentials, I no longer had issues connecting to Remote Desktop through the wireless adapter. Hooray!

Don’t Test Your Own Work?

I’ve been reading several discussions and articles on the topic of whether or not developers should test their own code, and I’m finding that the general consensus is, “No.” (Wait, it’s too early to stop reading! You must test!)

When talking about testing in this context, I’m referring to functional testing, not unit testing. There is absolute agreement in the development community that developers should write unit tests for the code they produce, and there is general agreement that functional testing by the authoring developer provides value, too. The argument that most of these discussions make against developers testing their own code is that the developer shouldn’t be responsible for putting the final stamp of approval on their output before it’s delivered to customers. This is very much in line with my personal belief that one-developer projects are destined for failure, a problem that has been particularly prevalent on my development team.

I thought this was a decent analogy (taken from here):

Writing code is a bit like map-making.

You point your developers in the direction of some uncharted wasteland, supply them with coffee, and let them go hacking through the undergrowth. If you’re lucky, they’ll pop back up in a few weeks time, and they’ll have brought geometric exactitude to Terra Incognita. If you’re unlucky, they’ll go native and it’ll end up with heads-on-spikes and tears-before-bedtime.

Anyway: you want to test your new maps. The WORST people to test the maps are the people who wrote them. They know, without thinking about it, that you have to veer West when you reach The Swamp of Unicode Misery or you’ll drown. They know that the distances are slightly unreliable because they contracted malaria near the Caves of Ui and the resulting fever made the cartography kinda hazy.

In other words, when you develop a solution, you know how it was written to work, and you’ll have a tendency to test it that way. You know what the configuration settings are supposed to be, and you know when buttons are supposed to be pressed and when knobs should be turned. You are the one person in the entire universe for whom your solution is most likely to work.

So that’s all well and good, but what can you do about it? It’s simple: get somebody else involved. Have a peer test your solution, or demo the solution to others. These are great ways to find functional problems that you may not have considered. None of us is able to produce a perfect solution 100% of the time. It’s impossible to avoid every assumption and mistake, but getting others involved is a terrific way to catch the quality issues and oversights that arise from them.

Please feel free to comment on the subject. I’d love to hear what you think about this, particularly if you disagree.

An Unfortunate Tale of Data Lost but Mostly Recovered: Best Practices for Working on Production Databases

Last Friday, a co-worker came up to me and asked if there was a way to roll back an update because they had accidentally updated all the records in a table in a customer’s live database.

Blurgh.

After consulting our DBA, we decided to restore a backup and update the values in the live database from the backup. But guess what? The customer’s maintenance plan was disabled and backups hadn’t been created in over a week.

Blurgh.

Well, at least we could still update the records using the week-old database. Before we did that, we figured we should probably create a backup of the database, though. So, I executed the backup maintenance plan, and it completed. With a “good” backup saved (successfully created but containing the erroneous update), we set out to restore the week-old backup, but it was gone. The maintenance plan was configured to delete backups more than one week old.

Blurgh!

No problem, our DBA has a tool that can extract data from the transaction logs to create an undo query. We just need the transaction logs, but–of course–those were also lost with the full backup.

BLURGH!

Luckily, the field that was lost was a timestamp, and we were able to reasonably reconstruct the values based on timestamps in related tables. That was our final option, and in the end it worked out okay. The field was of low importance to the customer, and they were more or less indifferent about the potential loss of data. If this had been a more important field, though, this could have been a catastrophic sequence of events. The worst part is that this entire data scare was simply the result of unawareness and recklessness and could have been completely avoided by employing safer practices. And so, with that, I present to you my best practices for working with production databases.

Best Practices for Working on Production Databases

Certain data within a customer’s production database can be critical to their day-to-day operations, and there are few things less pleasant than telling them that you accidentally changed some of their data and are unable to get it back. Knowing how to work safely in a production database is the first step to avoiding finding yourself in that position. Mistakes do happen, though, so it’s a good idea to be prepared with multiple recovery options, should one of your precautions fail. Here is a list of best practices that you can employ to help ensure no critical data is lost while working with a production database.

Work in a Test Environment

The safest way to avoid production database mishaps is to not work with the customer’s production database. This may not be an option for what you’re trying to accomplish, but it is clearly the safest choice when viable.

Make Sure You Have a Database Backup

Check the customer’s maintenance plan history to ensure that an up-to-date backup exists. This is your disaster recovery plan. If you find that the backup job has either been failing or not running at all, do not update the customer’s database. Work with the customer to create a successful backup before proceeding.
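If you’d rather verify with a query than click through the history dialogs, you can check the backup history that SQL Server records in msdb. A minimal sketch, where ‘YourDatabase’ is a placeholder name:

-- Most recent completed full backup for the database ('D' = full backup)
SELECT TOP 1 backup_finish_date
FROM msdb.dbo.backupset
WHERE database_name = 'YourDatabase'
  AND type = 'D'
ORDER BY backup_finish_date DESC

If the date that comes back is older than expected, or nothing comes back at all, stop and sort out the backups first.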

Back Up the Table(s) You’re Working With

In addition to the full database backup, back up the individual tables you’re working with by using a simple SELECT-INTO statement. Append “_Backup” to the table name along with a date stamp to create a unique and descriptive name for your backup.

SELECT * INTO SomeTable_BackupYYYYMMDD FROM SomeTable

This backup will not have the same triggers and indexes, but it will provide you with the data you need in the event that records are unintentionally modified.
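If records do get mangled, this backup table makes recovery a simple join. For example, restoring a single column (Id and SomeColumn are illustrative names, not a real schema):

-- Restore SomeColumn from the backup copy, matching rows on the Id key
UPDATE t
SET t.SomeColumn = b.SomeColumn
FROM SomeTable t
JOIN SomeTable_BackupYYYYMMDD b ON t.Id = b.Id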

Be sure to clean up once you’re sure you no longer need the table backup.

DROP TABLE SomeTable_BackupYYYYMMDD

Back Up the Stored Procedure(s) You’re Working With

If you’re working with stored procedures, you can back them up by saving a .sql script. To do this, locate the stored procedure in SQL Server Management Studio’s (SSMS) Object Explorer, right-click it, choose Modify or Script Stored Procedure as > ALTER To > File…, and save the script to the desired location.

You can also get the current stored procedure by using the sp_helptext command. Note that the result of sp_helptext is subject to SSMS’s character limit.

sp_helptext 'someProcedure'

Copy the results to a new query, and save it to your backup location.

Be Aware of Table Triggers

If you are modifying records in a table that has data manipulation language (DML) triggers, there is the potential for the query to get stuck in a suspended state. If the process is then killed while suspended, it could put the transaction in a killed/rollback state for an extended period of time, which could hinder customer operations.

You can easily check for triggers on a table by using the sp_helptrigger command.

sp_helptrigger 'SomeTable'

In cases of multi-row updates on tables with DML triggers, it is best to consult a DBA if you are unsure what the impact will be.

Use Transactions

At the top of your query, start a new transaction by using the BEGIN TRAN command. Execute your queries and verify the results. If you wish to keep the results, complete the transaction by using the COMMIT command. If you wish to discard the results, use the ROLLBACK command.

Here is a suggested workflow for applying changes within a transaction (a full sketch follows the note below):

  1. Begin transaction
  2. Query to establish how many rows should be affected
  3. Query to update/insert/delete
  4. Query to verify results
  5. Rollback transaction
  6. Once results have been verified in the transaction, modify #5 to commit the transaction

Note that tables will be locked once they are modified in a transaction until the transaction is committed or rolled back.
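Here’s the whole workflow as a minimal T-SQL sketch; the table, column, and WHERE clause are placeholders for whatever you’re actually changing:

BEGIN TRAN

-- Establish how many rows should be affected
SELECT COUNT(*) FROM SomeTable WHERE SomeColumn = 'OldValue'

-- Apply the change; the rows-affected count should match the count above
UPDATE SomeTable
SET SomeColumn = 'NewValue'
WHERE SomeColumn = 'OldValue'

-- Verify the results
SELECT * FROM SomeTable WHERE SomeColumn = 'NewValue'

-- Discard while testing; switch to COMMIT only once the results check out
ROLLBACK

Run it with ROLLBACK until you trust the results, then swap in COMMIT for the final run, exactly as step 6 describes.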