Intro to Unit Testing In Angular With Karma & Jasmine

Ah, unit testing. One of my favorite subjects. I’ve been big on unit tests for what seems like more than a decade now, but I’ve never dipped my toes into the UI unit test pool! It’s been long enough, though. It’s time to learn!

So, I’m taking my first look at writing tests for an Angular application using Karma and Jasmine. I thought this tutorial at scotch.io was a great first read. It provides overviews of what Karma and Jasmine are and walks you through creating an Angular application and adding tests from scratch using the Angular CLI, and that’s basically the foundation I’m working with today.

In this article, I’m going to write some tests as I extend behavior on “an existing product” and also write tests for some untested code. I’ll be using my favorite boilerplate code–the .NET Core CLI Angular template–as my existing product, as it provides a functional application with some existing tests for us to build on.

If you haven’t already, start by creating a new Angular application by running the following command:

dotnet new angular -o my-app

Verify that you can run your new application by running dotnet run in the my-app directory. The Angular application lives in my-app/ClientApp. Since we’re going to be adding to the existing test suite, we should verify that the existing tests run and pass. Run the following from my-app/ClientApp:

npm install
ng test

You should see output indicating that all tests have passed.

Now that we’ve verified that existing tests work, we can start making changes. Let’s assume we want to modify the Counter page to have a decrement button in addition to its increment button, because sometimes we click too many times. Before we start making changes to the Counter component itself, we can describe the desired behavior in the existing counter.component.spec.ts file, which already contains similar logic for the increment functionality. Most of what needs to be written can be copied and adjusted from the Increment test.

Here’s the test I wrote:

it('should decrease the current count by 1 when Decrement is clicked', async(() => {
  const countElement = fixture.nativeElement.querySelector('strong');
  expect(countElement.textContent).toEqual('0');

  const decrementButton = fixture.nativeElement.querySelector('#decrement-button');
  decrementButton.click();
  fixture.detectChanges();
  expect(countElement.textContent).toEqual('-1');
}));

I can run tests again, and guess what–it’ll fail. Cool. So let’s make it work, which is just two steps. First we need to add the button to counter.component.html:

<h1>Counter</h1>

<p>This is a simple example of an Angular component.</p>

<p aria-live="polite">Current count: <strong>{{ currentCount }}</strong></p>

<button class="btn btn-primary" (click)="incrementCounter()">Increment</button>

<button class="btn btn-secondary" (click)="decrementCounter()" id="decrement-button">Decrement</button>

And then we need to add the logic for when it’s clicked in counter.component.ts:

import { Component } from '@angular/core';

@Component({
  selector: 'app-counter-component',
  templateUrl: './counter.component.html'
})
export class CounterComponent {
  public currentCount = 0;

  public incrementCounter() {
    this.currentCount++;
  }

  public decrementCounter() {
    this.currentCount--;
  }
}

That’s it. Now we can run ng test again and see that we now have 3 passing tests.

Good stuff. Now, let’s turn our attention to some untested code. FetchDataComponent is functional but has no tests for its display of data retrieved from the API. We’ll need to mock the API call to return some data and then assert that results are displayed as expected.

Here’s my fetch-data.component.spec.ts file with a test that mocks the API call and asserts that data is populated in the table on the UI:

import { async, ComponentFixture, TestBed } from '@angular/core/testing';
import { HttpClientTestingModule, HttpTestingController } from '@angular/common/http/testing';

import { FetchDataComponent } from './fetch-data.component';

describe('FetchDataComponent', () => {
    let component: FetchDataComponent;
    let fixture: ComponentFixture<FetchDataComponent>;
    let httpMock: HttpTestingController;

    beforeEach(async(() => {
        TestBed.configureTestingModule({
            imports: [HttpClientTestingModule],
            declarations: [FetchDataComponent],
            providers: [{ provide: 'BASE_URL', useValue: 'https://my.fake.url/' }]
        }).compileComponents();
    }));

    beforeEach(() => {
        fixture = TestBed.createComponent(FetchDataComponent);
        component = fixture.componentInstance;
        httpMock = TestBed.get(HttpTestingController);
    });

    it('should retrieve weather forecasts', async(() => {
        const dummyForecasts = [
            {
                date: "date1",
                temperatureC: 0,
                temperatureF: 32,
                summary: "summary1"
            },
            {
                date: "date2",
                temperatureC: 100,
                temperatureF: 212,
                summary: "summary2"
            }
        ];

        const req = httpMock.expectOne('https://my.fake.url/weatherforecast');
        req.flush(dummyForecasts);

        expect(component.forecasts).toBe(dummyForecasts);
        fixture.detectChanges();
        const rowCount = fixture.nativeElement.querySelectorAll('table tr').length;
        expect(rowCount).toBe(3); // header plus two data rows
    }));
});

There’s a lot to unpack with this test, so here’s the summary-level explanation: HttpClientTestingModule replaces the real HTTP backend so no actual call is made, the fake BASE_URL provider lets us predict the request URL, and HttpTestingController lets us intercept the request the component issues and flush it with dummy forecasts. We then assert against both the component’s forecasts property and the number of rendered table rows after detectChanges.

Once again, we run tests using ng test and see that we now have passing tests for FetchDataComponent!

Output Parameters in Argument Constraints with Rhino Mocks… Again!

One of my favorite things about being a blog owner and operator is when I’m looking for something and find that I’ve already answered my own question. This just happened to me as I was trying to re-figure out how to use output parameters with argument constraints with Rhino Mocks.

I found an article on this very topic written by me (thanks, self!), but as I was reading I noticed that I left something important out. So here’s a quick, mini do-over!

Let’s say we need to stub function Bar as it’s defined in the following interface:

public interface IFoo
{
    bool Bar(out string outty);
}

To use an argument constraint with the output argument, I need to do two things. First, I need to use Arg<T>.Out. The second thing–the thing that I failed to mention in my previous post–is that I need to use the Dummy property to make it compile.

Here’s what an example might look like. I stubbed the Bar function to return true and assign the value fake value! to the output argument.

IFoo mockFoo = MockRepository.GenerateMock<IFoo>();
mockFoo.Stub(x => x.Bar(out Arg<string>.Out("fake value!").Dummy)).Return(true);
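
And here’s a quick sketch of how that stub behaves when it’s exercised. The assertions are MSTest-style and purely illustrative; the point is that the stub returns true and pushes “fake value!” through the out parameter.

string outty;
bool result = mockFoo.Bar(out outty);

Assert.IsTrue(result);                  // stubbed return value
Assert.AreEqual("fake value!", outty);  // value assigned via the out parameter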

Not too shabby!

Several Patterns for Refactoring Static Classes for Testability

Static classes and methods can be tricky business for unit tests, and I’ve had to refactor a lot of them lately to get some tests going. Here are a few different ways to make your static classes test-friendly.

For the examples below, assume we need to refactor the following class to make it more testable:

public static class Foo
{
    public static void Bar()
    {
        // do something
    }
}

Convert to Instance Class

A simple and straightforward solution is to convert your static class to an instance class. If you need to prevent multiple instances from existing, you can implement the singleton pattern in your class. It’s easy to do this, but it can be risky or tedious to implement across a solution depending on how much your static class is being used. For example, if your class is being accessed hundreds or thousands of times across many different projects, you may not want to do this because you’ll have to update each and every caller.

public class Foo : IFoo
{
    public static IFoo Instance { get; private set; }
    
    static Foo()
    {
        Instance = new Foo();
    }
    
    private Foo()
    {
    }
    
    public void Bar()
    {
        // do something
    }
}

Keep Static Class, Extract Implementation #1

If you want or need to keep your static classes/methods as static without having to change every caller, you can extract the implementation of your static class to a new instance class and adjust the static class to use that. The implementation class uses the singleton pattern to expose a single static instance that will be used by the static class.

public static class Foo
{
    public static void Bar()
    {
        FooImplementation.Instance.Bar();
    }
}

internal class FooImplementation : IFoo
{
    public static IFoo Instance { get; set; }
    
    static FooImplementation()
    {
        Instance = new FooImplementation();
    }
    
    private FooImplementation()
    {
    }
    
    public void Bar()
    {
        // do something
    }
}

public interface IFoo
{
    void Bar();
}

This works great, but it does require unit test authors to have knowledge of the new implementation class. This strikes me as slightly non-intuitive for developers since you go to class B to mock the behavior of class A.
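
For example, redirecting Foo’s behavior in a test means reaching into FooImplementation directly. Here’s a hedged sketch using Rhino Mocks, as elsewhere on this blog; it assumes the Instance setter is reachable from the test assembly (the class is internal, so something like InternalsVisibleTo would be needed).

var mock = MockRepository.GenerateMock<IFoo>();
FooImplementation.Instance = mock;   // class B is configured...

Foo.Bar();                           // ...to change the behavior of class A

mock.AssertWasCalled(x => x.Bar());

And that brings us to our next option…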

Keep Static Class, Extract Implementation #2

Another option is to tweak the above strategy slightly so that the static property is added to the static class instead of on the implementation instance class. The nice thing about this approach is that you don’t need to have any awareness of the default implementation when providing the static class with an alternate implementation.

public static class Foo
{
    public static IFoo Implementation { get; set; }
    
    static Foo()
    {
        Implementation = new FooImplementation();
    }
    
    public static void Bar()
    {
        Implementation.Bar();
    }
}

internal class FooImplementation : IFoo
{
    public void Bar()
    {
        // do something
    }
}

public interface IFoo
{
    void Bar();
}
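
To see the difference, here’s a hedged sketch of a test providing Foo with an alternate implementation; FakeFoo and the assertion style are illustrative, and nothing in the test needs to know that FooImplementation exists.

internal class FakeFoo : IFoo
{
    public bool BarWasCalled { get; private set; }

    public void Bar()
    {
        BarWasCalled = true;
    }
}

// in a test method:
var fake = new FakeFoo();
Foo.Implementation = fake;   // swap in the test double

Foo.Bar();                   // exercise the static API just as production code would

Assert.IsTrue(fake.BarWasCalled);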

Create a Wrapper

Another option is to create a wrapper. Much like the first option presented, this approach has the downside of needing to update all callers to use the wrapper. However, if you are unable to modify the contents of your static class, this may be your best bet.

public static class Foo
{
    public static void Bar()
    {
        // do something
    }
}

public class FooWrapper : IFoo
{
    public static IFoo Instance { get; private set; }
    
    static FooWrapper()
    {
        Instance = new FooWrapper();
    }
    
    private FooWrapper()
    {
    }
    
    public void Bar()
    {
        Foo.Bar();
    }
}

public interface IFoo
{
    void Bar();
}

Unit Test Sending Email with SmtpClient

I have a workflow activity that sends email (the code for this activity can be found here), and I wanted to write integration tests using SpecFlow. This creates an interesting problem. I don’t want to simply mock everything out, but I also don’t want to require a valid SMTP server and email addresses. I also want the test to pass or fail without having to check an email inbox.

Luckily, there are configuration options used by the SmtpClient class that can be used to create files when email messages are sent. This is accomplished by adding some simple code to your application configuration file. (Source here.)

<system.net>
    <mailSettings>
        <smtp deliveryMethod="SpecifiedPickupDirectory">
            <specifiedPickupDirectory pickupDirectoryLocation="C:\TempMail" />
        </smtp>
    </mailSettings>
</system.net>

This solution is easy and it works, but it creates another problem: I want my test to run automatically on other machines. I don’t want to hardcode a path into the config file because I could run into problems with user permissions or directory structure. I found this blog post that demonstrates how to change the directory programmatically. The only thing I didn’t like about that solution is that it requires the app.config change shown above. I modified the posted solution slightly so that the configuration file section is not needed. Here’s the result:

// requires System.Reflection, plus System.Net.Mail for SmtpClient and SmtpDeliveryMethod
var path = GetTempPath(); // helper (not shown) that returns the pickup directory to use

// get mail configuration
var bindingFlags = BindingFlags.Static | BindingFlags.NonPublic;
var propertyInfo = typeof(SmtpClient)
    .GetProperty("MailConfiguration", bindingFlags);
var mailConfiguration = propertyInfo.GetValue(null, null);

// update smtp delivery method
bindingFlags = BindingFlags.Instance | BindingFlags.NonPublic;
propertyInfo = mailConfiguration.GetType()
    .GetProperty("Smtp", bindingFlags);
var smtp = propertyInfo.GetValue(mailConfiguration, null);
var fieldInfo = smtp.GetType()
    .GetField("deliveryMethod", bindingFlags);
fieldInfo.SetValue(smtp, SmtpDeliveryMethod.SpecifiedPickupDirectory);

// update pickup directory
propertyInfo = smtp.GetType()
    .GetProperty("SpecifiedPickupDirectory", bindingFlags);
var specifiedPickupDirectory = propertyInfo.GetValue(smtp, null);
fieldInfo = specifiedPickupDirectory.GetType()
    .GetField("pickupDirectoryLocation", bindingFlags);
fieldInfo.SetValue(specifiedPickupDirectory, path);

Using this code, I’m able to change the email delivery method and specify the output path programmatically. In my SpecFlow test, I create a temporary directory, process and verify the email files created by my workflow, and clean up. It works like a charm!
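
For a sense of what the verification side of that test looks like, here’s a hedged sketch; the directory handling and the MSTest-style assertions are illustrative, not the actual step definitions from my test.

// create an isolated pickup directory for this test run
var pickupPath = Path.Combine(Path.GetTempPath(), Guid.NewGuid().ToString());
Directory.CreateDirectory(pickupPath);

// ...run the reflection code above with path set to pickupPath,
// then exercise the workflow that sends mail via SmtpClient...

// SpecifiedPickupDirectory writes one .eml file per message sent
var emlFiles = Directory.GetFiles(pickupPath, "*.eml");
Assert.AreEqual(1, emlFiles.Length);
Assert.IsTrue(File.ReadAllText(emlFiles[0]).Contains("Subject:"));

// clean up
Directory.Delete(pickupPath, true);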

Testing Code Paths vs. Testing Behavior

I have a colleague who’s my equal in terms of unit testing enthusiasm, but we have very different philosophies. He tends to write methods first, then test the hell out of them to ensure that all code paths have been covered and that there are no holes. I tend to code using more of a TDD workflow, writing tests for each behavior that I expect from a method and not worrying about anything else that may or may not be going on.

Both approaches are valid. As we code, we both think about things that could go wrong with our code, and we both account for those things and make sure they’re tested. At the end of the day, we both end up with relatively bug-free solutions that work well. Both methods produce high levels of code coverage, although focusing test writing on code paths will likely result in slightly higher coverage, since the tests are written specifically to exercise every path in the finished code.

Yes, there’s a lot that’s similar about these two different approaches, but the differences are very important. The TDD mantra is “red, green, refactor.” The idea is that you write a failing test, add code to make the test pass, and then refactor the solution to clean up and optimize. This workflow is made for behavior-based testing. You expect a certain result from the method being tested. Once it’s producing that result, it shouldn’t stop producing it due to refactoring or optimizations.

The same statement can be made for tests written based on code paths: an expected result should continue to be produced after code is optimized. I’m venturing to say that optimizations are less likely to occur with the code-first approach, though. When you write code first, you don’t write tests until you’re done. And, since you’re writing tests based on the “finished” code, it’s less likely that you’ll discover flaws. Refactoring also seems less likely for the same reason. If refactoring does occur–which it should–then there’s a different problem: code paths that were once different may now be the same. You may have unknowingly written duplicate tests! (That’s not to say that the duplicate or redundant tests are bad, but you’ll have spent time writing code that, in the end, didn’t need to be written.)

Every developer I’ve ever met has learned to code before they’ve learned to write unit tests. Unit tests are generally written in code, so it’s hard to imagine learning them in any other order. Because we learn these two things in that order, we generally learn to write unit tests by following code paths. If you’re one of those code-first-and-write-tests-later types, I urge you to step out of your comfort zone and start writing behavior-based tests FIRST. You’ll code with purpose and write meaningful tests. You’ll be able to refactor with confidence, knowing that your code’s behavior has been unaffected by your changes. Like any skill, it takes some time to get used to, but I strongly believe you’ll produce higher quality code more efficiently once you become proficient.

Write Tests First–But Not ALL Tests First

I’ve been preaching hard about test-driven development and the importance of writing tests first. I can feel the culture beginning to shift as people are slowly starting to buy in, but I had an interesting discovery yesterday.

I was invited to a meeting by some developers that wanted me to walk through how I would’ve used test-driven development to write tests and develop a new project that they had recently started. It was essentially a data validation project that retrieved data from the database, checked it against a set of rules, and recorded any violations. We were reviewing a Controller class that was responsible for orchestrating the operation.

“Okay,” I said, “What is this thing supposed to do?”

The developers told me it retrieves records, validates each record, and saves the validation results. So, without knowing anything more than that, I figured there were at least two external dependencies: an IDataAccess responsible for retrieving records and saving the result and an IValidator that does the data validation/rule-checking. I drew a diagram on the whiteboard to show the relationships between the Controller and these two components.

I explained that since we know the dependencies and how we expect them to be used, we can begin to write tests. We also need to know how our application should react when the dependencies are missing. I started to rattle off some tests (a sketch of what the first one might look like follows the list):

  • ProcessRecords_NullDataAccess_ThrowsArgumentNullException
  • ProcessRecords_NullValidator_ThrowsArgumentNullException
  • ProcessRecords_DataAccessReturnsNonEmptyList_ValidatesEachRecord
  • Etc.
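
Here’s a hedged sketch of that first test. The Controller’s constructor signature, the MSTest attributes, and the Rhino Mocks usage are all assumptions for illustration; the point is simply that nothing beyond the Controller and its two dependencies matters yet.

[TestMethod]
[ExpectedException(typeof(ArgumentNullException))]
public void ProcessRecords_NullDataAccess_ThrowsArgumentNullException()
{
    IDataAccess dataAccess = null;
    var validator = MockRepository.GenerateMock<IValidator>();

    // assuming the Controller takes its dependencies through its constructor
    new Controller(dataAccess, validator);
}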

The group was with me, but they quickly shifted focus to what tests were needed for the DataAccess class. And the tests for its dependencies. And everything else.

“Whoa, whoa, WHOA. None of that matters for this. All we care about is this method,” I said.

“Well, yes, but we want to do test-driven development. We thought the goal was to have all of our tests written first so we can go back and implement them.”

That’s when I had my epiphany. When I’m telling people to write tests first, they think I mean write ALL tests first. This is not the case! It would be virtually impossible to think about every code decision and execution path for an entire method/class/application upfront, and I think that’s where there’s been a disconnect. I can look at the finished code and come up with all the tests, but there is no way I could’ve come up with every single test for every single method before ever writing any code.

I went to another small team of developers and asked them if they also thought I meant “all tests first.” They did. It’s disappointing to know that I was sending the wrong message, but I’m glad I have something to address that will hopefully result in more passengers on the TDD train.

When you’re getting started with test-driven development, don’t try to write every single test first. Don’t even try to write as many tests as you can think of. You just want to write tests as you go. What does this method need to do next? Write a failing test, then write the implementation to make it pass. Red, green, refactor, baby! I’m also exchanging my “tests first” mantra for a new one: “test as you go!”

Don’t Test Your Own Work?

I’ve been reading several discussions and articles on the topic of whether or not developers should test their own code, and I’m finding that the general consensus is, “No.” (Wait, it’s too early to stop reading! You must test!)

When talking about testing in this context, I’m referring to functional testing–not unit testing. There is absolute agreement in the development community that developers should write unit tests for the code they produce, and there is general agreement that functional testing by the authoring developer provides value, too. The argument that most of these discussions make against developers testing their own code is that the developer shouldn’t be responsible for putting the final stamp of approval on their output before it’s delivered to customers. This is very much in line with my personal belief that one-developer projects are destined for failure, a problem that has been particularly prevalent on my development team.

I thought this was a decent analogy (taken from here):

Writing code is a bit like map-making.

You point your developers in the direction of some uncharted wasteland, supply them with coffee, and let them go hacking through the undergrowth. If you’re lucky, they’ll pop back up in a few weeks time, and they’ll have brought geometric exactitude to Terra Incognita. If you’re unlucky, they’ll go native and it’ll end up with heads-on-spikes and tears-before-bedtime.

Anyway: you want to test your new maps. The WORST people to test the maps are the people who wrote them. They know, without thinking about it, that you have to veer West when you reach The Swamp of Unicode Misery or you’ll drown. They know that the distances are slightly unreliable because they contracted malaria near the Caves of Ui and the resulting fever made the cartography kinda hazy.

In other words, when you develop a solution, you know how it was written to work, and you’ll have a tendency to test it that way. You know what the configuration settings are supposed to be, and you know when buttons are supposed to be pressed and when knobs should be turned. You are the one person in the entire universe for which your solution is most likely to work.

So that’s all good and well, but what can you do about it? It’s simple: get somebody else involved. Have a peer test your solution or demo the solution to others. These are great ways to find functional problems that you may not have considered. None of us is able to produce a perfect solution 100% of the time. It’s impossible to not make assumptions and not make mistakes, but getting others involved is a terrific way to overcome quality issues and oversights that can arise from those assumptions and mistakes.

Please feel free to comment on the subject. I’d love to hear what you think about this, particularly if you disagree.

Definition of a Good Unit Test

I’m constantly encouraging developers around me to embrace test-driven development and automated unit testing. However, there’s one very important thing I’ve neglected to include in my evangelizing: a definition for what constitutes a good unit test.

A big problem that I think a lot of developers run into is that they’re writing tests but aren’t realizing any value from them. Development takes longer because they have to write tests, and when requirements change it takes longer because they have to refactor tests. Maintenance and warranty work is slower, too, because of the additional upkeep from failing tests created with every little change.

These problems mostly exist because of bad unit tests that are written due to insufficient knowledge of what makes a good test. Here’s a list of properties for a good unit test, taken from The Art of Unit Testing:

  • Able to be fully automated
  • Has full control over all the pieces running (Use mocks or stubs to achieve this isolation when needed)
  • Can be run in any order if part of many other tests
  • Runs in memory (no DB or File access, for example)
  • Consistently returns the same result (You always run the same test, so no random numbers, for example. Save those for integration or range tests)
  • Runs fast
  • Tests a single logical concept in the system
  • Readable
  • Maintainable
  • Trustworthy (when you see its result, you don’t need to debug the code just to be sure)

This is a great list that you can use to gut-check your unit tests. If a test meets all of these criteria, you’re probably in good shape. If you’re violating some of these, refactoring is probably required in your test or in your design, particularly in the cases of “Has full control over all the pieces running,” “Runs in memory,” and “Tests a single logical concept in the system.”

When I see developers struggling with unit tests and test-driven development, it’s usually due to test “backfilling” on a poorly-designed or too-complex method. They don’t see value because, in their minds, the work is already done and they’re having to do additional work just to get it unit tested. These methods and objects are violating the single responsibility principle and sometimes have many different code paths and dependencies. That complexity makes writing tests hard because you have to do so much work upfront in terms of mocking and state preparation, and it’s really difficult to cover each and every code path. It also makes for fragile tests, because testing specific parts of a method relies on a test’s ability to successfully navigate its way through the logic in the complex method; if an earlier part of the method changes, you now have to refactor unrelated tests to re-route them back to the code they’re testing. (Tip: Avoid problems like this by writing tests first!)

Whether you’re just getting started with unit tests or you’re a grizzled veteran, you can benefit from using this list of criteria as a quality measuring stick for the tests you write. High-quality tests that verify implemented features will result in a stable application. Designs will become better and more maintainable because you’ll be able to modify specific functionality without affecting the surrounding system, the same way that you’re able to test it. You won’t have to worry about other developers breaking functionality you’ve added, and you won’t have to worry about breaking functionality they’ve added. You’ll be able to make a modification that affects the entire system with a high level of confidence. I’d venture to say that virtually every aspect of software development is better when you’ve got good tests.

Writing Tests First

I think it’s safe to say that everybody agrees on the value and necessity of automated unit tests. Writing code without tests is the fastest way to accumulate technical debt. This is one reason why test-driven development (TDD) has become an accepted and preferred technique for developing high-quality, maintainable software. When asked to describe TDD, a typical response might include the red, green, refactor pattern. The idea is that you write a failing test first (red), make it pass (green), and then clean up after yourself (refactor). Easy, right?

Not so fast, my friend.

It breaks my heart to say it, but I think most of the developers on my team don’t write tests first. They follow more of a null, refactor, green pattern. Code is written first. It’s assumed to be pretty good but not verified. Then refactoring occurs to make the code testable as unit tests are written to test the code. I have a few problems with this.

First and foremost, by doing tests last, you’re making it possible to be “done” without proper tests. You should never have somebody say something like, “It’s done, but I didn’t have time to write all the tests.” I’ve seen this happen too many times, and it’s completely unacceptable. You aren’t done until you have tests, and the best way to ensure you have tests is to write them first. Period.

By not writing tests first, you’re also opening yourself to the possibility of writing untestable code. Regardless of whether or not you write tests before or after, you’ll have to figure out how to test the code you are or will be writing. If you write the tests after code, you might find yourself doing some heavy refactoring and restructuring to make the code testable–a step that could have been avoided completely by simply writing the tests first. Conversely, writing tests first empowers you to make smart design decisions that ensure testability before you’ve implemented your solution. Writing tests first forces you to write testable code, and testable code is typically more well-designed.

Writing tests first helps from a requirements perspective, too. You’re obviously trying to accomplish something by writing code, so what is it? Write failing or inconclusive tests to represent your requirements, and you’ll be less likely to accidentally miss a requirement in the heat of your fervent coding. If you don’t know the requirements, you probably shouldn’t be coding anything. That should be a huge red flag. Stop coding immediately, and go figure out the requirements!

I don’t think I’ve presented anything terribly controversial here, so why aren’t people writing tests first? In my opinion, it’s primarily a mental block. You have to commit to it. Executing can be tricky–especially in the beginning–but making the commitment to write tests first and sticking to it is really the hard part. Execution will come with time and practice. But you still need to start somewhere, right? So, I’ve whipped up a simple workflow to help get you started. Note that a lot of this is situational, but the basic idea is to start with stubs and do only enough of the actual implementation to complete your test.

  1. Write a test method with a descriptive name.
    1. If I’m going to be writing a Save method, my first test might be named SaveSavesTheRecordToTheDatabase or just SaveSaves.
    2. Implement the test with a single line of code: Assert.Fail() or Assert.Inconclusive().
    3. Note that the Save method might not exist at this point! I’m just spelling out what I want to accomplish by writing test methods.
    4. Repeat for each requirement.
  2. Begin implementing the test.
    1. This will be different depending on the nature of what’s being tested. It might be defining an expected output for a given input. For my example Save method that will save the record to a database, the first thing I might do is create a mock IDatabase and write an assertion to verify that it was used to save the record. (A sketch of where this ends up appears after this list.)
    2. Go as far as you can without implementing your actual change.
    3. Leave your Assert.Fail() or Assert.Inconclusive() at the end of the test if there’s more work to be done.
  3. Begin implementing your changes.
    1. The goal should be to make the test code that you’ve written so far pass or to free a roadblock that prevents you from writing more tests. How will my Save method get access to an IDatabase? Will it be a method argument? Passed to the constructor? I can address these implementation details without implementing the Save method.
    2. Identify and write more tests as you go! For example, did you add a new method argument that will never be null? Add a test to verify that an ArgumentNullException is thrown if it is. Are you making an external service call? Write a test to verify that exceptions will be handled. Write the new tests before you implement the behavior that addresses them!
    3. Go as far as you need to complete your test code.
    4. Use stub methods that throw NotImplementedException if you need a new method that doesn’t have tests.
  4. Finish implementing the test.
    1. Add mocks, add assertions, and implement the rest of your test.
  5. Finish implementing your changes.
    1. Now, with a complete test, finish the rest of your implementation.
  6. Review tests and changes.
    1. Add additional test cases to verify alternate scenarios.
    2. Look for edge cases and ensure they’re covered. (Null checks, exception handling, etc.)
  7. Refactor!
    1. Don’t forget to clean up after yourself!
    2. Now that you have a solid set of tests, you can refactor your code to make it shiny and clean. The tests will ensure that you don’t lose functionality or break requirements.
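
Here’s a hedged sketch of where the Save example from steps 1 and 2 might end up once the test is complete. IDatabase, Record, and the Repository class that hosts Save are illustrative assumptions, and the mock uses Rhino Mocks as elsewhere on this blog.

[TestMethod]
public void SaveSavesTheRecordToTheDatabase()
{
    // design decision made while writing the test: the IDatabase comes in through the constructor
    var database = MockRepository.GenerateMock<IDatabase>();
    var record = new Record();
    var repository = new Repository(database);

    repository.Save(record);

    database.AssertWasCalled(x => x.Save(record));
}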

This isn’t a perfect workflow, and it won’t work for all scenarios. It’s just meant to help get you over the I-can’t-write-a-test-for-this-yet hump. Remember that writing tests first is a decision you make, above all else. Commit to write tests first. Fight the urge to implement first and test later, and stop coding if you find yourself implementing sans tests. After some time, it will become habit. You’ll learn how to deal with the tricky stuff.

Do you have similar experiences with getting teams to adopt TDD? I’d love to hear about them! Please share your thoughts and ideas related to the topic. What are some things that you’ve done or had done to you to improve adoption of TDD and actually writing tests first?

Compare Objects from Tables in SpecFlow

Earlier this week, I wrote about how to create and populate objects from tables created in your SpecFlow scenarios. If the only reason you’re populating an object is to do a comparison, there’s an easier way: the CompareToInstance<T> method.

Like the previously discussed CreateInstance<T> method, CompareToInstance<T> works from the properties contained in a table of values, matching table columns to property names while ignoring case and whitespace. You provide an instance of the same type, and it does a property-by-property comparison. If any of the property values are different, a TechTalk.SpecFlow.Assist.ComparisonException is thrown with a message containing all of the differences that were found.

Here’s an example!

Scenario: Update a person name
	Given I have an existing person
	And I change the name to the following
		| first | middle | last  |
		| Ftest | Mtest  | Ltest |
	When I save the person
	And I retrieve person
	Then the name was changed to the following
		| first | middle | last  |
		| Ftest | Mtest  | Ltest |

// don't forget!
// using TechTalk.SpecFlow.Assist;

[Given(@"I change the name to the following")]
public void GivenIChangeTheNameToTheFollowing(Table table)
{
    var p = ScenarioContext.Current.Get<Person>();
    p.Name = table.CreateInstance<NameType>();
}

[Then(@"the name was changed to the following")]
public void ThenTheNameWasChangedToTheFollowing(Table table)
{
    var p = ScenarioContext.Current.Get<Person>();
    table.CompareToInstance<NameType>(p.Name);
}
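
For reference, here’s a hedged sketch of what the NameType behind these steps might look like; SpecFlow matches the table’s first/middle/last columns to these property names, ignoring case and whitespace.

public class NameType
{
    public string First { get; set; }
    public string Middle { get; set; }
    public string Last { get; set; }
}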