Custom Configuration Sections

So you want to create some custom config sections, but you don’t want to mess with custom classes that inherit from ConfigurationSection and ConfigurationElement? You just want some logical groups of key-value pairs? Well, friend, I’ve got some good news for you: it’s super easy!

To create a simple collection of key-value pair settings, you can use the NameValueSectionHandler class. Just add a section to your configuration file’s configSections, add your settings, and you’re good to go! (Note that your project will need a reference to System.Configuration.)

Here’s a sample config:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <configSections>
    <section name="Alabama" type="System.Configuration.NameValueSectionHandler" />
    <section name="Georgia" type="System.Configuration.NameValueSectionHandler" />
  </configSections>
  <Alabama>
    <add key="state" value="ALABAMA!" />
  </Alabama>
  <Georgia>
    <add key="state" value="GEORGIA!" />
  </Georgia>
</configuration>

And here’s the sample code to access the custom sections and settings:

using System;
using System.Collections.Specialized;
using System.Configuration;

namespace MultipleConfigSections
{
    class Program
    {
        static void Main(string[] args)
        {
            var alSection = ConfigurationManager.GetSection("Alabama") as NameValueCollection;
            Console.WriteLine(alSection["state"]);

            var gaSection = ConfigurationManager.GetSection("Georgia") as NameValueCollection;
            Console.WriteLine(gaSection["state"]);

            Console.ReadLine(); // pause so the output can be reviewed before exiting
        }
    }
}

If you’re looking for something more sophisticated, you’ll probably want to check out this MSDN article for a quick example that uses ConfigurationSection and ConfigurationElement. Need a collection? You’ll want ConfigurationElementCollection, too.
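
For a taste of what that looks like, here's a minimal sketch of a custom section class (the names are invented for illustration, not taken from the article):

public class RetrySection : ConfigurationSection
{
    // maps to an XML attribute: <retry maxAttempts="5" />
    [ConfigurationProperty("maxAttempts", DefaultValue = 3)]
    public int MaxAttempts
    {
        get { return (int)this["maxAttempts"]; }
        set { this["maxAttempts"] = value; }
    }
}

Register it under configSections with <section name="retry" type="MyNamespace.RetrySection, MyAssembly" />, and read it with (RetrySection)ConfigurationManager.GetSection("retry").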

Property Behavior in Rhino Mocks and Stubs

A co-worker presented me with a scenario where assigning and accessing a property from a Rhino Mocks stub worked as expected, but changing the stub to a dynamic mock caused the test to fail. This didn’t really seem like a stub-versus-mock question since stubs and dynamic mocks generally function similarly, with the difference being that mocks allow you to verify expectations whereas stubs only allow you to stub return values.

Here’s a simplified version of the problem:

[TestMethod]
public void TestWithStub()
{
    var sampleStub = MockRepository.GenerateStub<ISample>();
    sampleStub.Value = "bar";

    // success!
    Assert.AreEqual("bar", sampleStub.Value);
}

[TestMethod]
public void TestWithMock()
{
    var sampleMock = MockRepository.GenerateMock<ISample>();
    sampleMock.Value = "foo";

    // fail!
    Assert.AreEqual("foo", sampleMock.Value);
}

I’ve seen guidance online suggesting this is the correct way to handle stubbed properties, which is primarily what makes the issue confusing to me. However, per the Rhino Mocks 3.3 Quick Reference Guide, the correct way to handle property getters and setters for mocks is to use Expect.Call(foo.Name).Return("Bob") and Expect.Call(foo.Name = "Bob"), respectively. The quick reference guide also identifies a way to get "automatic properties" by using the PropertyBehavior method, which allows properties in mocks to function like they do in stubs. This behavior is enabled on stubs by default, which is what causes the out-of-the-box behaviors to differ.

With a call to PropertyBehavior added, the seemingly-correct sample test above succeeds.

[TestMethod]
public void TestWithMock()
{
    var sampleMock = MockRepository.GenerateMock<ISample>();
    sampleMock.Expect(x => x.Value).PropertyBehavior();
    sampleMock.Value = "foo";

    // success!
    Assert.AreEqual("foo", sampleMock.Value);
}

Note that although we use an Expect method, there is no expectation set when using the PropertyBehavior method. This means that the test will pass even if the property is not used at all. If you need to verify getters and setters, you should use mock.Expect(x => x.Value).Return("foo") and mock.Expect(x => x.Value = "bar").
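
For completeness, here's a minimal sketch of what verified property expectations might look like using the AAA-style extension methods (the test and values are invented for illustration):

[TestMethod]
public void TestWithVerifiedExpectations()
{
    var sampleMock = MockRepository.GenerateMock<ISample>();

    // expect the getter to be called and stub its return value
    sampleMock.Expect(x => x.Value).Return("foo");
    // expect the setter to be called with "bar"
    sampleMock.Expect(x => x.Value = "bar");

    Assert.AreEqual("foo", sampleMock.Value);
    sampleMock.Value = "bar";

    // fails the test if either expectation was not satisfied
    sampleMock.VerifyAllExpectations();
}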

Renumber Enums with Regular Expressions

We had a widely-used assembly containing an enumeration without explicitly assigned values, and releasing it from multiple branches was causing problems. In an effort to keep the enumeration synchronized across projects, explicit values were added. The problem was that the values started at 1, whereas the implicit counter starts at 0. The solution is simple: renumber ’em to start at 0. Sounds like a job for regular expressions!

I was really hoping that I could do this using regular expressions in VS2012’s find & replace, but I just couldn’t find a way to implement the necessary arithmetic. After floundering for 15 minutes or so, I decided to just write a simple script in LINQPad. Here’s what I came up with, and it works fantastically.

var filename = @"C:\source\MehType.cs";

// read the entire file into a string
var contents = string.Empty;
using (var fs = new FileStream(filename, FileMode.Open))
{
    using (var sr = new StreamReader(fs))
    {
        contents = sr.ReadToEnd();
    }
}

// match each "<name> = <number>" and decrement the number by one
var regex = new Regex(@"(.*?= )(\d+)");
foreach (Match match in regex.Matches(contents))
{
    var num = int.Parse(match.Groups[2].Value);
    contents = contents.Replace(
        match.Value, match.Result("${1}" + --num));
}

// write the renumbered contents back to the file
using (var fs = new FileStream(filename, FileMode.Create))
{
    using (var sw = new StreamWriter(fs))
    {
        sw.Write(contents);
        sw.Flush();
    }
}

The result is that this…

public enum MehType
{
    Erhmm = 1,
    Glurgh = 2,
    Mfhh = 3
}

…becomes this…

public enum MehType
{
    Erhmm = 0,
    Glurgh = 1,
    Mfhh = 2
}
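
As an aside, the loop-and-replace step could be collapsed into a single pass using Regex.Replace with a MatchEvaluator; a minimal sketch of the equivalent call:

// decrement every matched number in one pass
contents = Regex.Replace(
    contents,
    @"(.*?= )(\d+)",
    m => m.Groups[1].Value + (int.Parse(m.Groups[2].Value) - 1));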

Naming and Capitalization Conventions

A small group of us were doing a code review, and the topic of capitalization and naming conventions came up. I’m very rigid in my ways, and super anal about making sure everything is correct and consistent. Unfortunately, at work we don’t have any official standards documentation to define the style we use. There are some general patterns that are followed, but some of the more controversial topics–like use of var–are left to the preference of the developer. I knew that MSDN had published naming guidelines, so I dug them up to present to the group. Now when we review code, we can nitpick names and capitalization using the argument that “our standard is to follow Microsoft’s guidance.” That’s good with me!

The specific topic that got us started down this path was capitalization for acronyms, and Microsoft offers three rules for dealing with them:

  1. Do capitalize both characters of two-character acronyms, except the first word of a camel-cased identifier. (e.g., DBRate, ioChannel)
  2. Do capitalize only the first character of acronyms with three or more characters, except the first word of a camel-cased identifier. (e.g., XmlWriter, htmlReader)
  3. Do not capitalize any of the characters of any acronyms, whatever their length, at the beginning of a camel-cased identifier. (e.g., xmlStream, dbServerName)

Ahh, just how I like it.

An additional distinction is made for abbreviations. It advises that, generally, abbreviations should not be used in library names. Two exceptions are noted, though: ID and OK. These are acceptable to use in an identifier name and should follow the same casing rules as regular words. In other words, Id and Ok for Pascal-case and id and ok for camel-case.

While we’re on the topic of capitalization, here’s a list of conventions that I follow in terms of Pascal-case versus camel-case. Violating these rules is a good way to irritate me. (There’s a quick illustration after the list.)

  • Class: Pascal
  • Property: Pascal
  • Parameter: Camel
  • Private instance field: Camel, prefixed with “_” (no official guidance; I’m flexible on this one)
  • Public/internal/protected instance field: N/A (use Property instead; read more)
  • Event: Pascal
  • Local variable: Camel
  • Enum types and values: Pascal
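
To put the list in context, here's a contrived snippet that follows each convention (all names invented):

public enum ChannelState { Open, Closed }      // enum type and values: Pascal

public class DBConnectionPool                  // class: Pascal; two-char acronym fully capitalized
{
    private int _maxConnections;               // private field: camel with "_" prefix

    public int Id { get; set; }                // abbreviation exception: Id, not ID
    public string XmlConfigPath { get; set; }  // property: Pascal; 3+ char acronym gets one capital

    public event EventHandler PoolDrained;     // event: Pascal

    public void Resize(int maxConnections)     // parameter: camel
    {
        var oldMax = _maxConnections;          // local variable: camel
        _maxConnections = maxConnections;
        Console.WriteLine("Resized from {0} to {1}", oldMax, maxConnections);
    }
}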

Want to read more? (I know; I love this stuff, too!) Check out the complete guidance here.

Performance Profiling for Unit Tests

When I first got my hands on Visual Studio 2012, I was trying out every feature I could, new and old alike. I love the new Test Explorer. Code coverage is SO much better than it was before. Code analysis for code clones is terrific, too. The only thing I wasn’t happy about was performance analysis.

The reason I wasn’t happy with performance analysis is that I couldn’t use it on unit tests. Luckily, Microsoft fixed that with the release of Visual Studio 2012 Update 1. Now you can right-click a test and choose Profile Test to run performance analysis on a single unit test, and it is awesome!

[Screenshot: the Profile Test option in the Test Explorer context menu]

When you choose to profile a test, the test runs as usual followed by the analysis. When analysis completes, you’re presented with a summary view that shows you CPU usage over time, the “hot path” call tree—a tree of the most active function calls where most of the work was performed—and a list of functions responsible for doing the most individual work.

[Screenshot: the performance analysis summary view]

You can find out more and investigate resource spikes by selecting a time range on the graph and filtering. That’s all well and good, but what really blew me away was that you can click on the functions with the most individual work to drill into them. Drill into them? Yeah—you’re taken to a view that shows the selected function, its callers, and the methods it calls. Percentages show how much time was spent in each of the three areas (callers, current method, and callees), and you can click any of the displayed methods to navigate up or down the call stack. The actual code for the current method is also displayed. The only thing that seemed sub-optimal is that I couldn’t edit the code directly; there’s a link to the actual file, though, so you’re only a click away from the editable code file.

[Screenshot: the function details view]

There are other, sortable views you can look at, too. You can view a call tree or breakdown by module, and you can get to the same function details view described above from each of those views. It’s a really useful, powerful experience.

Here’s where it gets really nuts, though: add SpecFlow to the mix. SpecFlow lets you write feature-based scenarios that are capable of serving as automated integration tests. The scenarios run like normal unit tests. You can right-click them in the Test Explorer to run performance analysis on them. This means that you can do targeted performance analysis on specific features of your application! To test this out, I sorted my unit tests by execution duration and analyzed the slowest. I was able to find a bottleneck with a caching algorithm used by nearly all of the other modules in the application. Execution time of the 350 unit tests in the project went from 50 seconds to 20. That’s a HUGE improvement from fixing one flaw found from running analysis on one function picked only because it was the most time-consuming in the slowest test.

Good tests are supposed to run quickly, and performance analysis is an invaluable tool to help you triage valuable-but-poor-performing tests. Also, since you’ve got automated tests, you can refactor and optimize the performance of your application with high confidence. If you haven’t used performance analysis before—more specifically, performance analysis for unit tests—give it a shot; I’d be blown away if you didn’t find incredible value.

Post to Twitter with DotNetOpenAuth

A few weeks ago, I started looking into using the Twitter API for automatic, event-based status updates in an application. I wanted to understand what was going on, so I didn’t want to simply download Twitterizer or LINQ to Twitter. Learning OAuth has been a challenge, to put it lightly. I’ve learned a lot, but I’m still pretty clumsy with it. Today, I found out about a new open source project that seems like just what I needed: DotNetOpenAuth.

Using DotNetOpenAuth, I was able to create a functional console application that posts to Twitter in about 200 lines of code. This post will walk you through the steps.

The first thing you need to do is create a new Twitter application. Go to dev.twitter.com, sign in (or sign up), and create the application. If you want to post status updates with your application—like we’re doing here—be sure to click over to the Settings tab and change the Application Type to Read and Write.

Once you’ve created your application with Twitter, it’s time to create your project in Visual Studio. I’ll be using a console application in an effort to keep it simple. The first thing you’ll want to do after creating the project is install the DotNetOpenAuth NuGet package. (Not sure how to use NuGet? Start here!)

Now it’s time to get down to business. We’re going to start by creating a token manager. Most of the tutorials online seem to use a simple, in-memory token manager, and I’m going to follow suit. In a real application, you’ll want to store the access tokens and access token secrets so that you don’t have to authorize each time the application runs.

namespace adamprescott.net.TweetConsole
{
    using DotNetOpenAuth.OAuth.ChannelElements;
    using DotNetOpenAuth.OAuth.Messages;
    using DotNetOpenAuth.OpenId.Extensions.OAuth;
    using System;
    using System.Collections.Generic;

    public class TokenManager : IConsumerTokenManager
    {
        private static Dictionary<string, string> TokenSecrets = 
            new Dictionary<string, string>();

        public TokenManager(string consumerKey, string consumerSecret)
        {
            ConsumerKey = consumerKey;
            ConsumerSecret = consumerSecret;
        }

        public string ConsumerKey { get; private set; }

        public string ConsumerSecret { get; private set; }

        public string GetTokenSecret(string token)
        {
            return TokenSecrets[token];
        }

        public void StoreNewRequestToken(UnauthorizedTokenRequest request,
            ITokenSecretContainingMessage response)
        {
            TokenSecrets[response.Token] = response.TokenSecret;
        }

        public void ExpireRequestTokenAndStoreNewAccessToken(
            string consumerKey,
            string requestToken,
            string accessToken,
            string accessTokenSecret)
        {
            TokenSecrets.Remove(requestToken);
            TokenSecrets[accessToken] = accessTokenSecret;
        }

        public TokenType GetTokenType(string token)
        {
            throw new NotImplementedException();
        }

        public void StoreOpenIdAuthorizedRequestToken(string consumerKey,
            AuthorizationApprovedResponse authorization)
        {
            TokenSecrets[authorization.RequestToken] = String.Empty;
        }
    }
}

The next thing we need is a consumer wrapper. This wrapper is where we’ll specify the OAuth token URLs and expose three methods that we’ll use from our main application: BeginAuth, CompleteAuth, and PrepareAuthorizedRequest.

namespace adamprescott.net.TweetConsole
{
    using DotNetOpenAuth.Messaging;
    using DotNetOpenAuth.OAuth;
    using DotNetOpenAuth.OAuth.ChannelElements;
    using System.Collections.Generic;
    using System.Net;

    public class TwitterConsumer
    {
        private string _requestToken = string.Empty;

        public DesktopConsumer Consumer { get; set; }
        public string ConsumerKey { get; set; }
        public string ConsumerSecret { get; set; }

        public TwitterConsumer(string consumerKey, string consumerSecret)
        {
            ConsumerKey = consumerKey;
            ConsumerSecret = consumerSecret;

            var providerDescription = new ServiceProviderDescription
            {
                RequestTokenEndpoint = new MessageReceivingEndpoint(
                    "https://api.twitter.com/oauth/request_token",
                    HttpDeliveryMethods.PostRequest),
                UserAuthorizationEndpoint = new MessageReceivingEndpoint(
                    "https://api.twitter.com/oauth/authorize",
                    HttpDeliveryMethods.GetRequest),
                AccessTokenEndpoint = new MessageReceivingEndpoint(
                    "https://api.twitter.com/oauth/access_token", 
                    HttpDeliveryMethods.GetRequest),
                TamperProtectionElements = new ITamperProtectionChannelBindingElement[] 
                {
                    new HmacSha1SigningBindingElement()
                }
            };

            Consumer = new DesktopConsumer(
                providerDescription,
                new TokenManager(ConsumerKey, ConsumerSecret));
        }

        public string BeginAuth()
        {
            var requestArgs = new Dictionary<string, string>();
            return Consumer
                .RequestUserAuthorization(requestArgs, null, out _requestToken)
                .AbsoluteUri;
        }

        public string CompleteAuth(string verifier)
        {
            var response = Consumer.ProcessUserAuthorization(
                _requestToken, verifier);
            return response.AccessToken;
        }

        public HttpWebRequest PrepareAuthorizedRequest(
            MessageReceivingEndpoint endpoint,
            string accessToken, 
            IEnumerable<MultipartPostPart> parts)
        {
            return Consumer.PrepareAuthorizedRequest(endpoint, accessToken, parts);
        }

        public IConsumerTokenManager TokenManager
        {
            get
            {
                return Consumer.TokenManager;
            }
        }
    }
}

All that’s left to do now is put it all together. The main application needs your Twitter application’s consumer key and consumer secret. (Both of those values can be found on the Details tab of the Twitter application.) Those values are passed to the consumer wrapper which can then produce an authorization URL. We’ll prompt the user for credentials by opening the URL in a web browser. The authorization process will be completed when the user enters their PIN from Twitter into the console application. Once authorized, the application can post to Twitter on behalf of the user. I added a simple loop that prompts the user and tweets their input.

namespace adamprescott.net.TweetConsole
{
    using DotNetOpenAuth.Messaging;
    using System;
    using System.Diagnostics;

    class Program
    {
        const string _consumerKey = "~consumerkey~";
        const string _consumerSecret = "~consumersecret~";
        private TwitterConsumer _twitter;

        static void Main(string[] args)
        {
            var p = new Program();
            p.Run();
        }

        public Program()
        {
            _twitter = new TwitterConsumer(_consumerKey, _consumerSecret);
        }

        void Run()
        {
            var url = _twitter.BeginAuth();
            Process.Start(url);
            Console.Write("Enter PIN: ");
            var pin = Console.ReadLine();
            var accessToken = _twitter.CompleteAuth(pin);

            while (true)
            {
                Console.Write("Tweet ('x' to exit) /> ");
                var tweet = Console.ReadLine();
                if (string.Equals("x", tweet, StringComparison.CurrentCultureIgnoreCase))
                {
                    break;
                }
                Tweet(accessToken, tweet);
            }
        }

        void Tweet(string accessToken, string message)
        {
            var endpoint = new MessageReceivingEndpoint(
                "https://api.twitter.com/1.1/statuses/update.json",
                HttpDeliveryMethods.PostRequest | HttpDeliveryMethods.AuthorizationHeaderRequest);

            var parts = new[]
            {
                MultipartPostPart.CreateFormPart("status", message)
            };

            var request = _twitter.PrepareAuthorizedRequest(endpoint, accessToken, parts);

            using (var response = request.GetResponse())
            {
                // the response isn't used here; disposing it releases the connection
            }
        }
    }
}

The full source code for this sample is available on GitHub. Note that you’ll need to provide your application’s consumer key and secret in order to make the sample functional.

Collection Lookups

Yesterday, I was discussing a method with a co-worker. I suggested we loop through a collection of records and, for each record, do another retrieval-by-ID via LINQ. He pointed out that this would probably be more efficient if we built a dictionary before the loop and retrieved from the dictionary instead of repeatedly executing the LINQ query. So I decided to do some research.

First, I learned about two new LINQ methods: ToDictionary and ToLookup. Lookups and dictionaries serve a similar purpose, but the primary distinction is that a lookup allows duplicate keys. Check out this article for a quick comparison of the two structures.
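
To make the distinction concrete, here's a minimal sketch (contrived data, separate from the test below) showing that a lookup tolerates duplicate keys while a dictionary throws on them:

var words = new[] { "apple", "avocado", "banana" };

// ToLookup groups values under each key, so duplicate keys are fine
var byFirstLetter = words.ToLookup(w => w[0]);
foreach (var word in byFirstLetter['a'])
{
	Console.WriteLine(word); // apple, avocado
}

// ToDictionary would throw an ArgumentException here,
// because two entries share the key 'a':
// var dict = words.ToDictionary(w => w[0]);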

With my new tools in hand, I wanted to compare the performance. I first came up with a test. I created a collection of simple objects that had an ID and then looped through and retrieved each item by ID. Here’s what the test looks like:

void Main()
{
	var iterations = 10000;
	var list = new List<Human>();
	for (int i = 0; i < iterations; i++)
	{
		list.Add(new Human(i));
	}
	
	var timesToAvg = 100;
	
	Console.WriteLine("Avg of .Where search: {0} ms", 
		AverageIt((l, i) => TestWhere(l, i), list, iterations, timesToAvg));
	
	Console.WriteLine("Avg of for-built Dictionary search: {0} ms", 
		AverageIt((l, i) => TestDictionary(l, i), list, iterations, timesToAvg));
		
	Console.WriteLine("Avg of LINQ-built Dictionary search: {0} ms", 
		AverageIt((l, i) => TestToDictionary(l, i), list, iterations, timesToAvg));
		
	Console.WriteLine("Avg of Lookup search: {0} ms", 
		AverageIt((l, i) => TestLookup(l, i), list, iterations, timesToAvg));
}

decimal AverageIt(Action<List<Human>, int> action, List<Human> list, int iterations, int timesToAvg)
{
	var sw = new Stopwatch();
	
	decimal sum = 0;
	for (int i = 0; i < timesToAvg; i++)
	{
		sw.Reset();
		sw.Start();
		action(list, iterations);
		sw.Stop();
		sum += sw.ElapsedMilliseconds;
	}
	return sum / timesToAvg;
}

class Human
{
	public int id;
	
	public Human(int id)
	{
		this.id = id;
	}
}

Then I wrote a method for each algorithm I wanted to test: using .Where, using a manually-built dictionary, using a ToDictionary-built dictionary, and using a lookup. Here they are:

void TestWhere(List<Human> list, int iterations)
{	
	for (int i = 0; i < iterations; i++)
	{
		var h = list.Where(x => x.id == i).FirstOrDefault();
	}
}

void TestDictionary(List<Human> list, int iterations)
{
	var dict = new Dictionary<int, Human>();
	foreach (var h in list)
	{
		dict.Add(h.id, h);
	}
	for (int i = 0; i < iterations; i++)
	{
		var h = dict[i];
	}
}

void TestToDictionary(List<Human> list, int iterations)
{
	var dict = list.ToDictionary(x => x.id);
	for (int i = 0; i < iterations; i++)
	{
		var h = dict[i];
	}
}

void TestLookup(List<Human> list, int iterations)
{
	var lookup = list.ToLookup(
		x => x.id,
		x => x);
	for (int i = 0; i < iterations; i++)
	{
		var h = lookup[i];
	}
}

Here are the results:

Avg of .Where search: 987.89 ms
Avg of for-built Dictionary search: 1.85 ms
Avg of LINQ-built Dictionary search: 1.67 ms
Avg of Lookup search: 2.14 ms

I would say that the results are what I expected in terms of what performed best, but I was surprised by just how poorly the .Where queries performed–awful! One note about the manually-built dictionary versus the one produced by LINQ’s ToDictionary method: across repeated tests, which of the two performed better was inconsistent, leading me to believe that there is no significant benefit or disadvantage to using one or the other. I’ll likely stick with ToDictionary in the future due to its brevity, though.

These results suggest that a dictionary is optimal for lookups when key uniqueness is guaranteed; a hashed lookup takes roughly constant time, whereas .Where and FirstOrDefault scan the list, so using them inside a loop makes the whole operation quadratic. If the key is not unique or its uniqueness is questionable, a lookup should be used instead. Never do what I wanted to do, though, and use a .Where as an inner-loop lookup retrieval mechanism.

12/10/2012 Update:
A co-worker pointed out that I don’t need to chain Where and FirstOrDefault. Instead, I can just use FirstOrDefault with a lambda. So I added this to the test app to see how it compared. Surprisingly, this seems to consistently run slower than using Where in conjunction with FirstOrDefault!

void TestFirstOrDefault(List<Human> list, int iterations)
{	
	for (int i = 0; i < iterations; i++)
	{
		var h = list.FirstOrDefault(x => x.id == i);
	}
}

We also agreed that there should be a for-each loop as a base comparison, so I added that as well.

void TestForEach(List<Human> list, int iterations)
{
	for (int i = 0; i < iterations; i++)
	{
		foreach (var x in list)
		{
			if (i == x.id)
			{
				break;
			}
		}
	}
}

Here are the full results with the two new algorithms:

Avg of ForEach search: 741.05 ms
Avg of .Where search: 980.13 ms
Avg of .FirstOrDefault search: 1189.01 ms
Avg of for-built Dictionary search: 1.57 ms
Avg of LINQ-built Dictionary search: 1.57 ms
Avg of Lookup search: 1.74 ms

**********
Complete code:

void Main()
{
	var iterations = 10000;
	var list = new List<Human>();
	for (int i = 0; i < iterations; i++)
	{
		list.Add(new Human(i));
	}
	
	var timesToAvg = 100;
	
	Console.WriteLine("Avg of ForEach search: {0} ms", 
		AverageIt((l, i) => TestForEach(l, i), list, iterations, timesToAvg));
	
	Console.WriteLine("Avg of .Where search: {0} ms", 
		AverageIt((l, i) => TestWhere(l, i), list, iterations, timesToAvg));
		
	Console.WriteLine("Avg of .FirstOrDefault search: {0} ms", 
		AverageIt((l, i) => TestFirstOrDefault(l, i), list, iterations, timesToAvg));
	
	Console.WriteLine("Avg of for-built Dictionary search: {0} ms", 
		AverageIt((l, i) => TestDictionary(l, i), list, iterations, timesToAvg));
		
	Console.WriteLine("Avg of LINQ-built Dictionary search: {0} ms", 
		AverageIt((l, i) => TestToDictionary(l, i), list, iterations, timesToAvg));
		
	Console.WriteLine("Avg of Lookup search: {0} ms", 
		AverageIt((l, i) => TestLookup(l, i), list, iterations, timesToAvg));
}

decimal AverageIt(Action<List<Human>, int> action, List<Human> list, int iterations, int timesToAvg)
{
	var sw = new Stopwatch();
	
	decimal sum = 0;
	for (int i = 0; i < timesToAvg; i++)
	{
		sw.Reset();
		sw.Start();
		action(list, iterations);
		sw.Stop();
		sum += sw.ElapsedMilliseconds;
	}
	return sum / timesToAvg;
}

class Human
{
	public int id;
	
	public Human(int id)
	{
		this.id = id;
	}
}

void TestForEach(List<Human> list, int iterations)
{
	for (int i = 0; i < iterations; i++)
	{
		foreach (var x in list)
		{
			if (i == x.id)
			{
				break;
			}
		}
	}
}

void TestWhere(List<Human> list, int iterations)
{	
	for (int i = 0; i < iterations; i++)
	{
		var h = list.Where(x => x.id == i).FirstOrDefault();
	}
}

void TestFirstOrDefault(List<Human> list, int iterations)
{	
	for (int i = 0; i < iterations; i++)
	{
		var h = list.FirstOrDefault(x => x.id == i);
	}
}

void TestDictionary(List<Human> list, int iterations)
{
	var dict = new Dictionary<int, Human>();
	foreach (var h in list)
	{
		dict.Add(h.id, h);
	}
	for (int i = 0; i < iterations; i++)
	{
		var h = dict[i];
	}
}

void TestToDictionary(List<Human> list, int iterations)
{
	var dict = list.ToDictionary(x => x.id);
	for (int i = 0; i < iterations; i++)
	{
		var h = dict[i];
	}
}

void TestLookup(List<Human> list, int iterations)
{
	var lookup = list.ToLookup(
		x => x.id,
		x => x);
	for (int i = 0; i < iterations; i++)
	{
		var h = lookup[i];
	}
}

.NET DateTime to W3C Format

Here’s a fun quickie! I needed to translate a C# DateTime into W3C format. Here’s how I did it, courtesy of StackOverflow:

DateTime.Now.ToString("yyyy-MM-ddTHH:mm:ss.fffffffzzz");
// 2012-12-04T17:10:12.5880605-05:00
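
Worth noting: the framework’s round-trip format specifier "o" produces the same shape for a local DateTime, with no custom format string to memorize:

DateTime.Now.ToString("o");
// same output as above when the DateTime's Kind is Local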

Shablam!

Support for Zip Archives in .NET 4.5

I was catching up on some MSDN Magazines that have been piled up and collecting dust for a few months, and I found a nice little article titled What’s New in the .NET 4.5 Base Class Library. The biggest news is the simplified asynchronous programming. This is huge, but I’ve had enough of it shoved down my throat since first hearing about it at PDC in 2010. Now, that’s not to say that I’m not excited about it; it’s just old news for “what’s new” to me.

I kept reading, though, and came across a section about new support for zip archives. I don’t do a lot with zip files, but it does come up from time to time. In the past, I’ve always been surprised that this wasn’t something natively supported in .NET. I’ve had to use open-source solutions like SharpZipLib (GPL) and DotNetZip (Ms-PL), but I always felt like I shouldn’t need a third-party library. It looks as though Microsoft agreed with that sentiment.

It seemed pretty cool and easy enough to use, so I wanted to check it out immediately. Here are some quick examples of how to take advantage of some of this new functionality in .NET 4.5. (The new types live in the System.IO.Compression namespace; you’ll need a reference to the System.IO.Compression assembly, plus System.IO.Compression.FileSystem for the ZipFile convenience class.) I created a WPF application that allows you to do each of the functions listed below. Note that the code samples reference some additional methods that aren’t included here. You can view the complete source on GitHub.

Extract an entire archive

private void OnExtractArchive(object sender, RoutedEventArgs e)
{
    var archive = PromptForOpenFile(
        string.Empty, ".zip", "Zip archives (.zip)|*.zip");
    if (string.IsNullOrEmpty(archive))
        return;

    var destination = PromptForDirectory();

    ZipFile.ExtractToDirectory(archive, destination);
}

Extract a single file

private void OnExtractFile(object sender, RoutedEventArgs e)
{
    var archive = PromptForOpenFile(
        string.Empty, ".zip", "Zip archives (.zip)|*.zip");
    if (string.IsNullOrEmpty(archive))
        return;

    using (ZipArchive zipArchive = ZipFile.Open(archive, ZipArchiveMode.Read))
    {
        var itemToExtract = PromptForArchiveEntry(zipArchive);
        if (itemToExtract == null)
            return;

        var target = PromptForSaveFile(
            itemToExtract.FullName, string.Empty, "All files (.*)|*.*");

        using (var fs = new FileStream(target, FileMode.Create))
        {
            using (var contents = itemToExtract.Open())
            {
                // use the synchronous CopyTo here; a fire-and-forget CopyToAsync can
                // let the using blocks dispose the streams before the copy completes
                contents.CopyTo(fs);
            }
        }
    }
}

Create an archive from a directory

private void OnCreateArchive(object sender, RoutedEventArgs e)
{
    var dir = PromptForDirectory();
    var target = PromptForSaveFile(
        "Archive.zip", ".zip", "Zip archives (.zip)|*.zip");
    ZipFile.CreateFromDirectory(dir, target);
}

Add a single file to an archive

private void OnAddFileToArchive(object sender, RoutedEventArgs e)
{
    var archive = PromptForOpenFile(
        string.Empty, ".zip", "Zip archives (.zip)|*.zip");
    if (string.IsNullOrEmpty(archive))
        return;

    var file = PromptForOpenFile(
        string.Empty, ".*", "All files (.*)|*.*");
    if (string.IsNullOrEmpty(file))
        return;

    using (ZipArchive zipArchive = ZipFile.Open(archive, ZipArchiveMode.Update))
    {
        var name = Path.GetFileName(file);
        zipArchive.CreateEntryFromFile(file, name);
    }
}

Create a GeoRSS Feed in .NET

Last week, I was working on a small team project that leveraged ESRI’s ArcGIS Viewer for Silverlight. We wanted to plot points on the map using latitude and longitude coordinates, something the viewer supports natively through its GeoRSS widget; all we needed to do was provide a GeoRSS feed!

The RSS feed’s just an XML document, and GeoRSS is just a specific format for an RSS feed, so it should be no problem to create. I hadn’t created an RSS feed before, so I started by Googling. I figured I’d end up building an XML document using LINQ to XML and writing the contents to a page. It was even easier than that, though: the .NET Framework has RSS classes built right in!

Here’s a very simple example of how to create a GeoRSS feed in .NET by using the Rss20FeedFormatter class:

public Rss20FeedFormatter GetItems()
{
    var feed = new SyndicationFeed(
        "My GeoRSS Feed",
        "A silly little feed",
        new Uri("https://adamprescott.net/georss"));

    var items = GetItemsFromDataSource();
    feed.Items = items.Select(
        x =>
        {
            var f = new SyndicationItem(x.Title, x.Description, x.Link, x.Id, DateTime.Now);
            f.PublishDate = x.PubDate;
            f.Summary = new TextSyndicationContent(x.Description);

            // attach the coordinates as geo:lat/geo:long element extensions
            XNamespace geons = "http://www.w3.org/2003/01/geo/wgs84_pos#";
            var lat = new XElement(geons + "lat", x.Latitude);
            f.ElementExtensions.Add(new SyndicationElementExtension(lat));
            var lon = new XElement(geons + "long", x.Longitude);
            f.ElementExtensions.Add(new SyndicationElementExtension(lon));
            return f;
        });

    return new Rss20FeedFormatter(feed);
}

Taking it one step further, we needed to host our feed and make it accessible via a web service call.

So, we created an interface…

[ServiceContract]
public interface IRssFeed
{
    [OperationContract]
    [WebGet]
    Rss20FeedFormatter GetItems();
}

And created a new WCF endpoint in our web.config (did I mention this was an ASP.NET application?)…

<system.serviceModel>
  <behaviors>
    <endpointBehaviors>
      <behavior name="webHttpBehavior">
        <webHttp />
      </behavior>
    </endpointBehaviors>
  </behaviors>
  <services>
    <service name="adamprescott.net.GeoRSS.MyItemsFeed">
      <endpoint binding="webHttpBinding"
                behaviorConfiguration="webHttpBehavior"
                contract="adamprescott.net.GeoRSS.IRssFeed" />
    </service>
  </services>
</system.serviceModel>

And voila–we were done! Browsing to "http://ourserver/ourapplication/MyItemsFeed/GetItems" gave us our valid GeoRSS feed, and we fed that to our ArcGIS Viewer to plot the points. It all worked swimmingly!