Olive Software e-Editions on Surface RT

Freep_eEdition

I decided to sign up for the newspaper this year, and I’ve really been enjoying it. I’ve said before that I’m a CNN junkie, but you really don’t get a lot of news from CNN. In the mornings, they tend to repeat the same 5-10 minutes’ worth of stories over and over again, presumably catering to the morning rushers who flip on the TV for a few minutes while they’re getting ready. The newspaper gives you SO MUCH MORE information. It’s really great. If you’re like me and haven’t (hadn’t?) picked up a paper in the last decade, maybe it’s time to give it another shot. But I digress…

Along with the physical paper that comes 3 times a week, I also have access to the paper daily via the e-Edition. But there’s a problem: it doesn’t work on my Surface.

The reason it doesn’t work is that it’s Flash-based. IE10 on Windows RT supports Flash, but sites must be whitelisted. So the e-Edition would work fine, but the e-Edition’s vendor, Olive Software, needs to provide code to Microsoft to get on the whitelist. That seems like the easiest solution for Olive, but what I’d really like to see is a Detroit Free Press app in the Windows Store. I’d love to have something similar to the New York Times app with my local Detroit news.


Surface RT Mutes Itself

While typing on my Surface yesterday, I noticed something unusual: the volume kept muting itself. I’d noticed this before, but I figured I just accidentally hit a button. Yesterday was different, though. The volume muted 3 times in a 15-minute span!

The first thing I did was hit the ‘net. Has anybody else experienced this? Yes, there are a lot of forum posts. People seemed to be pointing the finger at Touch Cover and suggesting that you contact Microsoft for a replacement.

I contacted Microsoft support and was told that replacing the cover was not found to fix the issue. I’m hoping it’s a software issue that can be resolved and pushed out in a hotfix, but I’m skeptical. It does seem to occur primarily when I interact with the keyboard.

It’s not a huge issue since the workaround is to simply unmute it, but it is annoying. If anybody hears about a solution, I’d love to hear!

 

Triage Android Battery Drain with BetterBatteryStats

I’ve really been digging Jelly Bean on my Galaxy S2, but the battery life has been a dagger. With my previous Gingerbread ROM, I was easily getting about 1.5-2 days of normal use on a single charge. Since upgrading to CyanogenMod 10 (Beta 2), I’ve only been getting 8-10 hours, and most of that is standby! I’m never away from a convenient charging location for more than a few hours, so this is more of an annoyance than a deal-breaker, but I’d still like to get it fixed.

I thought that maybe the battery drain had to do with CM10, so I switched to AOKP over the weekend. The good news is that I like it just as much as CM10, but I didn’t really see any improvement in battery life. Using the standard battery details in Android settings, I was able to see that the vast majority of my battery (70 to 80%) was being used by Android OS, but that’s as much detail as you get. That’s not very helpful, so I turned to BetterBatteryStats.

BetterBatteryStats

Looking at the screenshots, I wasn’t sure how useful the app was going to be. It was also $3, which is pricey fare for the app store. The app was recommended on many-a-forum, though, and I decided to take the plunge. I’m happy I did.

It’s easy to see what’s eating up your battery by looking at the Partial Wakelocks. Right away, I could see that NetworkLocationService was a major culprit. After just a few minutes of googling, I found that this service was linked to the Wi-Fi & mobile network location setting in Settings > Location Access. I had originally associated this setting only with wi-fi. I knew I should keep wi-fi turned off when I wasn’t using it to conserve battery life, and I (incorrectly) assumed that the location service would be disabled-by-proxy when wi-fi was off. Upon re-reading the setting’s description, I realized it was also using mobile data to update my location. Battery life is a much bigger priority to me than location-based services, so I turned off the setting and improved battery life significantly.

A few hours later, I decided to check the battery stats again. This time, it was the GPS. Like wi-fi, I knew that I was supposed to keep GPS turned off to conserve battery life. I got tricked by the fact that the GPS icon in the status bar is only visible for certain apps, though. I turned it off and boom: more battery life.

If you’re experiencing battery woes, I’d give BetterBatteryStats a look. It was well worth the $3 investment for me. In a day of monitoring and tweaking, I’ve gone from under 10 hours to over 20.

Renumber Enums with Regular Expressions

We had a widely used assembly, released from multiple branches, containing an enumeration without explicitly assigned values, and this was causing problems. In an effort to keep the enumeration synchronized across projects, explicit values were added. The problem is that the values started at 1, whereas the implicit counter starts at 0. The solution is simple: renumber ’em to start at 0. Sounds like a job for regular expressions!

I was really hoping that I could do this using regular expressions in VS2012’s find & replace, but I just couldn’t find a way to implement the necessary arithmetic. After floundering for 15 minutes or so, I decided to just write a simple script in LINQPad. Here’s what I came up with, and it works fantastically.

var filename = @"C:\source\MehType.cs";

// Read the entire source file into memory.
var contents = string.Empty;
using (var fs = new FileStream(filename, FileMode.Open))
using (var sr = new StreamReader(fs))
{
    contents = sr.ReadToEnd();
}

// Find each "= <number>" assignment and decrement its value.
var regex = new Regex(@"(.*?= )(\d+)");
foreach (Match match in regex.Matches(contents))
{
    var num = int.Parse(match.Groups[2].Value);
    contents = contents.Replace(
        match.Value, match.Result("${1}" + --num));
}

// Write the renumbered contents back to the file.
using (var fs = new FileStream(filename, FileMode.Create))
using (var sw = new StreamWriter(fs))
{
    sw.Write(contents);
    sw.Flush();
}

The result is that this…

public enum MehType
{
    Erhmm = 1,
    Glurgh = 2,
    Mfhh = 3
}

…becomes this…

public enum MehType
{
    Erhmm = 0,
    Glurgh = 1,
    Mfhh = 2
}
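For what it’s worth, the same renumbering can also be done in a single pass with Regex.Replace and a MatchEvaluator delegate, which avoids calling string.Replace once per match (and the small risk of one match’s text appearing elsewhere in the file). Here’s a minimal sketch using the sample enum from above:

```csharp
using System;
using System.Text.RegularExpressions;

class EnumRenumberer
{
    static void Main()
    {
        var contents = "public enum MehType\n{\n    Erhmm = 1,\n    Glurgh = 2,\n    Mfhh = 3\n}";

        // The evaluator runs once per match: group 1 is everything up to and
        // including "= ", group 2 is the number, which we decrement.
        var renumbered = Regex.Replace(
            contents,
            @"(.*?= )(\d+)",
            m => m.Groups[1].Value + (int.Parse(m.Groups[2].Value) - 1));

        Console.WriteLine(renumbered); // enum values become 0, 1, 2
    }
}
```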

SkyDrive Desktop App

WelcomeToSkyDrive

Now that I’m all-in on the Surface-Office-SkyDrive solution, I need better access to SkyDrive from my PCs. No problem, there’s a SkyDrive desktop application. Get it here.

It’s everything you want and expect. Installation’s a breeze—just enter your credentials and pick a folder to sync to. Within seconds, the contents of my SkyDrive account were available locally. I can save to the folder, and it uploads automagically. (PS: I just learned that Word recognizes “automagically” as a correctly spelled word. Nice.)

There’s also an option to make any file on the computer available through SkyDrive. I didn’t enable it because I was surprised and got scared, but I might go back and check it out later.

Naming and Capitalization Conventions

CodeSmellz_001_Capitalization

A small group of us were doing a code review, and the topic of capitalization and naming conventions came up. I’m very rigid in my ways and super anal about making sure everything is correct and consistent. Unfortunately, we don’t have any official standards documentation at work to define the style we use. There are some general patterns that are followed, but some of the more controversial topics, like use of var, are left to the preference of the developer. I knew that MSDN had published naming guidelines, so I dug them up to present to the group. Now when we review code, we can nitpick names and capitalization using the argument that “our standard is to follow Microsoft’s guidance.” That’s good with me!

The specific topic that got us started down this path was capitalization for acronyms, and Microsoft offers three rules for dealing with them:

  1. Do capitalize both characters of two-character acronyms, except the first word of a camel-cased identifier. (e.g., DBRate, ioChannel)
  2. Do capitalize only the first character of acronyms with three or more characters, except the first word of a camel-cased identifier. (e.g., XmlWriter, htmlReader)
  3. Do not capitalize any of the characters of any acronyms, whatever their length, at the beginning of a camel-cased identifier. (e.g., xmlStream, dbServerName)

Ahh, just how I like it.

An additional distinction is made for abbreviations. It advises that, generally, abbreviations should not be used in library names. Two exceptions are noted, though: ID and OK. These are acceptable to use in an identifier name and should follow the same casing rules as regular words. In other words, Id and Ok for Pascal-case and id and ok for camel-case.

While we’re on the topic of capitalization, here’s a list of conventions that I follow in terms of Pascal-case versus camel-case. Violating these rules is a good way to irritate me.

  • Class: Pascal
  • Property: Pascal
  • Parameter: Camel
  • Private instance field: Camel, prefixed with “_” (no official guidance; I’m flexible on this one)
  • Public/internal/protected instance field: N/A (use Property instead; read more)
  • Event: Pascal
  • Local variable: Camel
  • Enum types and values: Pascal
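Put together, here’s a quick, hypothetical example (the names are made up for illustration) showing these conventions in one place:

```csharp
using System;

// Class: Pascal; the 3+ character acronym "Xml" capitalizes only its first letter.
public class XmlReportWriter
{
    // Private instance field: camel, prefixed with "_".
    private readonly string _dbServerName;

    // Parameter: camel; acronym at the start of a camel identifier stays lowercase.
    public XmlReportWriter(string dbServerName)
    {
        _dbServerName = dbServerName;
    }

    // Property: Pascal; the two-character acronym "IO" keeps both capitals.
    public int IOTimeout { get; set; }

    // Event: Pascal.
    public event EventHandler ReportSaved;

    // "Id" is one of the two blessed abbreviations, cased like a regular word.
    public void Save(string reportId)
    {
        // Local variable: camel.
        var htmlBody = reportId + " (" + _dbServerName + ")";
        Console.WriteLine(htmlBody);
    }
}
```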

Want to read more? (I know; I love this stuff, too!) Check out the complete guidance here.

Performance Profiling for Unit Tests

When I first got my hands on Visual Studio 2012, I was trying out every feature I could, new and old alike. I love the new Test Explorer. Code coverage is SO much better than it was before. Code clone analysis is terrific, too. The only thing I wasn’t happy about was performance analysis.

The reason I wasn’t happy with performance analysis is that I couldn’t use it on unit tests. Luckily, Microsoft fixed that with the release of Visual Studio 2012 Update 1. Now you can right-click a test and choose Profile Test to run performance analysis on a single unit test, and it is awesome!

ProfileTest_ContextMenu

When you choose to profile a test, the test runs as usual followed by the analysis. When analysis completes, you’re presented with a summary view that shows you CPU usage over time, the “hot path” call tree—a tree of the most active function calls where most of the work was performed—and a list of functions responsible for doing the most individual work.

ProfileTest_SummaryView

You can find out more by selecting a time range on the graph and filtering to investigate resource spikes. That’s all well and good, but what really blew me away was that you can click on the functions with the most individual work to drill into them. Drill into them? Yeah: you’re taken to a view that shows you the selected function, its callers, and the methods it calls. There are percentages that show how much time was spent in each of the three areas (callers, current method, and callees), and you can click into any of the methods displayed to navigate up or down the call stack. The actual code for the current method is also displayed. The only thing that seemed sub-optimal is that I couldn’t edit the code directly; there’s a link to the actual file, though, so you’re only a click away from the editable code file.

ProfileTest_FunctionDetails

There are other, sortable views you can look at, too. You can view a call tree or breakdown by module, and you can get to the same function details view described above from each of those views. It’s a really useful, powerful experience.

Here’s where it gets really nuts, though: add SpecFlow to the mix. SpecFlow lets you write feature-based scenarios that are capable of serving as automated integration tests. The scenarios run like normal unit tests. You can right-click them in the Test Explorer to run performance analysis on them. This means that you can do targeted performance analysis on specific features of your application! To test this out, I sorted my unit tests by execution duration and analyzed the slowest. I was able to find a bottleneck with a caching algorithm used by nearly all of the other modules in the application. Execution time of the 350 unit tests in the project went from 50 seconds to 20. That’s a HUGE improvement from fixing one flaw found from running analysis on one function picked only because it was the most time-consuming in the slowest test.
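If you haven’t seen SpecFlow before: scenarios are written in plain-text Gherkin and wired to C# step bindings. A minimal, hypothetical binding class (names here are invented for illustration) looks something like this:

```csharp
using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using TechTalk.SpecFlow;

// Hypothetical step bindings for a caching scenario; the matching Gherkin
// steps ("Given the cache is empty", etc.) live in a .feature file.
[Binding]
public class CachingSteps
{
    private readonly Dictionary<string, string> _cache = new Dictionary<string, string>();
    private string _result;

    [Given(@"the cache is empty")]
    public void GivenTheCacheIsEmpty()
    {
        _cache.Clear();
    }

    [When(@"I request the value for ""(.*)""")]
    public void WhenIRequestTheValueFor(string key)
    {
        _cache.TryGetValue(key, out _result);
    }

    [Then(@"no value should be returned")]
    public void ThenNoValueShouldBeReturned()
    {
        Assert.IsNull(_result);
    }
}
```

Because SpecFlow generates a regular unit test for each scenario, these show up in Test Explorer and can be profiled just like any other test.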

Good tests are supposed to run quickly, and performance analysis is an invaluable tool to help you triage valuable-but-poor-performing tests. Also, since you’ve got automated tests, you can refactor and optimize the performance of your application with high confidence. If you haven’t used performance analysis before—more specifically, performance analysis for unit tests—give it a shot; I’d be blown away if you didn’t find incredible value.

The Office Apps Store

One of the things that is disappointing about being an early adopter of Windows RT is that the Windows App Store is a bit sparse. This really isn’t any different from what I went through with Windows Phone 7, and I’m not surprised by it. I know the apps are coming—I just want to know when!

So, in an effort to find out when more apps were coming, I was poking around online. I didn’t find what I was looking for, but I did find something interesting that might eventually turn into something useful: the Office Apps Store. I say “eventually” because I didn’t see anything that seemed particularly useful right now, but it’s a fun idea with lots of possibilities.

OfficeAppStore_1

Just to try it out, I installed the Merriam-Webster Dictionary app. Installation is reasonably simple: just click the link from the store, and you’re done. The app is then accessible from the corresponding Office app’s ribbon.

OfficeAppStore_2

There’s nothing revolutionary about the app. It’s a dictionary. You can look up words. But, it works like it should, and I’m happy to have it. I was pleased to find that right-clicking a word and selecting Define looks up the word in the app—that was nice.

OfficeApps_Ribbon

OfficeApps_Dialog

That’s all for now. I’ll be keeping an eye on the store, checking in from time to time. Maybe something cool will turn up! Do you know about a cool Office app? Please share!

Post to Twitter with DotNetOpenAuth

DotNetOpenAuth_TwitterAPI

A few weeks ago, I started looking into using the Twitter API for automatic, event-based status updates in an application. I wanted to understand what was going on, so I didn’t want to simply download Twitterizer or LINQ to Twitter. Learning OAuth has been a challenge, to put it lightly. I’ve learned a lot, but I’m still pretty clumsy with it. Today, I found out about a new open source project that seems like just what I needed: DotNetOpenAuth.

Using DotNetOpenAuth, I was able to create a functional console application that posts to Twitter in about 200 lines of code. This post will walk you through the steps.

The first thing you need to do is create a new Twitter application. Go to dev.twitter.com, sign in (or sign up), and create the application. If you want to post status updates with your application—like we’re doing here—be sure to click to the Settings tab and change the Application Type to Read and Write.

Once you’ve created your application with Twitter, it’s time to create your project in Visual Studio. I’ll be using a console application in an effort to keep it simple. The first thing you’ll want to do after creating the project is install the DotNetOpenAuth NuGet package. (Not sure how to use NuGet? Start here!)

Now it’s time to get down to business. We’re going to start by creating a token manager. Most of the tutorials online seem to use a simple, in-memory token manager, and I’m going to follow suit. In a real application, you’ll want to store the access tokens and access token secrets so that you don’t have to authorize each time the application runs.

namespace adamprescott.net.TweetConsole
{
    using DotNetOpenAuth.OAuth.ChannelElements;
    using DotNetOpenAuth.OAuth.Messages;
    using DotNetOpenAuth.OpenId.Extensions.OAuth;
    using System;
    using System.Collections.Generic;

    public class TokenManager : IConsumerTokenManager
    {
        private static Dictionary<string, string> TokenSecrets = 
            new Dictionary<string, string>();

        public TokenManager(string consumerKey, string consumerSecret)
        {
            ConsumerKey = consumerKey;
            ConsumerSecret = consumerSecret;
        }

        public string ConsumerKey { get; private set; }

        public string ConsumerSecret { get; private set; }

        public string GetTokenSecret(string token)
        {
            return TokenSecrets[token];
        }

        public void StoreNewRequestToken(UnauthorizedTokenRequest request,
            ITokenSecretContainingMessage response)
        {
            TokenSecrets[response.Token] = response.TokenSecret;
        }

        public void ExpireRequestTokenAndStoreNewAccessToken(
            string consumerKey,
            string requestToken,
            string accessToken,
            string accessTokenSecret)
        {
            TokenSecrets.Remove(requestToken);
            TokenSecrets[accessToken] = accessTokenSecret;
        }

        public TokenType GetTokenType(string token)
        {
            throw new NotImplementedException();
        }

        public void StoreOpenIdAuthorizedRequestToken(string consumerKey,
            AuthorizationApprovedResponse authorization)
        {
            TokenSecrets[authorization.RequestToken] = String.Empty;
        }
    }
}

The next thing we need is a consumer wrapper. This wrapper is where we’ll specify the OAuth token URLs and expose three methods that we’ll use from our main application: BeginAuth, CompleteAuth, and PrepareAuthorizedRequest.

namespace adamprescott.net.TweetConsole
{
    using DotNetOpenAuth.Messaging;
    using DotNetOpenAuth.OAuth;
    using DotNetOpenAuth.OAuth.ChannelElements;
    using System.Collections.Generic;
    using System.Net;

    public class TwitterConsumer
    {
        private string _requestToken = string.Empty;

        public DesktopConsumer Consumer { get; set; }
        public string ConsumerKey { get; set; }
        public string ConsumerSecret { get; set; }

        public TwitterConsumer(string consumerKey, string consumerSecret)
        {
            ConsumerKey = consumerKey;
            ConsumerSecret = consumerSecret;

            var providerDescription = new ServiceProviderDescription
            {
                RequestTokenEndpoint = new MessageReceivingEndpoint(
                    "https://api.twitter.com/oauth/request_token",
                    HttpDeliveryMethods.PostRequest),
                UserAuthorizationEndpoint = new MessageReceivingEndpoint(
                    "https://api.twitter.com/oauth/authorize",
                    HttpDeliveryMethods.GetRequest),
                AccessTokenEndpoint = new MessageReceivingEndpoint(
                    "https://api.twitter.com/oauth/access_token", 
                    HttpDeliveryMethods.GetRequest),
                TamperProtectionElements = new ITamperProtectionChannelBindingElement[] 
                {
                    new HmacSha1SigningBindingElement()
                }
            };

            Consumer = new DesktopConsumer(
                providerDescription,
                new TokenManager(ConsumerKey, ConsumerSecret));
        }

        public string BeginAuth()
        {
            var requestArgs = new Dictionary<string, string>();
            return Consumer
                .RequestUserAuthorization(requestArgs, null, out _requestToken)
                .AbsoluteUri;
        }

        public string CompleteAuth(string verifier)
        {
            var response = Consumer.ProcessUserAuthorization(
                _requestToken, verifier);
            return response.AccessToken;
        }

        public HttpWebRequest PrepareAuthorizedRequest(
            MessageReceivingEndpoint endpoint,
            string accessToken, 
            IEnumerable<MultipartPostPart> parts)
        {
            return Consumer.PrepareAuthorizedRequest(endpoint, accessToken, parts);
        }

        public IConsumerTokenManager TokenManager
        {
            get
            {
                return Consumer.TokenManager;
            }
        }
    }
}

All that’s left to do now is put it all together. The main application needs your Twitter application’s consumer key and consumer secret. (Both of those values can be found on the Details tab of the Twitter application.) Those values are passed to the consumer wrapper which can then produce an authorization URL. We’ll prompt the user for credentials by opening the URL in a web browser. The authorization process will be completed when the user enters their PIN from Twitter into the console application. Once authorized, the application can post to Twitter on behalf of the user. I added a simple loop that prompts the user and tweets their input.

namespace adamprescott.net.TweetConsole
{
    using DotNetOpenAuth.Messaging;
    using System;
    using System.Diagnostics;

    class Program
    {
        const string _consumerKey = "~consumerkey~";
        const string _consumerSecret = "~consumersecret~";
        private TwitterConsumer _twitter;

        static void Main(string[] args)
        {
            var p = new Program();
            p.Run();
        }

        public Program()
        {
            _twitter = new TwitterConsumer(_consumerKey, _consumerSecret);
        }

        void Run()
        {
            var url = _twitter.BeginAuth();
            Process.Start(url);
            Console.Write("Enter PIN: ");
            var pin = Console.ReadLine();
            var accessToken = _twitter.CompleteAuth(pin);

            while (true)
            {
                Console.Write("Tweet ('x' to exit) /> ");
                var tweet = Console.ReadLine();
                if (string.Equals("x", tweet, StringComparison.CurrentCultureIgnoreCase))
                {
                    break;
                }
                Tweet(accessToken, tweet);
            }
        }

        void Tweet(string accessToken, string message)
        {
            var endpoint = new MessageReceivingEndpoint(
                "https://api.twitter.com/1.1/statuses/update.json",
                HttpDeliveryMethods.PostRequest | HttpDeliveryMethods.AuthorizationHeaderRequest);

            var parts = new[]
            {
                MultipartPostPart.CreateFormPart("status", message)
            };

            var request = _twitter.PrepareAuthorizedRequest(endpoint, accessToken, parts);

            // Send the request; dispose the response since we don't need its body.
            using (request.GetResponse())
            {
            }
        }
    }
}

The full source code for this sample is available on GitHub. Note that you’ll need to provide your application’s consumer key and secret in order to make the sample functional.

Collection Lookups

FindInCollection

Yesterday, I was discussing a method with a co-worker where I suggested we loop through a collection of records and, for each record, do another retrieval-by-ID via LINQ. He brought up that this would probably be done more efficiently by creating a dictionary before the loop and retrieving from the dictionary instead of repeatedly executing the LINQ query. So I decided to do some research.

Firstly, I learned about two new LINQ methods: ToDictionary and ToLookup. Lookups and dictionaries serve a similar purpose, but the primary distinction is that a lookup will allow duplicate keys. Check out this article for a quick comparison of the two structures.
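To see the distinction concretely, here’s a small sketch (the sample data is mine, just for illustration):

```csharp
using System;
using System.Linq;

class LookupVsDictionary
{
    static void Main()
    {
        var words = new[] { "apple", "avocado", "banana" };

        // ToLookup tolerates duplicate keys: both 'a' words land in one group.
        var byFirstLetter = words.ToLookup(w => w[0]);
        Console.WriteLine(byFirstLetter['a'].Count()); // 2

        // ToDictionary demands unique keys; a duplicate throws ArgumentException.
        try
        {
            var dict = words.ToDictionary(w => w[0]);
        }
        catch (ArgumentException)
        {
            Console.WriteLine("Duplicate key!");
        }
    }
}
```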

With my new tools in hand, I wanted to compare the performance. I first came up with a test. I created a collection of simple objects that had an ID and then looped through and retrieved each item by ID. Here’s what the test looks like:

void Main()
{
	var iterations = 10000;
	var list = new List<Human>();
	for (int i = 0; i < iterations; i++)
	{
		list.Add(new Human(i));
	}
	
	var timesToAvg = 100;
	
	Console.WriteLine("Avg of .Where search: {0} ms", 
		AverageIt((l, i) => TestWhere(l, i), list, iterations, timesToAvg));
	
	Console.WriteLine("Avg of for-built Dictionary search: {0} ms", 
		AverageIt((l, i) => TestDictionary(l, i), list, iterations, timesToAvg));
		
	Console.WriteLine("Avg of LINQ-built Dictionary search: {0} ms", 
		AverageIt((l, i) => TestToDictionary(l, i), list, iterations, timesToAvg));
		
	Console.WriteLine("Avg of Lookup search: {0} ms", 
		AverageIt((l, i) => TestLookup(l, i), list, iterations, timesToAvg));
}

decimal AverageIt(Action<List<Human>, int> action, List<Human> list, int iterations, int timesToAvg)
{
	var sw = new Stopwatch();
	
	decimal sum = 0;
	for (int i = 0; i < timesToAvg; i++)
	{
		sw.Reset();
		sw.Start();
		action(list, iterations);
		sw.Stop();
		sum += sw.ElapsedMilliseconds;
	}
	return sum / timesToAvg;
}

class Human
{
	public int id;
	
	public Human(int id)
	{
		this.id = id;
	}
}

Then, I wrote a method for each algorithm I wanted to test: using .Where, using a manually-built dictionary, using a ToDictionary-built dictionary, and using a lookup. Here are the methods I wrote for each of the algorithms:

void TestWhere(List<Human> list, int iterations)
{	
	for (int i = 0; i < iterations; i++)
	{
		var h = list.Where(x => x.id == i).FirstOrDefault();
	}
}

void TestDictionary(List<Human> list, int iterations)
{
	var dict = new Dictionary<int, Human>();
	foreach (var h in list)
	{
		dict.Add(h.id, h);
	}
	for (int i = 0; i < iterations; i++)
	{
		var h = dict[i];
	}
}

void TestToDictionary(List<Human> list, int iterations)
{
	var dict = list.ToDictionary(x => x.id);
	for (int i = 0; i < iterations; i++)
	{
		var h = dict[i];
	}
}

void TestLookup(List<Human> list, int iterations)
{
	var lookup = list.ToLookup(
		x => x.id,
		x => x);
	for (int i = 0; i < iterations; i++)
	{
		var h = lookup[i];
	}
}

Here are the results:

Avg of .Where search: 987.89 ms
Avg of for-built Dictionary search: 1.85 ms
Avg of LINQ-built Dictionary search: 1.67 ms
Avg of Lookup search: 2.14 ms

I would say that the results are what I expected in terms of what performed best. I was surprised by just how poorly the .Where queries performed, though; it was awful! One note about the manually built dictionary versus the one produced by LINQ’s ToDictionary method: in repeated tests, the better-performing method was inconsistent, leading me to believe that there is no significant benefit or disadvantage to using one or the other. I’ll likely stick with ToDictionary in the future due to its brevity, though.

These results suggest that a dictionary is optimal for lookups when key uniqueness is guaranteed. If the key is not unique or its uniqueness is questionable, a lookup should be used instead. Never do what I wanted to do, though, and use a .Where as an inner-loop lookup retrieval mechanism.

12/10/2012 Update:
A co-worker pointed out that I don’t need to chain Where and FirstOrDefault. Instead, I can just use FirstOrDefault with a lambda. So I added this to the test app to see how it compared. Surprisingly, this seems to consistently run slower than using Where in conjunction with FirstOrDefault!

void TestFirstOrDefault(List<Human> list, int iterations)
{	
	for (int i = 0; i < iterations; i++)
	{
		var h = list.FirstOrDefault(x => x.id == i);
	}
}

We also agreed that there should be a for-each loop as a base comparison, so I added that as well.

void TestForEach(List<Human> list, int iterations)
{
	for (int i = 0; i < iterations; i++)
	{
		foreach (var x in list)
		{
			if (i == x.id)
			{
				break;
			}
		}
	}
}

Here are the full results with the two new algorithms:

Avg of ForEach search: 741.05 ms
Avg of .Where search: 980.13 ms
Avg of .FirstOrDefault search: 1189.01 ms
Avg of for-built Dictionary search: 1.57 ms
Avg of LINQ-built Dictionary search: 1.57 ms
Avg of Lookup search: 1.74 ms

**********
Complete code:

void Main()
{
	var iterations = 10000;
	var list = new List<Human>();
	for (int i = 0; i < iterations; i++)
	{
		list.Add(new Human(i));
	}
	
	var timesToAvg = 100;
	
	Console.WriteLine("Avg of ForEach search: {0} ms", 
		AverageIt((l, i) => TestForEach(l, i), list, iterations, timesToAvg));
	
	Console.WriteLine("Avg of .Where search: {0} ms", 
		AverageIt((l, i) => TestWhere(l, i), list, iterations, timesToAvg));
		
	Console.WriteLine("Avg of .FirstOrDefault search: {0} ms", 
		AverageIt((l, i) => TestFirstOrDefault(l, i), list, iterations, timesToAvg));
	
	Console.WriteLine("Avg of for-built Dictionary search: {0} ms", 
		AverageIt((l, i) => TestDictionary(l, i), list, iterations, timesToAvg));
		
	Console.WriteLine("Avg of LINQ-built Dictionary search: {0} ms", 
		AverageIt((l, i) => TestToDictionary(l, i), list, iterations, timesToAvg));
		
	Console.WriteLine("Avg of Lookup search: {0} ms", 
		AverageIt((l, i) => TestLookup(l, i), list, iterations, timesToAvg));
}

decimal AverageIt(Action<List<Human>, int> action, List<Human> list, int iterations, int timesToAvg)
{
	var sw = new Stopwatch();
	
	decimal sum = 0;
	for (int i = 0; i < timesToAvg; i++)
	{
		sw.Reset();
		sw.Start();
		action(list, iterations);
		sw.Stop();
		sum += sw.ElapsedMilliseconds;
	}
	return sum / timesToAvg;
}

class Human
{
	public int id;
	
	public Human(int id)
	{
		this.id = id;
	}
}

void TestForEach(List<Human> list, int iterations)
{
	for (int i = 0; i < iterations; i++)
	{
		foreach (var x in list)
		{
			if (i == x.id)
			{
				break;
			}
		}
	}
}

void TestWhere(List<Human> list, int iterations)
{	
	for (int i = 0; i < iterations; i++)
	{
		var h = list.Where(x => x.id == i).FirstOrDefault();
	}
}

void TestFirstOrDefault(List<Human> list, int iterations)
{	
	for (int i = 0; i < iterations; i++)
	{
		var h = list.FirstOrDefault(x => x.id == i);
	}
}

void TestDictionary(List<Human> list, int iterations)
{
	var dict = new Dictionary<int, Human>();
	foreach (var h in list)
	{
		dict.Add(h.id, h);
	}
	for (int i = 0; i < iterations; i++)
	{
		var h = dict[i];
	}
}

void TestToDictionary(List<Human> list, int iterations)
{
	var dict = list.ToDictionary(x => x.id);
	for (int i = 0; i < iterations; i++)
	{
		var h = dict[i];
	}
}

void TestLookup(List<Human> list, int iterations)
{
	var lookup = list.ToLookup(
		x => x.id,
		x => x);
	for (int i = 0; i < iterations; i++)
	{
		var h = lookup[i];
	}
}