Excel Football Squares Pool Generator

Update: You can find a downloadable version of the spreadsheet here.

The Super Bowl is right around the corner, and you know what that means: it’s time for some low-stakes office gambling! Yep, football squares. In case you aren’t familiar, the idea is that you blindly buy squares in a 10 by 10 grid. Once all the squares are sold, the rows and columns are randomly assigned values from 0 to 9. One team is assigned to the row values and the other to the column values. The last digit of each team’s score at the end of each quarter is then used to pick a square, and whoever owns that square wins some percentage of the proceeds.

What’s that, you say? Grids? Random numbers? This sounds like a job for Excel!

You can probably come up with a few different ways to do this, but I chose to generate two sets of ten random numbers. I then assign values to the rows and columns based on the row index of the 1st largest, 2nd largest, 3rd largest, etc. number in the list.

This is what my numbers grid looks like:

Index   =RAND()      Assigned   Index   =RAND()      Assigned
1       0.95708204   0          1       0.96026041   6
2       0.60117232   9          2       0.11468243   0
3       0.17740544   6          3       0.60526773   7
4       0.22298474   4          4       0.76106966   8
5       0.89427173   1          5       0.76476649   4
6       0.41933546   5          6       0.6840864    3
7       0.89535015   3          7       0.97857437   5
8       0.05144228   2          8       0.93037023   2
9       0.17105286   8          9       0.90558099   9
10      0.95489495   7          10      0.21003331   1

The first and fourth columns are just hand-entered values from 1 to 10. The second and fifth columns are just random numbers generated by =RAND(). The third and sixth columns are where the magic happens. The formula, shown below, finds the Nth largest value, where N is the hand-entered number from the first or fourth column, and then uses the MATCH function to return the index of that value in the grid. The indexes are 1-based, so I subtract 1 to ensure that my values range from 0 to 9. Here’s what the formula looks like:

=MATCH(LARGE(B$1:B$10,A1),B$1:B$10,0)-1
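
For example, in the grid above, A1 is 1, so LARGE(B$1:B$10,1) returns the largest random number in column B (0.95708204). MATCH finds that value in position 1 of the list, and subtracting 1 produces the 0 shown in the third column.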

Now that I have two sets of random numbers, all I need to do is assign them to my squares grid. I did this just by adding a formula to each cell (=C1, =C2, etc.). Bam, that’s all there is to it. No more drawing cards or pieces of paper out of a hat: just hit F9 to have Excel recalculate its formulas and generate the whole thing at once.
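
For example, if the squares grid’s row labels start in H2 and its column labels start in I1 (these addresses are just illustrative, not from the original sheet), each label cell is nothing more than a direct reference:

H2: =C1    H3: =C2    ...    H11: =C10
I1: =F1    J1: =F2    ...    R1: =F10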

bar:        6   0   7   8   4   3   5   2   9   1
foo:    0
        9
        6
        4
        1
        5
        3
        2
        8
        7

Custom Configuration Sections

So you want to create some custom config sections, but you don’t want to mess with custom classes that inherit from ConfigurationSection and ConfigurationElement? You just want some logical groups of key-value pairs? Well, friend, I’ve got some good news for you: it’s super easy!

To create a simple collection of key-value pair settings, you can use the NameValueSectionHandler class. Just add a section to your configuration file’s configSections, add your settings, and you’re good to go! (Note that your project will need a reference to System.Configuration.)

Here’s a sample config:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <configSections>
    <section name="Alabama" type="System.Configuration.NameValueSectionHandler" />
    <section name="Georgia" type="System.Configuration.NameValueSectionHandler" />
  </configSections>
  <Alabama>
    <add key="state" value="ALABAMA!" />
  </Alabama>
  <Georgia>
    <add key="state" value="GEORGIA!" />
  </Georgia>
</configuration>

And here’s the sample code to access the custom sections and settings:

using System;
using System.Collections.Specialized;
using System.Configuration;

namespace MultipleConfigSections
{
    class Program
    {
        static void Main(string[] args)
        {
            // For sections registered with NameValueSectionHandler, GetSection returns the
            // key-value pairs as a (read-only) NameValueCollection.
            var alSection = ConfigurationManager.GetSection("Alabama") as NameValueCollection;
            Console.WriteLine(alSection["state"]);

            var gaSection = ConfigurationManager.GetSection("Georgia") as NameValueCollection;
            Console.WriteLine(gaSection["state"]);

            Console.WriteLine();
        }
    }
}

If you’re looking for something more sophisticated, you’ll probably want to check out this MSDN article for a quick example that uses ConfigurationSection and ConfigurationElement. Need a collection? You’ll probably need ConfigurationElementCollection, too.
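
If you do go the richer route, a typed section boils down to a class with ConfigurationProperty attributes on its properties. Here’s a minimal sketch; the section name, class, and attributes are illustrative rather than taken from the article above:

using System.Configuration;

public class AppDefaultsSection : ConfigurationSection
{
    // Maps to <appDefaults environment="Production" retryCount="5" /> in the config file,
    // registered via <section name="appDefaults" type="MyApp.AppDefaultsSection, MyApp" />.
    [ConfigurationProperty("environment", IsRequired = true)]
    public string Environment
    {
        get { return (string)this["environment"]; }
    }

    [ConfigurationProperty("retryCount", DefaultValue = 3)]
    public int RetryCount
    {
        get { return (int)this["retryCount"]; }
    }
}

Reading it back is the same ConfigurationManager.GetSection call, just cast to the typed class instead of NameValueCollection.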

Property Behavior in Rhino Mocks and Stubs

A co-worker presented me with a scenario where assigning and accessing a property from a Rhino Mocks stub worked as expected, but changing the stub to a dynamic mock caused the test to fail. This didn’t really seem like a stub-versus-mock question since stubs and dynamic mocks generally function similarly, with the difference being that mocks allow you to verify expectations whereas stubs only allow you to stub return values.
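
The snippets below work against a trivial interface with a single read/write property. The original doesn’t show it, but assume something like this:

public interface ISample
{
    string Value { get; set; }
}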

Here’s a simplified version of the problem:

[TestMethod]
public void TestWithStub()
{
    var sampleStub = MockRepository.GenerateStub<ISample>();
    sampleStub.Value = "bar";

    // success!
    Assert.AreEqual("bar", sampleStub.Value);
}

[TestMethod]
public void TestWithMock()
{
    var sampleMock = MockRepository.GenerateMock<ISample>();
    sampleMock.Value = "foo";

    // fail!
    Assert.AreEqual("foo", sampleMock.Value);
}

I’ve seen guidance online suggesting that this is the correct way to work with stubbed properties, which is a big part of what makes the failure confusing. However, per the Rhino Mocks 3.3 Quick Reference Guide, the correct way to handle property getters and setters on mocks is to use Expect.Call(foo.Name).Return("Bob") and Expect.Call(foo.Name = "Bob"), respectively. The quick reference guide also describes a way to get "automatic properties" by using the PropertyBehavior method, which allows properties on mocks to function like they do on stubs. That behavior is enabled on stubs by default, which is what causes the out-of-the-box behavior to differ.

By adding a call to PropertyBehavior, the seemingly-correct sample test above succeeds.

[TestMethod]
public void TestWithMock()
{
    var sampleMock = MockRepository.GenerateMock<ISample>();
    sampleMock.Expect(x => x.Value).PropertyBehavior();
    sampleMock.Value = "foo";

    // success!
    Assert.AreEqual("foo", sampleMock.Value);
}

Note that although we use an Expect method, there is no expectation set when using the PropertyBehavior method. This means that the test will pass even if the property is not used at all. If you need to verify getters and setters, you should use mock.Expect(x => x.Value).Return("foo") and mock.Expect(x => x.Value = "bar").
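
Here’s a rough sketch of what that verification looks like, again using the hypothetical ISample interface; treat the exact calls as an approximation rather than gospel:

[TestMethod]
public void TestWithVerifiedExpectations()
{
    var sampleMock = MockRepository.GenerateMock<ISample>();

    // Expect the setter to be called with "bar" and the getter to be called.
    sampleMock.Expect(x => x.Value = "bar");
    sampleMock.Expect(x => x.Value).Return("foo");

    // Normally the code under test would do this part.
    sampleMock.Value = "bar";
    Assert.AreEqual("foo", sampleMock.Value);

    // Fails the test if either expectation was never satisfied.
    sampleMock.VerifyAllExpectations();
}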

Pin Presentation Settings to Start in Win RT

[Screenshot: Presentation Settings pinned to the Start screen]

One of the problems I’ve run into with my Surface RT is that the screen timeout interferes with a number of common tasks. I use my Surface to run my team’s morning standup meetings, and I have to keep touching the screen every 30 seconds to keep it active. Or, if I want to listen to music using Pandora or Google Music, the jams stop when the screen turns off. (I believe the music will continue to stream with the screen off if you’re plugged in, but I’m not usually plugged in.) I ran into a similar issue while watching a football game on Espn3.com.

No problem–there’s an application to control your computer’s presentation settings that’s built right into Windows. It’s trickier than you’d expect to create a shortcut to it, though.

[Screenshot: Presentation Settings]

I found the setting by going to the Start screen and typing Presentation. At first, it looks like you get no results, but that’s because the default search bucket is Applications and Presentation Settings is under Settings. Checking the "I am currently giving a presentation" setting disables the system timers, allowing you to keep your Surface active for an extended period for an actual presentation or for streaming music and video from the web. I need to access this pretty frequently, so I wanted to pin it to my Start screen, but, since it’s under Settings, Windows doesn’t let you select it for pinning. Blurg.

There is a way to do it, but it’s not very intuitive. When you launch the application from Settings, it fires up in desktop mode. With the application running in desktop mode, you can right-click it in the taskbar, then right-click Microsoft Mobile PC Presentation Adaptability Client, and choose Properties. It will open a dialog where you can then click a button to open the file location. You’ll be whisked off to C:\Windows\System32, and the executable–PresentationSettings.exe–will be selected. Voila! You can right-click the file and pin it to the Start screen.

[Screenshot: PresentationSettings.exe in C:\Windows\System32]

The ultra-abbreviated version of this post is to simply browse to C:\Windows\System32 and right-click PresentationSettings.exe. Note that the screen will still turn off if you flip the keyboard up to cover the screen while in presentation mode, so you won’t kill your battery if you forget to re-enable the timers before closing your Surface and chucking it in your backpack. If you leave it open on a desk, you probably will, though. To maximize your battery life, you’ll definitely want to re-enable timers if you don’t need to keep the computer awake for a specific reason.

Finding Time to Innovate

Innovation is the backbone of any software development effort. If you aren’t doing something new, what’s the point? Without new ideas, you’ll never be first, you’ll never have something that your competitors don’t, and you will never be the best.

I think most people would agree with these statements when talking about a software company or product, but I’m actually talking about developers. You, the developer, need to constantly explore new ideas and learn new skills. Doing the things you’ve always done, the way you’ve always done them, will likely never produce anything more than a marginal improvement over what you have now. (Hey, more = better, right?) In other words, you need to do new things in new ways in order to produce something significantly better than what you have now.

This is where things get a little less straightforward. How do you learn to do things you haven’t done in ways you haven’t done them before? In my opinion, it’s all about exploration and experimentation. When I read an article about a new tool or language feature, I’ll spend some time playing around with it. I don’t necessarily have a use in mind; I just want to see what it’s about. The result over time is that I have a whole host of things I’ve messed around with that are implementation-detail candidates on future projects. Additionally, when discussing new projects, I can say things like, “This is useful, but it would be really cool if we added X. Here’s how we can do that.”

We can all agree that innovation is essential, and it’s important for developers to spend time learning and exploring to help with the innovating. There are only so many hours in the day, though, and you’ve probably got other, more important things to work on. You’d love to spend time trying new things, but your boss isn’t going let you have free time to do that. After all, there’s money to be made! So what can you do?

I work for a somewhat old-school software company. The senior management overseeing development isn’t likely to institute 20% time any time soon. They aren’t going to designate a percentage of hours as play time, but that doesn’t mean that creativity is forbidden or frowned upon. It just means you have to find the time yourself. I’d venture to say that most software developers are salaried employees obligated to 40 hours per week but expected to work more. What if you spend just one hour each day learning something new? You’ll probably still be giving 40+ hours of effort to the "actual work," but you’ll also be learning new things that interest you. Fast math shows that 5 hours is about 11% of a 45-hour week, so by taking 1 hour each day you effectively create "10% time" for yourself. If one hour is too much, take 30 minutes, or do it every other day. Or take the time and stick it on the end of your lunch hour if you need a bigger block.

By allocating “you time” and spending it, you’re going to grow professionally, and that benefits both you and your employer. You’ll be better equipped to tackle complex problems in the future, and you’ll have fresh ideas for how to solve old problems. You’ll also have increased job satisfaction because you get to work on things that interest you in addition to your regular assignments. I believe it’s a true win-win scenario. So stop worrying about the hours you’re “given,” and go learn something!

VS2012 Debugger Jumping Around For VS2010 Unit Tests

One of the big selling points of Visual Studio 2012 for early adopters is that you can work seamlessly with Visual Studio 2010 projects and 2010 users. I’ve found this to be largely true and have only run into one significant problem. It’s related to debugging unit tests, and it’s a real head-scratcher. The symptom is that everything is hunky-dory in VS2010: you hit breakpoints, step through code, and everything works just like you’d expect. In VS2012, however, you get consistent weirdness. You’ll stop on a breakpoint and try to step over a single line of code, but the debugger jumps two lines ahead. Variable contents don’t match what the highlighted line of code says they should be. It’s really confusing.

From what I can gather from this forum post (you might want to skip to the bottom for solutions), the problem is due to how VS2010 instruments assemblies to measure code coverage. The solution that I found to work was to manually edit and remove the AgentRule section from the local.testsettings file.

I suspect the problem could also be solved by disabling code coverage from within VS2010 and then re-enabling it in VS2012. One of my favorite things about VS2012 is that you don’t need to manually instrument an assembly to analyze code coverage. Since VS2010 does need this, there is a special setting in VS2012 that can be enabled to maintain backward compatibility.

[Screenshot: the Code Coverage (Visual Studio 2010) setting in VS2012]

So, just to recap, I suggest the following two solutions if you’re getting debugger jump-arounds while debugging VS2010 unit tests in VS2012:

  1. In Visual Studio 2010, disable code coverage. Then, in Visual Studio 2012, enable the Code Coverage (Visual Studio 2010) setting and select the assemblies to instrument.
  2. If solution #1 fails, edit your .testsettings file manually and remove the AgentRule section.

For either of the solutions above, be sure to clean your output directory. Make sure to get rid of all DLLs and PDBs.
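
For reference, the AgentRule block from solution #2 lives under the Execution element of the .testsettings file and wraps the code coverage data collector. From memory, it looks roughly like this; the names, attributes, and collector configuration will differ in your file:

<TestSettings name="Local" xmlns="http://microsoft.com/schemas/VisualStudio/TeamTest/2010">
  <Execution>
    <!-- Removing this entire AgentRule element removes the VS2010-style instrumentation. -->
    <AgentRule name="LocalMachineDefaultRole">
      <DataCollectors>
        <DataCollector uri="datacollector://microsoft/CodeCoverage/1.0" friendlyName="Code Coverage" />
      </DataCollectors>
    </AgentRule>
  </Execution>
</TestSettings>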

Tactus Dynamic Surface

I was listening to NPR on the way to work, and there was a short story about a company named Tactus that makes a dynamic touch screen. When I say dynamic, I mean physically–it actually changes shape. Go look at the pictures on their website, and keep in mind that all those buttons are created on-demand. It’s pretty remarkable.

“We make this invisible network of channels, and fill these little channels with a special oil, and then we use simply a change in the pressure of that fluid to make the buttons appear and disappear,” says Tactus CEO Craig Ciesla.

This sounds pretty awesome. I’m imagining a tablet with an onscreen keyboard with raised buttons. Ever tried to play a game on your phone with an onscreen touch keypad? It’s hard. Your fingers slip off the buttons while you’re paying attention to the action, and then you lose because you’re not hitting the buttons you mean to. Now imagine the same game on a screen that pops up buttons when they’re needed. How amazing would that be!?

It sounded cool on the radio, but you should definitely check out the video!

Surface Pro Coming This Month?

Rumors are flying that Microsoft will release the Pro version of its Surface tablet this month. The most interesting news I’ve seen on the topic is that Office 2013 won’t be included. As an MSDN subscriber, this wouldn’t be a big deal for me. However, I’d be pretty annoyed as a regular consumer.

Taking off my techie hat for a moment, it doesn’t make any sense. I can buy RT, a “lite” version of Windows, and get a free version of Office. Alternatively, I can pay $300 more for the Pro version that gives me less capability out of the box. If I want Office, I have to pay even more.

Now, putting my techie hat back on, I understand that the Pro version gives you a full version of Windows, and that’s what you’re paying for. You can install your old copy of Office 2003 or download a free alternative like OpenOffice.org–something you can’t do with Surface RT. If Microsoft didn’t provide Office with RT, there would be no offline alternative available, and that would be a huge problem. That said, it still seems like a silly move to me. Think about every feature graph you’ve ever seen that compares different versions of a product: the checkmarks usually don’t disappear as you move to the “advanced” versions.

The reality is that Surface Pro is intended for business and professional users. The Home & Student version of Office that ships with Surface RT probably isn’t sufficient for them, and they’d likely upgrade to a better version of Office anyway. So why not throw the average Joe User a bone and include a free version of Office? Is this just another example of corporate greed? It sure feels like it.

On a lighter, less-ranty note, I’m very curious to see how Surface Pro users will really use their tablets. I convinced myself that the things that appealed to me about the Pro weren’t realistic uses. I’d love to install Visual Studio, but I’m not going to sit down and develop applications on it. I’d like to install Photoshop, but I’m not going to be editing graphics on it. Or… maybe I would, and I’m just trying to convince myself that my hasty decision not to wait for Surface Pro was the right one. I’m also interested in the difference in battery life, as that was another key justification in my decision to go with RT. I think I have a few co-workers who are ready to pull the trigger on a Pro as soon as it’s available, so I should have answers to these questions soon.

Serif DrawPlus

I’ve used an old version of Photoshop for all of my crappy, makeshift graphics needs over the past decade. It’s served me well, but I’ve been wanting to get something newer for a while. More specifically, I wanted a vector drawing application. I’ve loved Photoshop, which made Illustrator the obvious choice, but I’m not interested in the $600 price tag. Adobe does have more attractive, subscription-based pricing options, but I’d rather pay once and be done with it.

With that in mind, I headed to the internet to explore options. I found DrawPlus from Serif, which has a free-to-use Starter Edition. I downloaded it to check it out and liked it very much. What I liked most was the How To tab that changes as you select different tools. I’m far from a graphics pro, and I usually stick to the tools that I know. This gives me the ability to learn new tools on the fly. Serif also offers many online tutorials, which is nice.

[Screenshot: the DrawPlus How To tab]

I was trying this out last week, and Serif was running a sale where one-version-old editions were 90% off. I scooped up DrawPlus 4 for just $15, and I’m glad I did. From what I can tell, this sale is no longer in effect, but it might be worth checking in from time to time. Or maybe it’s a year-end offering that they do–who knows? You can get the latest version for $99, or you can just stick with the feature-limited Starter Edition if you have basic needs.

Oh, and if you’re looking for a less expensive alternative to Photoshop, Serif offers a product called PhotoPlus. It also offers a free Starter Edition that I’m likely to check out. Based on my limited experience with their software and my basic photo and graphics editing skill, I’m a fan of Serif!

Peer Review Early, Peer Review Often

Another one of my never-ending flow of initiatives from the past year or so has been encouraging and promoting peer code reviews. This started as an extremely bare-bones process: before committing a change to source control, a developer would grab a peer and simply walk them through their changes. My thought here was to get a second set of eyes on the code to catch obvious problems and to force the developer to review their own changes to avoid silly mistakes (e.g., "Oops, I left my email address hard-coded in there!").

The next phase of this agenda would be to introduce a more formal code review: grab a conference room and a peer or two, and walk them through the solution. As a reviewer, it’s hard to understand what’s going on by looking at a single file in a single changeset. You get no context about what the change is trying to accomplish. I can tell you if you missed a null check, but without knowing much about the bigger picture, I’m less likely to notice that you’ve violated design principles. I’m still working on how to make this part of the process work, but that’s not what I’m writing about today.

I was watching a webcast about peer review, and it opened my eyes to a huge flaw with both of the processes introduced above: they occur after development is complete. We all know that the earlier a problem is identified, the less costly it is to fix. By waiting until the end of development to do a peer review, we’re ensuring that problems identified by peers will be as costly as possible. Peer reviews can also identify optimizations that could save development time. Refactor that function into a common place so that each class isn’t re-implementing it the same way. Boom, you just saved all the time that would’ve been spent re-implementing that function in subsequent classes. Peer reviews often result in a list of items that need refactoring. If you wait until the end, you’ll have a bigger list that takes more time and is ultimately less likely to be completed.


In addition to the time-saving opportunities, there’s also a human element that comes into play. When I think I’m done with a project, the last thing I want to hear from my peers is everything that’s wrong with what I came up with. In that scenario, I’m more likely to be resistant and defensive to suggestions. You think I should take all this stuff out of those TWENTY classes and move it into a base class? We’re deploying next week–there’s no way we can do that!

Additionally, the presenter in the webcast, Karl Wiegers, pointed out that peer review isn’t–and shouldn’t be–limited to code. The earlier you catch an error, the less costly it is to fix. So why not spend time peer reviewing the requirements document? A missed requirement that comes up late in the development process is usually a mess. Utilizing peer review on the requirements document might save some headaches later. As a bonus, you also introduce the requirements writing process to your peers, potentially adding depth to the team. This helps strengthen the team’s bench because even junior members can become familiar with the process of defining requirements before it becomes necessary for them to do so. The team also benefits from being more familiar with what’s in the pipe, which makes starting new development less of a telephone game.

The final message that I want to leave you with is exactly what the title suggests: peer review early and often. Review requirements documents and they’ll become more consistent, better defined, and knowledge will be shared between team members. Peer review code before it’s done to identify optimizations that can save development time and reduce the need for refactoring later. Peer review designs and test plans to improve quality and grow the team.