Category Archives: Dev

Blogging Makes You A Better Developer

Throughout my career, I’ve found that I’d figure things out, and several months or years later, I’d bump into someone else who was trying to do something similar, and I’d think or say: “Yeah, I can’t remember the specifics, but I definitely remember dealing with that.” That was the original motivation for starting my blog: to build a personal catalog of things I’ve learned that I could refer back to should they become relevant again.

It started as a bit of a hobby but eventually became part of my weekly professional life. I’d research my assignments, and I’d track the journey. Once I finished, I’d organize my list of steps into an article and publish it. The more I wrote, the more views my blog would see, and it became exciting to watch traffic grow and track statistics.

I was reading an article about the Feynman Technique and realized that it’s similar to how I approach writing:

  • Start with a concept: a problem I’m trying to solve for an assignment
  • Teach it to a toddler: figure it out/make it work
  • Identify gaps: are there things I don’t understand about the solution or aspects that I am not able to articulate?
  • Review & simplify: write an article that’s concise and easy to understand

I was having a lot of fun transforming my work into articles and watching daily traffic grow, but what I didn’t realize was that this activity was actually giving me tremendous professional growth in a number of ways.

Writing Skills

Writing is an extremely underrated skill for software developers. Between email, team chats, direct messages, and documentation, I’d venture to say I spend more time writing than I do coding. In the digital age, I’m often represented by the things I type much more than the things I say or do.

The more important part of writing is how it affects people around you, though. Good writing is essential to sharing and communicating your ideas. It allows you to explain concepts to your bosses, teammates, and clients/customers. Maybe it allows a colleague to understand your perspective and snap to your approach, or maybe it helps them explain back to you where you’re wrong. It might enable you to implement a cool new feature or prevent the team from wasting time on something that’s not needed. Good things happen when you articulate thoughts effectively.

Technical Expertise

This is the big advantage that I didn’t realize until more recently. I thought I was learning things and just documenting them by writing articles, but the exercise of sharing it with the world forced me to take extra steps. I needed to fact-check claims I was making and double-check assertions. This correlates to the “identify gaps” step of the Feynman Technique. The act of publishing things I thought I knew forced me to make sure I actually knew them as best I could.

Similarly, knowing things a little better and having “practiced” explaining them makes you an excellent resource for your team. You know more things, you know them well, and you know how to explain them.

Shareable/Referenceable Content

My original goal! It’s awesome when somebody needs something, and you can just give them a link instead of explaining. It’s also fun when a co-worker’s researching something and they find you, which has happened a couple times. Sometimes you even turn up as an answer to your own questions!

Personal Brand

This is particularly good advice for people early in their career. If I get a resume from somebody with a blog that’s been active for several years, I get excited. This is someone that values learning and communication. This is someone that wants to share ideas and help others. This is someone that I think can provide a lift to the team.

Of course, if I look at the blog and it’s poorly written or has a lot of mistakes, it might work against you. This is where it’s important to follow the process. Start with your idea. Get it written. Fill the gaps. Simplify. With diligence and time, you’ll have lots of great content that represents you well.

Database-First Entity Framework in .NET Core

.NET Core and Entity Framework make connecting to an existing database really easy. This post will demonstrate how to generate models for an existing database using Entity Framework Core. Note that you must have the .NET Core SDK installed.
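
Not sure whether the SDK is installed? You can check from any terminal:

/> dotnet --version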

First, let’s create a new console application. Open a new terminal and run the following commands:

/> dotnet new console -o MyConsoleApp
/> cd MyConsoleApp

If your app is using ASP.NET Core 2.1+, you’ll get all the Entity Framework packages you need. However, our console app is using .NET Core (not ASP.NET Core), so we need to install a couple more packages. Run the following commands:

/> dotnet add package Microsoft.EntityFrameworkCore.Design
/> dotnet add package Microsoft.EntityFrameworkCore.SqlServer
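
One quick note: depending on your SDK version, the dotnet ef command itself may not be available out of the box. On newer SDKs (3.0 and later), it ships as a separate global tool that you can install like this:

/> dotnet tool install --global dotnet-ef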

Now we’re ready to create our models. Again, we’ll do this by running a command in the terminal. Run the following command to generate models for all tables in your database along with a DbContext.

/> dotnet ef dbcontext scaffold "<connection string>" Microsoft.EntityFrameworkCore.SqlServer -o Models

Need help obtaining the connection string? If you’re using SQL Azure, you can browse to the database in the Azure Portal, and click the Connection Strings link.
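
If you’re building the connection string by hand instead, a SQL Server connection string generally looks something like this (the server, database, and credentials below are placeholders; substitute your own):

/> dotnet ef dbcontext scaffold "Server=tcp:myserver.database.windows.net,1433;Initial Catalog=MyDatabase;User ID=myuser;Password=mypassword;Encrypt=True;" Microsoft.EntityFrameworkCore.SqlServer -o Models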

With your DbContext and models created, you should now be able to write code to access tables. Add the following lines to your console app’s Program.cs:

using System;
using System.Linq;
using MyConsoleApp.Models;

namespace MyConsoleApp
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Hello World!");

            // The scaffolded context is named after your database
            using (var context = new YourDatabaseNameContext())
            {
                // SomeTable is a placeholder for one of your scaffolded entity sets
                var foo = context.SomeTable.First();
            }
        }
    }
}
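
For reference, the scaffolded DbContext will look roughly like this. This is an abridged sketch; the context and entity names (YourDatabaseNameContext, SomeTable) depend entirely on your database and schema:

using Microsoft.EntityFrameworkCore;

namespace MyConsoleApp.Models
{
    public partial class YourDatabaseNameContext : DbContext
    {
        // One DbSet per scaffolded table
        public virtual DbSet<SomeTable> SomeTable { get; set; }

        protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        {
            if (!optionsBuilder.IsConfigured)
            {
                // Scaffolding embeds the connection string you passed on the
                // command line; consider moving it out of source code
                optionsBuilder.UseSqlServer("<connection string>");
            }
        }
    }
}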

One-Step React App with .NET Core Web API Backend

Last week, I wrote a short article demonstrating how to create a React app using Create React App and hook it up to a .NET Core Web API created by dotnet new webapi. Dotnet new also has a react template, though, which allows you to create a new React app with a .NET Core Web API backend in a single step.

/> dotnet new react -o my-app
/> cd my-app
/> dotnet run

Note! The front-end client and backend API still run as separate servers, so you must still specify the proxy in the React app. Perform the following steps to do this:

  1. Open package.json in the /ClientApp folder
  2. Add "proxy":"https://localhost:5001"
  3. Save & restart your app
  4. Verify API communication by viewing the Fetch Data page

There are some differences between the two approaches. Create React App creates a better-looking but more barebones site. Dotnet new starts you with a more complex React site with basic navigation and multiple pages.

Create a React App with .NET Core Web API Backend

React and .NET Core Web API both provide super-simple tools that let you magically spit out a fully-functional application by executing a single command. Making the two work together is pretty easy, but there are a couple of not-so-obvious steps that aren’t explained in the one-step, just-run-this-command getting-started guides you’ll come across. This article will walk you through creating new React and .NET Core Web API applications, then modifying the React app to make an API request and display the result.

Create the React App

The easiest way to create a new React app from scratch is with Create React App. Run the npx command, and you’re done!

/> npx create-react-app my-app
/> cd my-app
/> npm start

Create the Web API

Creating an ASP.NET Core Web API is just as easy. Run the dotnet command, and you’ve got your API.

/> dotnet new webapi -o my-api
/> cd my-api
/> dotnet run

Specify Proxy

React app: check. .NET Core API: check. Now let’s hook ’em up. The out-of-the-box API comes with a /api/values endpoint, so let’s use that to retrieve and display values on our home page.

By default, the React app is going to run on http://localhost:3000 and the API on https://localhost:5001. Since our app and API are running on different servers, we must specify a proxy. Open package.json in the React app, and add the line shown below. Note that if your app is running, you’ll need to restart it for the setting to take effect.

{
  "name": "my-app",
  ...
  "proxy": "https://localhost:5001"
}

Make Request, Display Result

Now we’re ready to add the API request. Open your React app’s App.js file, found in the /src folder. Add a constructor that will execute the request, and modify render() to display the result.

class App extends Component {
  constructor(props) {
    super(props);
    this.state = { values: [] };

    fetch('/api/values')
      .then(response => response.json())
      .then(data => {
        this.setState({ values: data });
      });
  }

  render() {
    return (
      <div className="App">
        <header className="App-header">
          <img src={logo} className="App-logo" alt="logo" />
          Can I haz values? 
          <ul>
            {this.state.values.map((value, index) => <li key={index}>{value}</li>)}
          </ul>
        </header>
      </div>
    );
  }
}

All that’s left now is to run the projects. Start the API first, followed by the React app. The result should be something like this!

Full Hard Drive? Three Tools to Help Reclaim Space

My work laptop has been a free-hard-disk-space disaster for a while now, but I’ve mostly just been ignoring it. I really only use it when traveling, so it’s just not been much of a priority. I’ll empty the recycle bin or clear out my downloads folder to finish the tasks of the day. Not enough? Maybe I’ll run the Windows Disk Cleanup utility to get the temp files and whatnot. Those temporary measures are enough to get me through the day, but they’re really not enough. After hobbling along for probably more than a year, I finally did some real work to solve the problem, and I freed up close to 100 GB using three tools.

IOBit Uninstaller

A good, simple way to free up big space quickly is to uninstall programs you don’t need, particularly if you’ve got some big ones (cough, cough… World of Warcraft). Occasionally when performing this activity, I’ll run into a program that can’t be uninstalled because its installer is missing or corrupt. I’ve dealt with this in the past by doing a manual uninstall: deleting files and hunting around in the registry. When dealing with this recently, I somehow stumbled onto IOBit Uninstaller. In addition to dealing with the uninstallable, it also lets you uninstall multiple programs at once. That’s a pretty nice feature when you’re looking to free up some disk space. Just scroll through the list, select all the programs you want to remove, and let IOBit Uninstaller remove them one by one.

I’ve used it a couple times on my home and work computers, and I’m a fan. https://www.iobit.com/en/advanceduninstaller.php

WinDirStat

So you’ve taken care of the low-hanging fruit by uninstalling unused applications, but you still don’t have enough free space? It’s time to do a little investigative work to see what’s taking up all that space. I’ve used different tools in the past, but this time I took WinDirStat for a whirl. It did a fine job. I like that it gives you a tree-view with percentages that you can drill into as well as a visualization. In my case, I found 30 GB of files that had been uploaded to a synced OneDrive folder on my hard drive. In the past, I’ve also seen big, unused databases eating up tons of space.

https://www.fosshub.com/WinDirStat.html (official mirror)

My investigation also revealed that a fairly large amount of space was being taken up by C:\Windows\Installer, which brings me to my third and final tool…

WICleanup

I found WICleanup after doing some research about how to clean up the Windows\Installer folder. I don’t really know how it works, but it “only deletes the unused files in the installer folder.” Sounds great, right? I tried it, and it seemed to work as advertised. I’m a little skeptical and concerned that I’ll hit problems later when trying to uninstall something, but hey, at least I’ve got IOBit to help me clean it up if I do.

Also, based on a tip from superuser.com, I ran WICleanup from the command line: WICleanerC.exe -s

http://appnee.com/wicleanup

Simple .DistinctBy Extension

LINQ’s Distinct extension has largely been a disappointment to me. Sure, it’s nice when I’m working with a collection of integers, but more often than not, I’m working with a collection of objects and don’t have an IEqualityComparer<TSource> available to me. I know I could just create one, but I want to use a lambda, like I do with just about everything else in LINQ!

To the internet, right? I learned I could use the following trick to accomplish what I want:

collection
  .GroupBy(x => x.Key)
  .Select(grp => grp.First());

Works like a charm, but I got tired of dot-GroupBy-dot-Select-ing and adding a comment about what I was doing for future maintainers, and I think it’s a lot better to just chuck it into an extension method.

public static class EnumerableExtensions
{
    public static IEnumerable<TSource> DistinctBy<TSource, TKey>(
        this IEnumerable<TSource> source,
        Func<TSource, TKey> keySelector)
    {
        return
            source
                ?.GroupBy(keySelector)
                .Select(grp => grp.First());
    }
}
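
With that in place, usage is a one-liner. Here’s a quick example, assuming a simple, hypothetical Person class with Name and Team properties:

// Person is just a stand-in type for this example
var people = new List<Person>
{
    new Person { Name = "Ada", Team = "Compilers" },
    new Person { Name = "Alan", Team = "Crypto" },
    new Person { Name = "Grace", Team = "Compilers" }
};

// One person per team: keeps Ada (Compilers) and Alan (Crypto)
var onePerTeam = people.DistinctBy(p => p.Team);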

Ahh, nice! Alternatively, you could score this functionality by adding MoreLINQ to your project. On a neat side note, you can also cherry-pick which MoreLINQ functionality you want by installing individual packages.

The Way of the Ninject

In recent months, I’ve come to be a big fan of Ninject. I’ve used Microsoft’s Unity Container and Object Builder in the past, but most of what I’d done previously just involved exposing dependencies as properties with lazily-loaded default implementations. I really dig Ninject because it’s so lightweight and easy to use, and it integrates really well with mocking frameworks like Rhino Mocks and Moq.

Getting started with Ninject is really easy and accomplished in just a few steps:

  1. Install the NuGet package
    Install-Package Ninject
  2. Create a module
  3. Create a kernel
  4. Get objects from the kernel

Let’s look at an example. Assume we have the following interfaces and classes.

public interface IFoo {
    void Run();
}

public class Foo : IFoo {
    private readonly IBar _bar;

    public Foo(IBar bar) {
        _bar = bar;
    }

    public void Run() {
        _bar.Print();
    }
}

public interface IBar {
    void Print();
}

public class Bar : IBar {
    public void Print() {
        Console.WriteLine("Yay!");
    }
}

We can create a NinjectModule that tells Ninject how to resolve IFoo like this.

public class FooModule : NinjectModule {
    public override void Load() {
        Bind<IFoo>().To<Foo>();
        Bind<IBar>().To<Bar>();
    }
}

Now, we need to tell our Ninject kernel to use our new module.

IKernel kernel = new StandardKernel(
    new FooModule());

And, finally, we use the kernel to request the objects we need. Note that Ninject does the work of figuring out that the default implementation of IFoo (Foo) has a single constructor that accepts a dependency, IBar, and that the default implementation of that dependency is Bar.

class Program {
    static void Main(string[] args) {
        IKernel kernel = new StandardKernel(
            new FooModule());
        IFoo foo = kernel.Get<IFoo>();
        foo.Run();
        Console.ReadLine();
    }
}

Output:

Yay!
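
Back to that point about mocking frameworks: because everything resolves through the kernel, swapping a real binding for a mock in a test is trivial. Here’s a minimal sketch using Moq and the FooModule from above (assumes the Moq NuGet package is installed):

using Moq;
using Ninject;

// Replace the real IBar binding with a mock
var kernel = new StandardKernel(new FooModule());
var mockBar = new Mock<IBar>();
kernel.Rebind<IBar>().ToConstant(mockBar.Object);

// Resolve and run IFoo as usual; it receives the mocked IBar
IFoo foo = kernel.Get<IFoo>();
foo.Run();

// Verify that Foo called Print() on its injected IBar exactly once
mockBar.Verify(b => b.Print(), Times.Once());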

Case-Sensitive File Paths on Git for Windows: Stop Changing the Capitalization of m’Dang Branches

Git’s been a part of my daily business for a little more than a year now, and I ran into what can only be described as a shenanigan shortly after I started. The first feature branch I ever created was named something like Adam/a-feature. I did some work and merged it into master. Yay. Then it was time to work on a new feature, so I created another feature branch. This time, however, I decided that I wanted to use a lowercase “adam” as the branch prefix, something like adam/another-feature. Seems okay enough, right? Not so fast, my friend.

I was creating these branches in Bitbucket and syncing them locally with SourceTree. My new branch, adam/another-feature, came down as expected, and I was able to do my work. Something weird would happen when I pushed my changes to the remote branch, though. SourceTree would report success, but it would indicate that I still had changes that needed to be pushed. Adding to my confusion, I could see that there were now two branches in Bitbucket: adam/another-feature and Adam/another-feature! What gives?

Well, it turns out this is due to the case-insensitivity of Windows. Branches are stored as files within the .git directory, and creating a new branch will create a file in the .git/refs/heads directory. So when I created my first branch, Adam/a-feature, it created the folder .git/refs/heads/Adam. Then, when I created my second branch, adam/another-feature, Git found the existing folder, .git/refs/heads/Adam, and used that.

Long story short, if you wish to change your capitalization scheme for branch prefixes in Git for Windows after you’ve already used a prefix with a different scheme, head on over to .git/refs/heads and make the change there!
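
Alternatively, if you’d rather not poke around inside .git by hand, a two-step rename from the terminal accomplishes the same thing, assuming it’s the only branch left under the old prefix (branch names here are from my example above):

/> git branch -m Adam/another-feature temp/another-feature
/> git branch -m temp/another-feature adam/another-feature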

Tracking Commits Across Branches with Git and SourceTree

When it comes to Git, SourceTree is definitely my tool of choice. However, I was surprised to find that there doesn’t appear to be any sort of built-in commit tracking to see which branches do and don’t contain a commit, similar to changeset tracking in Visual Studio. That said, it’s pretty easy to do with Git; there’s just nothing that I could find baked into the SourceTree UI (am I wrong? Let me know!).

So, if I need to do this, I click the Terminal button in SourceTree and run one of the following commands:

git branch --contains <commit>
git branch -r --contains <commit>
git branch -a --contains <commit>

The -r and -a parameters can be used to check just Remote or All (local+remote) branches.

Now, SourceTree may not have this functionality built-in, but it can be added easily with a Custom Action. Here’s how you can create a custom action to track a commit across branches.

  1. In SourceTree (Windows, 1.6.21.0), go to Tools > Options and select the Custom Actions tab
  2. Click the Add button to create a new custom action
  3. Enter a caption, and select the option to Show Full Output; for Script to run, enter the path to git.exe; and for Parameters, enter something like branch -a --contains $SHA (note the use of $SHA)
  4. Click OK to save that bad boy

With the custom action created, you can run it by right-clicking a commit and choosing Custom Actions > Track in Remote Branches.

If you selected the option to show full output, the branches containing the commit will be listed in SourceTree.