Hacktoberfest is Coming!

photo from hacktoberfest.digitalocean.com

It's that time again! For those who don't know, Hacktoberfest is put on by DigitalOcean and GitHub as a means to celebrate open source software and to get more people involved in contributing to open source projects.

Getting Involved

So how can you get involved? Well, I'm going to be a little biased and suggest the Exercism C# repository. For this event, we have gone through and added a hacktoberfest label to issues that we felt would be perfect for Hacktoberfest. If you don't feel like working on any of those issues, no problem! We're always welcoming new ideas and/or things that we've simply missed. Just create an issue, and let's have a discussion about it.

Not interested in programming? Contributions do not have to be programming related. There are quite a few open issues that relate to documentation or DevOps. Whatever your skill set, we'll find something for you.

Of course, we are just one repository out of many. Exercism itself has a variety of repositories to choose from, but any project that you can find on GitHub is fair game.

If you're struggling to find projects to work on, there's an application, which you can find here, that lists projects looking for assistance during Hacktoberfest.

Did I Mention Free Stuff?

That's right! If you submit four pull requests during the event, DigitalOcean will hook you up with a free T-shirt (the pull requests do not even need to be merged). As long as a request was made between October 1 and October 31, it counts.

So make sure you sign up at their website and start making some contributions!

Red, Green, Refactor and the Transformation Priority Premise

Recently I stumbled across a test driven development article that mentioned something I had not heard before. It's a premise that Uncle Bob came up with as a means to order the priority of the transformations you should apply when practicing test driven development. He called it the Transformation Priority Premise.

I wrote a couple of small programs using the premise, and I really liked the concept he was trying to convey. Though in order to fully explain the premise, we should probably talk about test driven development itself.

So.. what is TDD?

Test Driven Development

TDD is a software development methodology that has three "laws" set forth by none other than Uncle Bob. They are as follows:

  1. You are not allowed to write any production code unless it is to make a failing unit test pass.
  2. You are not allowed to write any more of a unit test than is sufficient to fail, and compilation failures are failures.
  3. You are not allowed to write any more production code than is sufficient to pass the one failing unit test.

These three laws, if adhered to, force you into a cycle commonly referred to as red, green, refactor. Let's demonstrate the cycle by writing our own program. First, we'll need some requirements.

This program will return one of four different responses, depending on what kind of sentence is used as input.

  1. If the input ends with a period, return "Ok."
  2. If the input ends with a question mark, return "No."
  3. If the input ends with an exclamation mark, return "Quiet!"
  4. If no punctuation is found, return "What?"

This program is based on an Exercism exercise called Bob, which is itself based on another exercise, Deaf Grandma.

So where to start? The test.

Before we write any production code (the code that will ultimately end up in the compiled binary) we need to first stand up a unit test. To start, we'll need to create our System Under Test (SUT).

[TestMethod]
public void Input_has_no_punctuation_response_says_what()  
{
    var sut = new Responder();
}

And not all that surprisingly, the compiler is already yelling at us.

The type or namespace name 'Responder' could not be found (are you missing a using directive or an assembly reference?)  

But that's ok! We're already abiding by the first law since we started with a unit test. The compilation error is also expected; the second law states that we can't write any more of the unit test than is sufficient to fail (and compilation errors are failures).

So let's switch context a little bit and start writing some production code.

public class Responder  
{
}

We're done! The unit test compiles and passes. The third law forbids us from writing any more production code.

At this point in our development cycle we have gone through red (unit test compilation error), green (adding the Responder class to make the test pass), and now we're onto refactoring. Heh, well, in this case, there's not really anything we can refactor, so we can move on.

With one cycle completed, we start from the beginning again with red. Just like last time, we need to write some more code in our test case so that it fails.

We'll want a method on the Responder that can take an input of type string, and we know our first requirement is that if no punctuation is found the result of the method is "What?"

private readonly Responder _sut = new Responder();

[TestMethod]
public void Input_has_no_punctuation_response_says_what()
{
    Assert.AreEqual("What?", _sut.Response("Hello"));
}

Now we can go ahead and compile that...

'Responder' does not contain a definition for 'Response' and no extension method 'Response' accepting a first argument of type 'Responder' could be found (are you missing a using directive or an assembly reference?)  

Another compiler error. Let's go ahead and fix that up.

We know the compiler error stems from the fact that we never implemented a Response method on the Responder class, so that's pretty easy to implement. But what do we write inside of the method body? The answer may seem a little surprising.

public string Response(string input)  
{
    return "What?";
}

That's right. A constant string value of "What?". Once again, this is because of the third law. We cannot write any more production code than is sufficient to pass the one failing unit test. It may seem a little silly at first, but bear with me, it'll hopefully make a little more sense as we continue writing our program.

Alright, so we've tested the case of no punctuation. Let's move on to a case that includes punctuation, the period. Testing for that gives us a unit test that looks like this:

[TestMethod]
public void Input_is_statement_response_says_ok()  
{
    Assert.AreEqual("Ok.", _sut.Response("Do it now."));
}

Continuing with the red, green, refactor cycle, we now have a failing test. Let's go ahead and write the bare minimum implementation.

public string Response(string input)
{
    if (input.EndsWith("."))
    {
        return "Ok.";
    }

    return "What?";
}

Easy enough, time for another test.

[TestMethod]
public void Input_is_question_response_says_no()  
{
    Assert.AreEqual("No.", _sut.Response("Please?"));
}

Next up? You've got it. Let's make this test pass.

public string Response(string input)
{
    if (input.EndsWith("."))
    {
        return "Ok.";
    }

    if (input.EndsWith("?"))
    {
        return "No.";
    }

    return "What?";
}

Now that we've made this test pass, we can see some duplication creeping in that we should probably refactor. After all, making a test pass earns us the opportunity to refactor the code. Unfortunately, it may not always be clear how to refactor it. There is hope, however!

The Transformation Priority Premise

As stated in the introduction, the Transformation Priority Premise (TPP) was put together as a means to prioritize the transformations that occur when getting unit tests to pass.

When you're practicing TDD you may ask: "Doesn't all code produced by using TDD just end up specifically tailored to pass the tests?"

You might notice a little of that in our current program. As it stands right now, we have one conditional per unit test, and there's really nothing to stop that trend from continuing. There is, however, another little mantra that goes with TDD that pushes developers away from this practice.

“As the tests get more specific, the code gets more generic.”

Put another way: As we add more tests to our system (become more specific), our code becomes more generic (agnostic to the input).

With this in mind, it should be a little clearer to see that our current approach may not be the best one that we can take to solve this problem. We're just introducing more and more if statements to make the tests pass. Let's take a stab at refactoring our code and get away from our potential mountain of conditionals.

To start, the root of the TPP is its list of transformations and their priority. Here is the full list:

  1. ({}->nil) no code at all -> code that employs nil
  2. (nil->constant)
  3. (constant->constant+) a simple constant to a more complex constant
  4. (constant->scalar) replacing a constant with a variable or an argument
  5. (statement->statements) adding more unconditional statements.
  6. (unconditional->if) splitting the execution path
  7. (scalar->array)
  8. (array->container)
  9. (statement->recursion)
  10. (if->while)
  11. (expression->function) replacing an expression with a function or algorithm
  12. (variable->assignment) replacing the value of a variable.

...and in case you've forgotten, this is the code we're trying to refactor.

public string Response(string input)
{
    if (input.EndsWith("."))
    {
        return "Ok.";
    }

    if (input.EndsWith("?"))
    {
        return "No.";
    }

    return "What?";
}

Now we want to refactor this in order to get rid of the duplication. We started with a single constant, "What?" (nil->constant, #2), and moved on to splitting the execution path (#6). It's time to consult the list and see which transformations we can make in order to clean up the if statements.

Being at #6 currently, the next logical step would be to take a look at #7, scalar to array. That could probably work, but given the context of this problem, we know it's a mapping issue. We're mapping punctuation to results. So let's take it one step further and leverage #8, array to container.

Note: The difference between an array and a container is that an array is generally going to be a primitive array (think int[], string[], etc.), whereas a container is going to be something like a List, Set, or Dictionary.
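
To make the distinction concrete, here's what the intermediate scalar to array step (#7) might have looked like before the arrays collapse into a dictionary. This midpoint isn't part of the original walkthrough; it's just a sketch:

public string Response(string input)
{
    // Hypothetical midpoint (#7, scalar->array): parallel arrays map
    // punctuation to responses before we collapse them into a container.
    char[] punctuation = { '.', '?' };
    string[] responses = { "Ok.", "No." };

    for (var i = 0; i < punctuation.Length; i++)
    {
        if (input.EndsWith(punctuation[i].ToString()))
        {
            return responses[i];
        }
    }

    return "What?";
}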

Using scalar to array, and then array to container, we get a refactored method that looks like this:

public string Response(string input)  
{
    var inputResponses = new Dictionary<char, string>()
    {
        { '.', "Ok." },
        { '?', "No." }
    };

    if (inputResponses.ContainsKey(input.Last()))
    {
        return inputResponses[input.Last()];
    }

    return "What?";
}

That's pretty neat. No more repeating if statements. Recompile, ensure the tests still pass.. and they do! Now, there's only one punctuation mark that remains in our requirements, and that's the exclamation mark. We just finished refactoring, so we start again from red and introduce our last test:

[TestMethod]
public void Input_is_yelling_response_says_quiet()  
{
    Assert.AreEqual("Quiet!", _sut.Response("Woot!"));
}

Going back to our production code, it should be pretty straightforward as to how we can get this test to pass.

public string Response(string input)  
{
    var inputResponses = new Dictionary<char, string>()
    {
        { '.', "Ok." },
        { '?', "No." },
        { '!', "Quiet!" }
    };

    if (inputResponses.ContainsKey(input.Last()))
    {
        return inputResponses[input.Last()];
    }

    return "What?";
}

That's all there is to it! All of our tests pass, and we've met all of the requirements that we set out to meet.

The gain from leveraging the TPP is that it keeps us grounded and forces us to continue taking baby steps when developing code. We generally do not want to take giant leaps forward. Start with repeating if statements over and over until something like a dictionary or a loop pops out at you.

If you're interested in learning more about all of the nuances of the Transformation Priority Premise, I highly recommend checking out Uncle Bob's initial blog post on the subject which can be found here.

Microsoft's June 2017 Security Rollup Woes

Let me set the scene for you.

It's the morning of June 14th, 6:30am to be exact, and we're going to make a standards deployment. You know how I know? Because it's Wednesday. And Wednesday morning is the morning that we usually make standards deployments. Monday is just too early in the week. Tuesday, well, we just don't like Tuesdays, but Wednesday morning we make sweet, standards deployments.

Business Time

On the docket that morning were some changes that impacted how our dialog boxes behaved as well as printing throughout the entire system.

All of the scheduled changes had already gone through internal and automated testing. The necessary test records were applied to the ticket, and we were ready to rock and roll.

My team and I pushed the deployment button, and not 15 minutes later, we received a call from one of our customer care representatives that printing was failing for a single customer. Then another, and another. To play it safe, we decided to simply roll back the change and ask questions later. Unfortunately.. it did not fix the problem.

The Problem

We continued to investigate potential causes and eventually realized that the issue only occurred on Windows machines that had consumed the latest Windows updates, specifically the June 2017 security rollup.

After spinning up a VM with Windows 7 and the latest updates, we're in business. We notice that printing from an iframe results in a blank page. Fiddler and the developer tools reveal that the blank page is actually an HTTP status code of 404.

See, when printing an iframe (or from within an iframe), IE apparently takes the contents of the iframe and downloads it locally. This downloaded copy is then used as the source for the iframe, and the printing happens from that copy. However, it appears that after this security update, IE began requesting that local copy from the web server.

For example

<iframe src="myframe.html" />  

Became

<iframe src="ABC123.html" />  

Where ABC123.html is just some auto-generated name for the locally downloaded iframe. The problem with this is that obviously ABC123.html does not exist on our web server. So the iframe was loading a page that did not exist, and thus, printing a blank page.

The Solution

Luckily the community banded together. It gave us confidence that this was not a localized issue and that other third parties were experiencing the same pains. Progress!

Everyone in the community was pretty sold on the same workaround: instead of using window.print(), use document.execCommand('print'). So we decided to implement this change in the core areas of the system that print from within an iframe (which covers about 99% of the system).

We deployed the change to our quality environment, where it was tested by our internal teams and customers alike. Everything passed testing, everyone seemed pretty happy with the results, and we pushed the change to our production environment the following week.

Everything is going smoothly, we're back up and running! Except for one area of the system... our check printing module.

Now, checks unfortunately fell within that 1%, the part of the system that wasn't covered by our change. We immediately thought this wasn't a big deal; we could just adjust how checks are printed using the same method. How wrong we were..

Checks are a special flower within our system and have their own design, completely separate from how other areas of the system (reports, labels, etc.) print. I won't get into all of the technical details of how they were implemented, but it essentially meant that the only way to resolve the issue was to fix the underlying HTML.

After a couple of days of tweaking HTML and CSS pixel by pixel, comparing old checks to the new implementation, everything looked good for launch. The finish line was right in front of us.

Conclusion

But of course, the morning of the final deployment.. Microsoft released a hotfix.

It's honestly great news. That's the best outcome we could have hoped for. A timely fix from the original offender. It just could've come a little sooner...

That's the game you have to play though. Software that you depend on breaks, which in turn breaks your customers. You only have a couple of options. You can do nothing and hope that the offender will release a hotfix in a timely manner, or you can make strides to develop a workaround.

We immediately took the latter approach, as it's the safest and quickest way to get our customers back up and running. Who knows when Microsoft would've released a fix. It could have never come for all we know.

As they say.. better late than never.

Coding Standards Should Not Matter

Yeah that's right. That behemoth of a document that you call your standards document just does not matter. Now that's a pretty bold statement, so let me explain.

I believe that a standards document should be about how to style code. It should be about where to place curly braces, it should answer the great spaces vs. tabs debate (even though there is obviously only ever one clear choice), and similar concerns. What a standards document should not contain is substance: best practices. So what is a standard? What is a best practice?

It's very likely that the definitions of a standard and a best practice vary from individual to individual, so to get everyone on the same page, here is how I define them:

  • Standard: A rule put in place by an authority for the purpose of consistency.
  • Best Practice: A proven approach, generally the best way to solve a problem.

The key word in the standards definition is consistency. It should not matter at all how the code is formatted, as long as the team agrees to an approach and follows it, ensuring that the coding style of the repository remains consistent.

Consistency is extremely important. Code should look like a team wrote it, not any one individual. Everyone on the team is going to be sharing the same code base, and the style should not vary from developer to developer. You definitely do not want a developer's first order of business on a new task to be reformatting the code to their own personal preference.

A best practice is more along the lines of "do not wrap your entire class in a try-catch and swallow the exception to avoid generating errors" or "keep your collection setters private." While best practices are as important as standards, if not more so, these types of statements should not exist within a standards document.

So why shouldn't a standards document contain both?

First and foremost, they are different concerns. We generally like to split concerns into easily digestible bits, not combine them. It is not going to matter all that much whether your team puts curly braces on the same lines as your conditionals. It is going to matter if you start piling all of your concerns into a single class.

However, in my experience, a standards document is rarely just standards; it will generally combine standards and best practices.

I do not believe the solution is to split these two concerns into separate documents, though. No, I believe a standards document should not exist. Emphasis on document. We should have standards. Unfortunately, the medium we choose to present our standards is generally the incorrect one.

So what's the better approach? Let your code be your standards document.

I believe that coding standards are better enforced through tooling and the code base itself. Tools such as ReSharper and StyleCop allow teams to define their standards and then have them automatically enforced through the development environment. No need to spend time writing out paragraphs in English, comparing and contrasting different styling approaches (Do vs. Don't). No need to worry about document formatting. Simply define your rules and push them to the teams that are meant to consume the agreed-upon styling.

Like the old saying goes, the best standards document is no standards document at all.

The Rules Pattern: How to Drop Your Guard

In your travels as a programmer, you will more than likely come across a body of code that looks a little something like the following:

public bool CheckSystem(Computer computer)  
{
    if (computer.Ghz < 3)
    {
        return false;
    }

    if (computer.Ram < 4)
    {
        return false;
    }

    if (computer.DiskSpace < 10)
    {
        return false;
    }

    return true;
}

Here we have a method called CheckSystem which tries to validate whether or not a model of a computer meets all of the system requirements. If the model fails to meet all of the specified requirements, the method will return false. It attempts to validate the model by using a number of different conditionals, one after the other. These types of conditionals are called Guard Clauses.

For a method that may only have a few conditions to check, guard clauses are completely acceptable. There's no need to overcomplicate things. However, if you find yourself with a large number of conditions to validate, and/or you feel that the conditions will change over time, you may want to consider an alternative approach. Enter the Rules Pattern.

The Rules pattern is not a pattern that you'll see in the design patterns book by the Gang of Four, but it could be considered an implementation of the Command pattern.

For the purpose of this blog post, we'll be implementing a set of rules to check if a computer meets all of the minimum requirements to run a given piece of software. Here's how it works.

First, create an interface that all of your rules will implement.

public interface ISystemRequirementsRule
{
    bool CheckRequirements(Computer computer);
}

The interface only exposes one method, which is used to validate the condition for the rule.

You will then need to create all of your rules. As stated previously, each of your rules will implement the same interface, and each rule will replace one of your guard clauses.

class DiskSpaceRule : ISystemRequirementsRule
{
    public bool CheckRequirements(Computer computer)
    {
        // Mirrors the original guard clause: fail when disk space is below 10.
        var ruleResult = computer.DiskSpace >= 10;
        return ruleResult;
    }
}

This rule validates that the computer has at least 10 units of disk space. Ten what? You can associate whatever unit you want; it doesn't matter in this case! Create as many of these rules as required to ensure all of your requirements are checked.
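
A RamRule, which the checker below will also use, might look something like this. The threshold of 4 mirrors the original guard clause, and as with disk space, the unit is whatever you decide:

class RamRule : ISystemRequirementsRule
{
    public bool CheckRequirements(Computer computer)
    {
        // Mirrors the original guard clause: fail when RAM is below 4.
        return computer.Ram >= 4;
    }
}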

Next, you'll need to create a class whose responsibility is to run through and validate every rule that you have created. There are a couple of approaches that I would recommend.

public class SystemRequirementsChecker
{
    private readonly List<ISystemRequirementsRule> _rules = new List<ISystemRequirementsRule>();

    public SystemRequirementsChecker()
    {
        _rules.Add(new DiskSpaceRule());
        _rules.Add(new RamRule());
    }

    public bool CheckSystem(Computer computer)
    {
        foreach (var rule in _rules)
        {
            if (!rule.CheckRequirements(computer))
            {
                return false;
            }
        }

        return true;
    }
}

This approach simply holds all of the available rules in a collection. When you want to validate all of your rules, simply call the CheckSystem method. This method will then iterate through all of the rules that you have defined in the constructor.

If you want an approach that fully embraces the Open/Closed Principle, then I would recommend something similar to the following:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

public class SystemRequirementsChecker
{
    private readonly IEnumerable<ISystemRequirementsRule> _rules;

    public SystemRequirementsChecker()
    {
        _rules = GetRules();
    }

    public bool CheckSystem(Computer computer)
    {
        return _rules.All(r => r.CheckRequirements(computer));
    }

    private IEnumerable<ISystemRequirementsRule> GetRules()
    {
        // Find every type in this assembly that implements
        // ISystemRequirementsRule and instantiate it.
        var currentAssembly = GetType().GetTypeInfo().Assembly;
        var requirementRules = currentAssembly
                .DefinedTypes
                .Where(type => type.ImplementedInterfaces.Any(i => i == typeof(ISystemRequirementsRule)))
                .Select(type => (ISystemRequirementsRule)Activator.CreateInstance(type))
                .ToList();

        return requirementRules;
    }
}

This approach uses reflection to find all classes that implement the ISystemRequirementsRule interface (all of the rules that you will have written). It then takes all of these rules, instantiates them via Activator.CreateInstance, and returns them as a list so that CheckSystem can iterate through them and validate each and every rule.

The reflection approach allows you to create a new rule with the expected interface, rebuild the application, and that's it. Your new rule will be enforced inside of the SystemRequirementsChecker. No need to touch any of the pre-existing source code!

The former approach, utilizing a list and instantiating each rule in the constructor, would require you to create the new rule class and then modify SystemRequirementsChecker to include the rule before it knew of its existence.

I'd consider either approach acceptable; it depends on your situation.
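
Whichever you choose, consuming the checker looks the same. A quick usage sketch (the Computer values here are made up for illustration):

var computer = new Computer
{
    Ghz = 3.2,
    Ram = 8,
    DiskSpace = 256
};

var checker = new SystemRequirementsChecker();
var meetsRequirements = checker.CheckSystem(computer); // true: every rule passes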

So there you have it, the Rules Pattern. A useful pattern that you typically don't run across when studying design patterns. Hope it helps!

Who Moved My Cheese?!

Earlier this morning, I was reminded of an incident that I had with a customer a couple years ago. It went a little something like this:

I was working on a web application. It consisted of a tabular grid that displayed information based on search criteria. Very similar to anything you would see on any sort of e-commerce website that allowed you to filter products by price range, name, etc. The customer wanted to enhance this specific grid. They wanted a new filter that would give them the ability to hide all records that were considered inactive. A fairly reasonable request.

Okay great, requirements established. I began on my merry way.

Now, this sort of request is relatively trivial in our environment. We need to add a new UI control to the filter section and then wire it up to the backing stored procedure. The whole process probably took around two hours from opening the development environment to pushing it out into our production environment. Once deployed to production, we inform the customer that their feature request is now live, they sign off on the change, and are as happy as could be.


...I receive a notification that the customer is having difficulty filtering results within the grid, because a filter that used to be there no longer is.

Now, this struck me as odd, because I had actually added a filter. So I opened up the application myself to see if I could validate the customer's claim. I was not able to do so. Everything looked exactly as I would expect, and nothing had changed from when the customer had previously signed off on it.

In an effort to get the potential miscommunication resolved swiftly, I hopped on a conference call with the customer to see if we could get to the bottom of this. We had already done some back and forth on the ticket itself, so the customer (let's call him Mark) and I had already established a little bit of a rapport. The conversation went something along the lines of:

John: Hey Mark, it's John. So show me the screen you're looking at. I'm seeing the filter on my end.
Mark: See here? The filter isn't on the top left.
John: Yeah we added a new filter yesterday, so the filter that you're looking for is right next to it.
Mark: Oh, I see it now. Yeah, that's not going to work. We would have to retrain all of our employees.
John: Why is that?
Mark: Our documentation on how to use this screen in our facility includes screenshots of the filters. It explicitly states which filters to use and where they are on the screen. We would have to print out entirely new documentation for our facility and re-train all of our employees.

This immediately got me thinking about a story I had read some time ago called Who Moved My Cheese? I'll let you read the Wikipedia entry if you are so inclined, but it essentially speaks to how people have a hard time adjusting to change. Now, for me, this came out of nowhere. Never would I have guessed that the customer had such rigid documentation of how to use our software.

The software that I develop and maintain changes many times per day. Sometimes we push over a hundred changes to the system in a given day. It's fluid, changing constantly. So to think that a user would assume our layout had no potential for change, and would build their whole training philosophy around it, is a little boggling. On top of all of this, it was the customer who requested the change, just a different department.

I ended up resolving the matter by getting the other department involved and explaining the situation. The change was going to stick; there really wasn't a way around it.

So here we had a feature request directly from the customer, the change approved in our quality environment and then verified on our production environment, only to be followed up with an urgent message from the same customer stating that their workforce was slowed because the training was no longer valid after the change was made.

The whole situation really made me take a step back and realize that you can never be completely confident in any change you make to an existing system.

A Comment on Comments

Full disclosure, I really don't like comments.

Like any feature of a language, I believe they have their purpose. After all, they were put in the language in the first place. I've just seen a lot of incorrect uses of comments, and the more I run into them, the more I feel like they don't add a whole lot of value.

Let's dive into some of the reasons why.

Comments bloat classes

I see auto-generated comments everywhere. They plague a large portion of our code base. They have good intentions, but they often end up making the file larger and harder to understand, especially when the developer pays no mind to the content that was generated.

public class Cube
{
    /// <summary>
    /// The height of the cube.
    /// </summary>
    public double Height { get; set; }

    /// <summary>
    /// The width of the cube.
    /// </summary>
    public double Width { get; set; }

    /// <summary>
    /// The depth of the cube.
    /// </summary>
    public double Depth { get; set; }
}

We've turned a simple model of a cube, which could be a few lines long, into a bloated model with more lines of comments than of actual useful code. Everything the comments describe is self-evident. The comment headers just aren't needed. In all honesty, if I see something similar to the above, I just go ahead and delete the comments.

Comments are code duplication

One of my favorite light bulb moments. We as developers like to avoid code duplication; reuse is king. If you take a step back and think about comments in this regard, that's exactly what they are.

Code explains what is going to be done. Comments explain what is going to be done. They just do so in different languages. Just like with code duplication, if you change a line of code, you had best remember to also change the comment, lest you be left with:

/// <summary>Returns true when the test succeeds; false otherwise.</summary>
public TestResult RunTest()
{
    // ...
}

You can probably guess that this method used to return a bool, and it was decided that the returned result needed to be a little more verbose. So the developer went ahead and created a TestResult object to store some more information about the result of the test. However, they forgot to update the comment, and now it's out of sync.

Comments can be a sign of a bigger problem

Even if you believe you are using comments as intended, to explain potentially confusing code, a better solution is to just make the code less confusing.

// Before: the intent is buried in a comment
if (!(x > 10 && y < 5 && z == 0)) // ensure the object is not out of bounds

// After: the intent lives in the code itself
bool isOutOfBounds = x > 10 && y < 5 && z == 0;
if (!isOutOfBounds)
{

You could even go a step further and assign each int its own const variable so the user isn't guessing what exactly 10, 5, and 0 even mean.
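
For instance (the constant names here are invented for illustration):

const int MaxX = 10;
const int MinY = 5;
const int GroundLevel = 0;

bool isOutOfBounds = x > MaxX && y < MinY && z == GroundLevel;
if (!isOutOfBounds)
{
    // safe to proceed
}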

Finally.. I've seen my fair share of auto generated comments, and you know what?

Auto generated comments make me cry

/// <summary>
/// Processes the async.
/// </summary>
public async Task<Result> ProcessAsync()

Oh.. I had no idea.

Anyway, after all this comment bashing, let's talk about a couple of cases that I believe to be strong candidates for comment use.

Public APIs are a great use of comments. You are going to have a large body of people consuming your code and calling your methods. It can be greatly helpful if everything is well documented, explaining exactly how everything is supposed to work.
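
For example, a public API method might carry documentation like this (a hypothetical method, purely for illustration):

/// <summary>
/// Calculates the total price of the order, including tax.
/// </summary>
/// <param name="taxRate">The tax rate to apply, e.g. 0.08 for 8%.</param>
/// <returns>The order total, rounded to two decimal places.</returns>
public decimal CalculateTotal(decimal taxRate)
{
    // Subtotal is assumed to be a property of the containing Order class.
    return Math.Round(Subtotal * (1 + taxRate), 2);
}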

Another valid use for comments is explaining why code is present. I like to think that code explains how it works, where comments explain why it works. Code can only make so much sense, regardless of how you name your variables, how well you format it, etc. Sometimes, there exists a scenario that just needs a little more explanation.

/* Reflection may seem like an odd choice here, but after testing the use case
 * scenarios it proved to be much faster than splitting everything up into database
 * calls */

Like I said in the beginning. Comments were put into languages for a reason, they have their place. Please, just use them sparingly.

IIS Debugging Roulette

It's a typical day in the office. You've spent your day writing top notch unit tests and elegant code. You sit back, marvel at what you have accomplished and decide, being that good developer that you are, that maybe you should do some functional testing.

You navigate to the Debug pane, click your trusty "Attach to Process", and the familiar process-selection window pops up.

"Here we go again" you say to yourself, dreading what is to come next. You click on the first w3wp.exe process and attach. Nope.. doesn't look like that worked. You repeat this until you finally find the correct process to attach to so you can test your code. You hate having to do this every time you want to debug, but you accept it as a necessary evil of debugging an IIS process. It's just how it has to be done, right?

IIS Manager to the Rescue

You may be relieved to hear that this isn't how it has to be. There is hope! If you open IIS Manager, you should see an option called Worker Processes underneath the IIS section.

After choosing this option, you will be presented with the Worker Processes feature, which describes all of the w3wp.exe processes in greater detail than the debugger window does. It is here that you can confidently link the application pool that you're trying to attach to with the correct process identifier. No more gambling!

Keep Your Collection Setters Private

When exposing properties of a class, you may find yourself immediately exposing a public getter and a public setter. This is a very common approach, especially in anemic models. However, it can be problematic when applied to collections. Allowing consumers of your collection to freely replace the entire collection is very destructive. In this post, I will go over two scenarios in which a public setter on your collection can get you into trouble.

You cannot iterate over a null collection

Let's compare two approaches to emptying a collection. One approach is to call the .Clear() method, like so:

myCollection.Clear();  

Another approach, assuming we had a public setter, is to set the collection to null:

myCollection = null;  

This is the first problem area for allowing public setters on your collections. By allowing your consumers to directly replace the collection, it's no longer possible to control how they will interact with it. Setting the entire collection to null, for example, can potentially cause problems later in your program's life cycle.

Iterating over your collection is one of these pain points. The following code will behave differently if the collection is set to null than if it is simply empty.

foreach (var item in collection)
{
    // do stuff with collection items
}

If the collection was set to null, you will run into a lovely NullReferenceException, like the one below.

An unhandled exception of type 'System.NullReferenceException' occurred in YourProgramHere.exe

This is because all collections expose a GetEnumerator() method, which foreach leverages to iterate over your collection. If the collection is null, there is no instance on which to call GetEnumerator(). However, if the collection was simply emptied via .Clear(), the runtime error would not occur.
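
To see the difference side by side (a standalone sketch):

var emptied = new List<int>();
foreach (var item in emptied)
{
    // never entered: an empty collection simply yields zero items
}

List<int> nulled = null;
foreach (var item in nulled) // throws NullReferenceException: no GetEnumerator() to call
{
}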

Events will not be triggered

Another problem that can arise is that it can be very hard, if not impossible, to know when your collection has changed. Consider the following MyCollection class.

using System;
using System.Collections.ObjectModel;
using System.Collections.Specialized;

public class MyCollection
{
    public ObservableCollection<int> Numbers { get; set; }

    public MyCollection()
    {
        Numbers = new ObservableCollection<int>();
        Numbers.CollectionChanged += collectionChanged;
    }

    private void collectionChanged(object sender, NotifyCollectionChangedEventArgs args)
    {
        Console.WriteLine("Hey! Listen!");
    }
}

This class defines a collection that will output some text when the collection changes. While this example only wires up one subscriber, it is entirely possible that the collection has many subscribers that wish to be notified when its contents change.

Using this collection, the only time the collectionChanged event will fire is when you leverage its .Add(), .Remove(), and similar methods. Setting the collection to null will indeed wipe out its contents, but its subscribers will be unaware that anything has happened.

In summary, when you expose a public setter on a collection, you are exposing much more behavior than you need to. It gives a lot of unnecessary control to your consumers, which could potentially put your collection into an undesirable state.
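
As the title suggests, the fix is to keep the setter private, or drop it entirely. A minimal sketch of what that might look like:

public class MyCollection
{
    // Consumers can still Add and Remove through the collection itself,
    // but they can no longer replace (or null out) the reference.
    public ObservableCollection<int> Numbers { get; } = new ObservableCollection<int>();
}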

Prefer readonly to const

I was recently reading Effective C# by Bill Wagner. In the book, the author makes the statement that we should prefer readonly to const. I wasn't immediately able to piece together why we should. I mean, what's even the difference between the two? To answer this, let's first discuss how they're the same.

Both are static

Now obviously, a static readonly field is static because it is decorated with the static keyword. One thing you may have overlooked, however, is that a const is implicitly static.

Static, put simply, means that the member it decorates belongs to the class itself rather than to an object. A static member must be referenced by type and not by an instance. For example, the following code would not compile:

public class TheClass  
{
    public static void TotallyStaticMethod() { ... }
}
var someClass = new TheClass();  
someClass.TotallyStaticMethod(); // cannot be accessed with an instance reference;  

The above would generate a compiler error, specifically: Member 'TheClass.TotallyStaticMethod()' cannot be accessed with an instance reference; qualify it with a type name instead.

Both are immutable

Once declared readonly or const, the value cannot change. Now, there is a slight difference as to when the values become truly immutable. The value of a const must be initialized when it is declared. No sooner (though that'd be impressive) and no later.

On the other hand, a readonly field may be initialized during its declaration or in the constructor of the class in which it was declared. This is useful for facilitating dependency injection through the constructor, or even for configuration values.
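
A quick sketch of the difference (the class and values here are invented for illustration):

public class Settings
{
    // A const must be assigned at its declaration and nowhere else.
    public const int MaxRetries = 3;

    // A static readonly field may be assigned at its declaration
    // or in a static constructor.
    public static readonly string Environment = "Production";

    // An instance readonly field may be assigned at its declaration or in
    // an instance constructor, which enables constructor injection.
    private readonly string _connectionString;

    public Settings(string connectionString)
    {
        _connectionString = connectionString;
    }
}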

The biggest difference between the two?

Const is evaluated at compile time

What does this mean exactly? When you set the value of a constant, the compiler will actually take your variable assignment and bake the value directly into the IL.

To see this in action, let's say we have the following code snippet.

namespace AnotherAssembly  
{
    public class Library
    {
        public static readonly string ReadOnlyValue = "first readonly";
        public const string ConstValue = "first const"; 
    }
}

If you were to compile the above code, reference the DLL in another project, and inspect the consuming assembly with ILSpy (or your disassembler of choice), you would see the difference immediately.

The IL clearly shows that our readonly field is resolved by reference, loading ReadOnlyValue from the Library class that lives in AnotherAssembly. The const, on the other hand, is loaded directly as the literal value "first const".
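
In other words, a consumer compiled against that library behaves like this (a hypothetical consuming assembly, to illustrate the difference):

using System;
using AnotherAssembly;

public static class Consumer
{
    public static void Main()
    {
        // Compiles to a field load (ldsfld): the value is fetched
        // from AnotherAssembly at run time.
        Console.WriteLine(Library.ReadOnlyValue);

        // Compiles to an inline literal (ldstr "first const"): the value
        // is copied into this assembly at compile time.
        Console.WriteLine(Library.ConstValue);
    }
}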

This means that if you were to change the value of a const in an assembly, the assemblies that depend on that const will keep the old value until they are rebuilt. This could cause a lot of headaches down the road, which is why it's always best to reserve const for values that you know will never change.

So while it is true that const will be slightly more performant than readonly, it's going to be a negligible amount. Always keep in mind that premature optimization is the root of all evil!

Summary

  1. const is evaluated at compile time, readonly is evaluated at runtime.
  2. Prefer const for values that you know will never change, for any reason.
  3. When in doubt, use readonly (the performance gain is negligible).

Design Patterns are Solutions

I believe there comes a point in every budding software developer's career when they decide that they should learn design patterns. After all, there is more to developing than copying and pasting code from Stack Overflow, right? They want to be able to engineer their own solutions.

However, the majority seem to fall into a similar trap. They read book after book: Design Patterns Explained, Head First Design Patterns, and let's not forget what is considered the holy grail of design patterns, the Gang of Four.

They may then move on to routinely performing katas, such as the Greed Kata, to fully memorize and perfect their Strategy pattern implementation, blissfully awaiting the time when they can actually utilize all of the knowledge they have gained.

Unfortunately, the developer will end up trying to fit the [insert design pattern here] design pattern into every problem, where it probably does not belong. It's a shiny new tool, and they just want to use it everywhere. This is a textbook case of starting with a solution and trying to find a problem, which should sound pretty backwards!

You could think of reading about design patterns as reading the answer key for a test. The answer key is going to give specific questions with specific answers. You will probably do really well on that test, but you will more than likely leave without knowing why those were the answers. After all, you didn't put any work into actually solving the problems. You saw a pattern, you knew the answer, and you implemented it without a second thought. What happens on the next test that doesn't include an answer key, but where the context is relatively the same?

Design patterns are solutions. To appreciate a solution, you must first suffer from the problem. The issue with diving into patterns head first is that you will never truly understand why the solution exists in the first place. The importance of actually experiencing the pain first hand is almost completely ignored.

The point that I want to make most clear is that you should not try to find solutions to problems that you don't have.

Now, I'm obviously not saying that reading about design patterns is completely taboo. There is a lot that can be learned from reading. When learning design patterns, however, I believe the best way is to simply write code and make mistakes.

Go ahead and copy and paste your switch statement throughout your code base; what does it really matter? You'll soon find out when you want to introduce a new case in your statement.