Best of GOTO Chicago 2018

[Photo: speaking at GOTO Chicago's Open Source panel]

Wow, what a week! I'll spare you any "May the fourth be with you" puns (unless that in itself counts) and dive right into the best of GOTO Chicago 2018.

This past week I had the opportunity to attend my first GOTO conference, and overall I would say it was a good experience. I honestly was not blown away by the sponsor booths, and some of the talks were a little rough around the edges... but what conference doesn't have its fair share of flops? We're not here to talk about those, though. I want to highlight the talks and experiences that really stuck out.

Let's start with...

Testing legacy code

My first exposure to GOTO was an all-day workshop hosted by Michael C. Feathers, the author of Working Effectively with Legacy Code. While the workshop consisted mostly of techniques for getting legacy code into a testable state, the content presented would take up an entire blog post in itself (which I will assuredly write at some point in the near future). So I am just going to focus on two key elements that really resonated with me.

public versus published
We've all heard of access modifiers (private, public, etc.), but what about published? The easiest way to describe it is with an example, comparing it to public.

Public should be familiar territory. If something is marked as public, it can be accessed by other classes, subclasses, and even assemblies. Within an organization, it should be relatively easy to change public signatures. Sure, it might be a lot of work to do so, depending on how many projects reference the public interface, but it's easy to see what needs to change, and you can have a high level of confidence that you won't break anything. You may even have the ability to update all of the references yourself (which I would encourage you to do).

Published, on the other hand, you don't have any control over. A prime example would be an API that other companies leverage. You can't change the interface willy-nilly, because you don't have the ability to update the references. Published is an abstract concept: you don't really know if something is marked as published, because there isn't an access modifier for it. It just isn't supported in the language.
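
A quick sketch of the distinction (the PaymentGateway example and its names are mine, not from the workshop):

// "public" is enforced by the compiler; "published" is only a convention.
public class PaymentGateway
{
  // public: referenced only by projects inside our own codebase.
  // We can rename it, find every caller, and update them all ourselves.
  public decimal CalculateFee(decimal amount) => amount * 0.029m;

  // Effectively "published": external companies call this through our API.
  // We can't see or update their code, so this signature is frozen (or must
  // be versioned), even though the language only lets us say "public".
  public ChargeResult Charge(string cardToken, decimal amount)
  {
    return new ChargeResult();
  }
}

public class ChargeResult { }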

While this makes complete sense, I was just happy to finally put some context around the concept. Furthermore, I've officially been converted to the camp of: if you're changing a public interface, do your due diligence and update the references yourself. No more obsoleting a method that will never be updated.

Best is the enemy of better

In reality, after some google-fu, the original quote is:

Perfect is the enemy of good - Voltaire

Regardless, to me this was a simple yet powerful quote that I had not heard before. It stems from the idea that so many times in our careers, we spin our wheels indefinitely trying to seek out the perfect solution. Maybe we never implement the perfect solution, because it's just not possible right now. However, it may be possible to implement a good solution. We just fail to realize it sometimes.

A big part of the workshop was refactoring code to make it testable. To do so, we made a lot of new classes that the old classes just called into. Our old code essentially just became wrappers around the new code. Obviously, this is not a perfect solution, but it's a step in the right direction to get the code into a clean and testable state.
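
A tiny sketch of the shape we ended up with (the invoice names are my own, not from the workshop): the legacy class keeps its signature, but the real logic moves into a new class that tests can target directly.

// New class: small, dependency-free, and easy to unit test.
public class InvoiceCalculator
{
  public decimal Total(decimal subtotal, decimal taxRate)
    => subtotal * (1 + taxRate);
}

// Old class: existing callers are untouched, but it's now just a
// wrapper that delegates to the new, testable code.
public class LegacyInvoiceService
{
  private readonly InvoiceCalculator _calculator = new InvoiceCalculator();

  public decimal GetInvoiceTotal(decimal subtotal, decimal taxRate)
    => _calculator.Total(subtotal, taxRate);
}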

After wrapping up the testing workshop, it was time to head on over to the first keynote of the conference.

Drifting into failure

Presentation: https://www.youtube.com/watch?v=mFQRn_m2mP4

The first keynote of the conference was given by Adrian Cockcroft, the VP of Cloud Architecture Strategy for Amazon. It was a little surprising, as he wasn't on the original agenda. Apparently the original speaker came down with a cold, so he filled in. Honestly, I'm glad he did. It was a very enlightening talk. Adrian focused on a central theme, drifting into failure.

Drifting into failure stems from the idea that people in the workplace may not report issues to upper management for fear of inadequacy or, even worse, losing their job. These issues, albeit small at the time, start to pile up, eventually leading to a catastrophic event.

One example of this was the fact that airlines with more reported incidents actually had fewer fatalities. At first, this seems pretty counterintuitive. How can you have more incidents but fewer catastrophic events? Does this mean that an airline with fewer reported incidents actually had more fatalities? Yes.

The takeaway here is that incidents are going to occur no matter what; it's just a matter of whether they're actually reported. Just because an incident isn't reported doesn't mean it didn't happen. We can't make our processes better if we are unaware of the problems that are occurring. We need to embrace blameless postmortem cultures so that we become aware of these incidents and do not eventually drift into failure. So how can we prevent this from happening? Create non-events.

Adrian spoke of dynamic non-events. It wasn't clear to me what the actual definition was, so I looked up the meaning in its entirety.

Reliability is a dynamic non-event. It is dynamic because the processes remain within acceptable limits due to moment-to-moment adjustments and compensations by the human operators. It is a non-event because safe outcomes claim little or no attention. - Karl Weick

Creating a non-event essentially means reporting abnormal behavior, or at least being aware of it. It doesn't have to be catastrophic, and in all honesty it shouldn't be. If a metric is slightly off or just under the minimum, it should be reported and discussed before it balloons into a much larger problem.
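
To make that concrete, here's a hypothetical sketch (mine, not Adrian's) of a health check that treats "technically passing, but barely" as something worth reporting, rather than a simple pass/fail:

public enum HealthStatus { Healthy, NearLimit, Failing }

public static class DiskHealthCheck
{
  // Hypothetical thresholds: fail under 5% free disk space, but start
  // the conversation (report a "non-event") anywhere under 15%.
  public static HealthStatus Check(double percentFree)
  {
    if (percentFree < 5) return HealthStatus.Failing;
    if (percentFree < 15) return HealthStatus.NearLimit; // report and discuss
    return HealthStatus.Healthy;
  }
}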

The example that he gave comes from the book Drifting into Failure by Sidney Dekker. A component of an aircraft passed all safety measures, but barely. It was never reported, because everything was technically fine. The employees did everything they were supposed to do as laid out by the system, but in this case, the system was wrong. It led to a disastrous crash, killing everyone on board the craft.

Non-events are a time for people to report and learn. No one should be afraid to report bad news, or the possibility of bad news. We should strive to actively create safety. His recommendation for creating it?

Break things on purpose!

GOTO had quite a bit of content on a concept I wasn't completely familiar with: chaos engineering. I knew it existed, but I had never really dove into the specifics of how it worked or how to practice it. This is another topic that I could easily dedicate a whole post to, so we'll just touch on some key takeaways.

Chaos engineering exists because production hates us. Leveraging DevOps is a step in the right direction, but unfortunately, production is a war zone. Nothing guarantees that when you push the deploy button, everything is going to work exactly as you expect it to. In all honesty, production doesn't want your code to work. Chaos engineering is really about engineering ourselves out of chaos, not introducing it into our systems; that would just be silly.

It was noted that people actually cause the most chaos. One example given was a command-line argument: if a certain command was passed with the letter g, everything was a-OK. However, if you were to forget that letter g, all hell would break loose. One might think this is just another case of user error, but the speaker presented it as a system error. Why is the line between success and failure so thin?

So how do we implement chaos engineering? Typically through learning loops. Chaos engineering is all about the "you know what you don't know" category of problems, so you should actively prod those possible failure cases. The implementation steps were broken down as follows:

  1. Prod the system
  2. Form a hypothesis about why it happened
  3. Communicate to your team
  4. Run experiments
  5. Analyze the results
  6. Increase the scope
  7. Automate the experiments

In the end, the goal with chaos engineering isn't to break the system intentionally. You would never want to intentionally take down production and impact your customers. If you know of a failure case and how it behaves, there really isn't much value in running a chaos experiment. Everything is already known!
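
To make the "run experiments" step a little more concrete, here's a minimal sketch (my own, not from the talks) of a fault-injection wrapper that an experiment might enable for a small slice of traffic:

using System;
using System.Threading.Tasks;

public class FaultInjector
{
  private readonly Random _random = new Random();
  private readonly double _failureRate;

  public FaultInjector(double failureRate) => _failureRate = failureRate;

  // Wraps a dependency call; during an experiment, a small percentage of
  // calls fail artificially so we can observe how the system responds.
  public async Task<T> CallAsync<T>(Func<Task<T>> dependencyCall)
  {
    if (_random.NextDouble() < _failureRate)
      throw new TimeoutException("Injected fault (chaos experiment)");
    return await dependencyCall();
  }
}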

Old is the NEW new

Presentation: https://www.youtube.com/watch?v=AbgsfeGvg3E

I love Kevlin Henney as a speaker; he always does a fantastic job. I highly recommend giving the presentation itself a watch, but the CliffsNotes version is essentially that everything in computer science has already been done. Most of the new stuff we see today appeared in literature as early as the '50s. Software design principles, testing approaches, you name it.

The vast majority of the presentation compared new ideas to old ideas, highlighting the fact that we've been thinking about and solving the same problems for quite some time now.

To me this is very similar to Uncle Bob's post, The Churn, which states that we as developers are so infatuated with learning all of the new and shiny frameworks and libraries being released that we drift away from the basics. We forget core principles and never really advance, because we're stuck in the churn of learning every new thing that comes out.

I agree with most of it. I couldn't imagine trying to pick up every little new thing that pops up, and frankly I just don't think it's possible.

In the end, it was a worthwhile conference. I may just have to add it to my list of yearly must GO-TO conferences...

Until next time!

DEVintersection 2017 Shenanigans

Well, that's a wrap! DEVintersection is over after a long week of sessions, technical difficulties, and an intense all-day workshop. I'm back in my hotel room at the MGM Grand, but we need to talk about everything that happened this week before I inevitably forget.

Docker all the things!

For starters, one thing is clear: Docker isn't going anywhere. Every speaker was leveraging Docker in some form in their sessions, even if the talk had nothing to do with it. Scott Hanselman gave one of the keynotes, Microsoft's open source journey, and he somehow managed to segue into demoing a Raspberry Pi cluster running Kubernetes and Docker that hosted a .NET Core web application.

Coolness factor aside, it was actually pretty interesting to learn that .NET was "open sourced" a long time ago, in the early 2000s I believe he said. Though it was just shared as a zip file and was intended for academia. The source code was sanitized so as not to leak IP, and they refused to accept any contributions. Not that there were really any channels that would have allowed it. So in reality, it was more... "source opened".

So.. what exactly is DevOps?

After the keynote, I attended a session called How DevOps Practices Can Make You a Better Developer. While the subject matter was pretty straightforward, defining CI (continuous integration), CD (continuous delivery), and so on, the biggest takeaway for me was Microsoft's definition of DevOps.

DevOps is the union of people, process, and products to enable continuous delivery of value to our end users. - Donovan Brown, Microsoft DevOps PM

It made a lot of sense. DevOps is about so much more than just delivery. The above definition really highlights how so many companies get DevOps "teams" wrong. It isn't a team! It's combining people and process to deliver software to our users. That means DevOps is not only delivery, but also infrastructure, testing, monitoring, and so much more.

Another common theme? Microservices!

Yep, on top of containerization, microservices were another big discussion point of the conference. Which, in all honesty, makes sense; the two go pretty well together.

I attended a few sessions relating to microservices, and though they were a little introductory, there were definitely some highlights worth jotting down. One of the sessions I went to was hosted by Dan Wahlin, a pretty big name in the JavaScript world, and he outlined a quick list of four questions to ask yourself before taking the plunge into microservices.

  1. Are multiple teams working on different business areas of the application?
  2. Is it hard for new team members to understand the application and contribute quickly?
  3. Are there frequently changing business rules?
  4. Can subsystems within the application be changed with a minimal amount of impact?

I believe there's a lot of value in taking a step back and questioning whether microservices are right for you and your project. Like everything else in our field, there are no silver bullets. Microservices are a good solution to an array of problems, but they're not one-size-fits-all. The additional complexity, especially in infrastructure, may not be worth the investment if you can't answer "yes" to at least some of the questions above.

To top off my microservices immersion, I also attended a session on implementing a microservices messaging architecture, specifically with Azure Event Hubs. Now, the session was cut pretty short. The speaker's presentation apparently relied heavily on the Internet for some demos, which.. did not get up and running for probably a good 30 minutes. They even attempted to tether a connection through their cell phone. A little unfortunate, but I think they made their point.

Their main points? When using messaging and events, events are facts. They happened! And you shouldn't necessarily jump immediately into splitting your database architecture into consistent and eventually consistent stores. Start with everything going to the same destination (a consistent store), and if/when you need to scale, you can take an eventually consistent approach.
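
The "events are facts" framing maps nicely onto immutable types. A sketch (my own naming, not the speaker's):

using System;

// An event records something that already happened: it's named in the
// past tense and never mutated after the fact.
public sealed class OrderPlaced
{
  public OrderPlaced(Guid orderId, decimal total, DateTime occurredAtUtc)
  {
    OrderId = orderId;
    Total = total;
    OccurredAtUtc = occurredAtUtc;
  }

  public Guid OrderId { get; }
  public decimal Total { get; }
  public DateTime OccurredAtUtc { get; }
}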

I promise there were other session topics..

As of late, I've been reading and writing more functional code. It's a completely different way of thinking, and there are a lot of great benefits to doing so. I saw a session entitled Functional Techniques for C# and had to give it a go.

There was a lot of defining of terms, which I am completely OK with. I seem to always forget the definitions of words, so getting refreshers on currying and higher-order functions was a nice introduction to the talk.
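
In case you forget them too: a higher-order function takes (or returns) another function, and currying turns a multi-argument function into a chain of single-argument functions. Roughly, in C# (my own toy example):

using System;

class FunctionalBasics
{
  static void Main()
  {
    // Higher-order function: Apply takes another function as an argument.
    int Apply(Func<int, int> f, int value) => f(value);
    Console.WriteLine(Apply(x => x * 2, 21)); // 42

    // Currying: add(a, b) becomes add(a)(b).
    Func<int, Func<int, int>> add = a => b => a + b;
    var addFive = add(5);          // partially applied
    Console.WriteLine(addFive(3)); // 8
  }
}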

The biggest takeaway I had from the session was the idea of Functional Refactoring. Now, we've all seen Imperative Refactoring; it's the type of refactoring most of us are familiar with, where we pull out some code, shove it into a method, and then call that method instead of the original code.

Here's a quick refresher..

public void PrintResponse(int choice)  
{
  var output = string.Empty;
  switch(choice)
  {
    case 1:
      output = "Hello";
      break;
    case 2:
      output = "Goodbye";
      break;
  }
  Console.WriteLine(output);
}

Now in this .. case .. PrintResponse has two responsibilities: it has to figure out what the response is and then it has to print it. Applying some inside-out refactoring, we get the following solution:

public void PrintResponse(int choice)  
{
  var output = GetResponse(choice);
  Console.WriteLine(output);
}

private string GetResponse(int choice)  
{
  var output = string.Empty;
  switch(choice)
  {
    case 1:
      output = "Hello";
      break;
    case 2:
      output = "Goodbye";
      break;
  }

  return output;
}

Now PrintResponse has only one job: to print the response. The job of getting the response has been abstracted away into a new private method called GetResponse.

Functional refactoring is a little different; again, it's outside-in rather than inside-out, as I just showed. Here's a simple example:

public void Save()  
{
  try
  {
    context.Save();
  }
  catch(Exception ex)
  {
    Logger.Log(ex);
  }
}

Applying functional refactoring..

public void Save()  
{
  Handling.TryWithLogging(() => context.Save());
}

public static void TryWithLogging(Action operation)
{
  try
  {
    operation();
  }
  catch(Exception ex)
  {
    Logger.Log(ex);
  }
}

So, as you can hopefully see, we've taken the outside code, the try/catch block, and replaced it with a TryWithLogging higher-order function. With this approach, you can wrap up the using statements for your database calls, or the try/catch blocks that log on failure, instead of copy-pasting Logger.Log(ex); into every catch.
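
For example, here's what the same trick might look like applied to a using block (a sketch; WithConnection and the placeholder connection string are my own stand-ins):

using System;
using System.Data.SqlClient;

public static class Db
{
  // Higher-order function that owns the using block: callers supply only
  // the work to run, and the connection lifetime is handled in one place.
  public static T WithConnection<T>(Func<SqlConnection, T> work)
  {
    using (var connection = new SqlConnection("<connection string>"))
    {
      connection.Open();
      return work(connection);
    }
  }
}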

Lastly, an all-day grind.. with C#!

I was excited about this workshop. I had honestly never done a workshop at a conference before, so I didn't really have any idea what to expect. All I knew was that it was going to be an all-day event, in a room, with the one and only Kathleen Dollard.

There was so much information in this workshop that it might need to be a blog post all by itself, but I wanted to highlight some of the key points that stood out and really resonated with me.

The first part of the workshop really focused on teams and learning: improving yourself and others. She spoke about research showing that within a team, the "who" didn't actually matter, that is, the team's diversity. Rather, teams performed better if everyone spoke around the same amount and there was psychological safety (no fear of belittlement during a review or just for speaking up). It's interesting to think about, as I feel most of us are conditioned to believe that diversity plays a big role in the success of a given team.

She also spoke about cognitive dissonance, which essentially describes how you take new input and associate it with your current perception. It seems obvious when it's put like that, but it makes a lot of sense and really shows why things such as code reviews are so important. We need to be challenging what we know, day to day, so we can take in new information and apply it correctly.

For example, a long time ago I had the preconceived notion that several meant seven in mathematics. This perception caused me to get stuck on a particular math problem for quite some time, because I just couldn't reason about how the problem could even be solved. The question didn't make any sense in the context of several being seven. The world was much clearer to me once I was corrected. This is why code reviews are so beneficial: if our perception is wrong, and we stay in our bubbles, we will continue to be wrong.

Now, all of that really has little to do with C#, but I found it immensely helpful as a software professional. The C# tips and tricks were great, but they mostly covered the functional techniques I discussed previously, just in greater detail.

All good things must come to an end...

And with that, the conference was over. I got to spend a good 30 minutes talking with Kathleen after everyone else had left the workshop room about her new position at Microsoft and what they were up to. Most others had to catch flights or had other arrangements. I, for some reason, thought the red-eye was a good idea and wouldn't be leaving until 10:00 that night..

Until the next conference!