Soft Skills

Software development is all about trade-offs. I was watching Dan North’s fantastic presentation, Decisions, decisions, in which he argues that every decision is a trade-off. This got me thinking about my own decision making process. Every decision we make has its pros and cons, but sometimes we seem to overstate the advantages while understating the disadvantages. I am guilty of this myself. In this blog post I’ll explain some of the mistakes I’ve made while making decisions and provide some tips on how to avoid them.

Common Mistakes in the decision making process

In this section I’ll list some of the flaws that I’ve recognized in my own decision making process at one point or another.

The Any Benefit approach

I first read about the Any Benefit mindset in Cal Newport’s book Deep Work (although I had been guilty of it well before that). This way of thinking justifies a decision by any possible small benefit, ignoring all the negatives. I have fallen into the any benefit trap many times. It’s easy to read an article on the web and immediately jump on the new trend bandwagon, without proper thought. How many times have you wanted to immediately update to the newest major version of a framework, only for that pesky senior to say it’s too risky? Or to use an alpha release of a shiny new library because it’s cool, and then have to work around breaking changes? I’m not saying these are the wrong decisions, only that you should carefully analyze the trade-offs. Do the pros outweigh the cons?

Rich Hickey stated that “Programmers know the benefits of everything and the trade-offs of nothing”. This is because we focus too much on ourselves, when we should focus more on the quality of the software, its maintainability and its fitness for purpose. This is why I try to think of the main pros and cons of every important decision I make. Before proposing a new solution, I try to play the devil’s advocate and think about its main disadvantages. This is a great way to show other people that you have thought things through. So, before making your next decision, think about the main reason why you shouldn’t do it.

Relying too much on Best Practices

Some of today’s antipatterns were yesterday’s best practices. Singleton and lazy loading come to mind, but there are many others. The problem with best practices is that they are applicable only in a given context. The Cynefin framework suggests that best practices are a good strategy only in the Obvious domain, where the relationship between cause and effect is clear. The problem is that we aren’t always in the Obvious domain, so we should change our decision making strategy depending on the domain. Take, for example, the Complicated domain – the domain of “known unknowns”. A good decision making strategy in this domain is Sense – Analyze – Respond: assess the facts, analyze the situation and respond by following good practices. This strategy acknowledges that there may be several good options and that a single best practice does not exist.

I’ve been guilty of over-relying on best practices too. Best practices like pair programming, automated testing and code review have their own downsides. I think they are valuable in most contexts, but that doesn’t change the fact that you should think about your context before applying them. Sometimes I don’t know the answer to questions like “Why do we use X and not Y?”, even though I helped make that decision. At the time, the choice seemed obvious, so I didn’t really think it through. Even if it’s the right choice, you should still have the right arguments. Answers like “this is a best practice” can seem arbitrary and don’t usually convince people.

Ignoring Cognitive Biases

Although we would like to think that we make only rational decisions, cognitive biases prove otherwise. A cognitive bias is a flaw in our judgment that can lead to irrational decisions. Let’s take a look at two different types of biases: decision making biases and social biases.

Decision making biases

The confirmation bias is the tendency to interpret information in a way that confirms our preconceptions. For example, if you want to switch to microservices and do a Google search on the advantages of the microservices architecture, you’ll find plenty of compelling reasons. But before making the move, you should also search for its drawbacks.

Another bias that you should be aware of is loss aversion: people feel losses more deeply than gains of the same value. This is related to the sunk cost effect – we keep investing in a decision so we don’t lose what we have already invested in it. Basically, we ignore the negative outcomes of a choice for fear of losing what we have already put into it. It’s always better to try to be objective and cut your losses early.

I fell victim to this bias while researching how to put coarse-grained locks on NServiceBus saga instances when using NHibernate persistence. After several hours of investigating how to implement coarse-grained locks with NHibernate listeners and dirty checking, I noticed something in the NServiceBus documentation: pessimistic locking is enabled by default on saga instances. So, if you use the default settings, you can’t get into an inconsistent state. But, having invested so much time in researching a general solution to the NHibernate coarse-grained lock problem, I was almost inclined to use this alternative, more complex solution with no extra benefits. Fortunately, I noticed that I was a victim of the sunk cost fallacy and deleted the extra code.

Social biases

Social biases can also play an important role in our decision making process. In Your Code as a Crime Scene, Adam Tornhill mentions two social biases that might lead to bad decisions: pluralistic ignorance and inferring the popularity of an opinion based on its familiarity.

Pluralistic ignorance happens when everyone in a group privately rejects a norm, but assumes that the majority accepts it, so they go along with it too. An example will make this clearer. Let’s say you have a large suite of brittle, broad-stack tests that fail often. These tests rarely catch production bugs and usually fail because of problems in the test code itself. Each team member knows that the return on investment for these tests has become negative. But, because they think everyone else finds the tests valuable, they accept the norm and carry on fixing them, without solving the real problem. This is also an example of the sunk cost effect – because of the effort invested in implementing the tests, you don’t want to delete them.

Inferring the popularity of an opinion based on its familiarity is another common social bias. If someone on the team keeps repeating a strong opinion, we come to think that opinion is more common than it actually is. Most of us work in teams with people who have strong opinions. This is a good thing, as long as we do our own research and support our decisions with data. Your own experience might lead you to make the wrong decision. If you’ve been working in the same context for a long time, groupthink might lead to everyone having the same opinions. This can make team members think that their opinions are more widespread than they are, because of their familiarity inside the team.

Tips for improving your decision making process

After seeing these common mistakes, what can we do to make better decisions? Here are some tips that I find useful:

  • Be aware of the most common cognitive biases. Knowing them is the first step in overcoming them.
  • Support your decisions with data. Always do your research. Make pros and cons lists. Don’t base decisions only on instinct. Data is much harder to argue against.
  • Play the devil’s advocate. Before making a decision, play the devil’s advocate. Think of the top three reasons why your idea won’t work.
  • Know your context. Be aware of context. The consultant’s answer – “it depends” – is many times the correct answer. A good decision in one context can become a bad decision when applied in a different context. The Cynefin framework is a good place to start if you want to make sense of your context. It can help you use the correct decision making process, based on your domain.
  • Use guiding principles. Every time you make a decision you should consider the high level principles that govern the product. Examples of these are business drivers or architectural principles. You might make a different decision if you’re optimizing for time to market than if you’re optimizing for maintainability.

What are the most common flaws in your decision making process and how do you overcome them?


Books, Quality

In the last couple of months I’ve been learning about what information I can extract from a codebase. I’ve written some articles on how to use NDepend to extract a static view of the system’s quality. But this view is based only on the current state of the codebase. What about source code history? What can it tell us? How has the code changed? These are exactly the kinds of questions that Adam Tornhill’s book, Your Code as a Crime Scene: Use Forensic Techniques to Arrest Defects, Bottlenecks, and Bad Design in Your Programs, tries to answer.

Continue Reading

Clean Code

This article recaps how to identify some of the most common code smells using NDepend. The basis of this series is Object-Oriented Metrics in Practice, by Michele Lanza and Radu Marinescu. This book describes (among other things) how you can use several targeted metrics to implement detection strategies for identifying design disharmonies (code smells). You can read a summary of the book and my review in this article.

Detection Strategies

The design disharmonies are split into three categories: Identity Disharmonies, Collaboration Disharmonies and Classification Disharmonies. A detection strategy is a composed logical condition, based on a set of metrics, that filters out suspect design fragments.
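To make the idea concrete, here is a minimal Python sketch of what a detection strategy looks like, using the God Class strategy from the book as an example. Treat it as an illustration only: the metric values live in a plain dictionary, and the threshold constants are placeholders in the spirit of the book’s statistically derived ones, not exact values.

```python
# Illustrative thresholds; the book derives the real ones from benchmarks.
VERY_HIGH_WMC = 47   # Weighted Method Count considered "very high"
FEW = 5              # "few" accesses to foreign data
ONE_THIRD = 1 / 3

def god_class_suspect(metrics):
    """A detection strategy is just a composed logical condition on metrics."""
    return (metrics["WMC"] >= VERY_HIGH_WMC      # the class is very complex...
            and metrics["ATFD"] > FEW            # ...uses a lot of foreign data...
            and metrics["TCC"] < ONE_THIRD)      # ...and has low cohesion

# A suspect class and a harmless one:
print(god_class_suspect({"WMC": 80, "ATFD": 9, "TCC": 0.1}))   # True
print(god_class_suspect({"WMC": 12, "ATFD": 1, "TCC": 0.6}))   # False
```

Each disharmony below follows this same shape – only the metrics and thresholds change.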

Identity Disharmonies

Identity disharmonies affect methods and classes. These can be identified by looking at an element in isolation.

Collaboration Disharmonies

Collaboration Disharmonies affect the way several entities collaborate to perform a specific functionality.

Classification Disharmonies

Classification Disharmonies affect hierarchies of classes.

Conclusion

These detection strategies identify potential culprits. You need to analyze the candidates and decide whether each one is a real issue or just a false positive. I ended up adding some more project-specific filters to ignore most of the false positives. Adding some basic where clauses that exclude certain namespace or class name patterns can get you a long way. But, of course, these depend on your specific project and conventions. The beauty of NDepend is that you can update the queries as you wish: add filters, play with the thresholds or add more conditions.

Analyzing a suspect can be done in code, but you can also use other tools. NDepend has several views that can help with the investigation: Treemaps, the Dependency Graph, the Dependency Structure Matrix and query results. In Object-Oriented Metrics in Practice the authors use Class Blueprints, but I don’t know of a tool that can generate these views for .NET code.

After identifying the issues, you can start refactoring. For some strategies on how to tackle each disharmony or how to prioritize them, I recommend reading the book.


In the previous articles in this series, we’ve seen how to identify several other code smells.

In this article we’ll see how to identify the Tradition Breaker code smell.

Tradition Breaker Detection Strategy

A class suffers from Tradition Breaker when its interface does not specialize the interface of its parent; instead, it adds a large set of services that have little to do with what it inherits. Object-Oriented Metrics in Practice, by Michele Lanza and Radu Marinescu, proposes the following detection strategy for Tradition Breaker:

((NAS >= Average NOM per class) AND (PNAS >= Two Thirds)) AND
(((AMW > Average) OR (WMC >= Very High)) AND (NOM >= High)) AND
((Parent’s AMW > Average) AND (Parent’s NOM > High/2) AND (Parent’s WMC >= Very High/2))

This might seem complex at first glance. After we go over the definition of each metric, we’ll break this detection strategy into three distinct parts. This way we’ll see why the authors picked these conditions, and it will make more sense.
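The three parts can already be read directly off the condition. The Python sketch below mirrors them as three named sub-predicates over metric dictionaries; the threshold values are illustrative placeholders, not the book’s exact numbers.

```python
# Placeholder thresholds, roughly in the spirit of the book's values.
AVG_NOM = 7          # average Number of Methods per class
HIGH_NOM = 10
AVG_AMW = 2          # Average Method Weight (complexity per method)
VERY_HIGH_WMC = 47   # Weighted Method Count
TWO_THIRDS = 2 / 3

def tradition_breaker(child, parent):
    # 1. The child adds many services unrelated to the parent's interface
    #    (NAS: Newly Added Services, PNAS: Percentage of Newly Added Services).
    excessive_new_services = (child["NAS"] >= AVG_NOM
                              and child["PNAS"] >= TWO_THIRDS)
    # 2. The child itself is large and complex.
    child_is_complex = ((child["AMW"] > AVG_AMW or child["WMC"] >= VERY_HIGH_WMC)
                        and child["NOM"] >= HIGH_NOM)
    # 3. The parent is substantial enough for a "tradition" to exist at all.
    parent_is_substantial = (parent["AMW"] > AVG_AMW
                             and parent["NOM"] > HIGH_NOM / 2
                             and parent["WMC"] >= VERY_HIGH_WMC / 2)
    return excessive_new_services and child_is_complex and parent_is_substantial
```

Note how each AND-ed group has its own intent – that is what makes the full condition readable once it is split.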

Continue Reading


In the previous articles in this series, we’ve seen how to identify several other code smells.

In this article we’ll see how to identify the Refused Parent Bequest code smell.

Refused Parent Bequest Detection Strategy

A class suffers from Refused Parent Bequest when it doesn’t use the protected members of its parent. Object-Oriented Metrics in Practice, by Michele Lanza and Radu Marinescu, proposes the following detection strategy for Refused Parent Bequest:

(((NProtM > Few) AND (BUR < A Third)) OR (BOvR < A Third)) AND
(((AMW > Average) OR (WMC > Average)) AND (NOM > Average))
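In words: the parent offers a bequest (more than a few protected members, NProtM) that the child barely uses (BUR) or overrides (BOvR), and the child is substantial enough to matter. A minimal Python sketch of this condition, with placeholder thresholds instead of the book’s statistically derived ones:

```python
# Placeholder thresholds; the book derives the real values statistically.
FEW = 3
A_THIRD = 1 / 3
AVG_AMW, AVG_WMC, AVG_NOM = 2, 14, 7

def refused_parent_bequest(child, parent):
    # The child ignores its bequest: the parent offers protected members
    # (NProtM), but the child barely uses (BUR) or overrides (BOvR) them.
    ignores_bequest = ((parent["NProtM"] > FEW and child["BUR"] < A_THIRD)
                       or child["BOvR"] < A_THIRD)
    # Only flag children that are complex enough to matter.
    child_is_substantial = ((child["AMW"] > AVG_AMW or child["WMC"] > AVG_WMC)
                            and child["NOM"] > AVG_NOM)
    return ignores_bequest and child_is_substantial

print(refused_parent_bequest(
    {"BUR": 0.1, "BOvR": 0.2, "AMW": 3, "WMC": 20, "NOM": 9},
    {"NProtM": 6}))   # True
```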

Continue Reading


In the previous articles in this series, we’ve seen how to identify several other code smells.

In this article we’ll see how to identify an afferent (incoming) coupling code smell: Shotgun Surgery.

Shotgun Surgery Detection Strategy

A method suffers from Shotgun Surgery if it is called many times from many other classes. Object-Oriented Metrics in Practice, by Michele Lanza and Radu Marinescu, proposes the following detection strategy for Shotgun Surgery:

(CM > Short Memory Cap) AND (CC > Many)
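In plain terms: the method has more incoming calls than a developer can keep in mind (CM, calling methods), and those calls are spread over many classes (CC, calling classes). A tiny Python sketch, with placeholder thresholds:

```python
SHORT_MEMORY_CAP = 7   # more callers than fit in working memory (placeholder)
MANY = 10              # callers spread over "many" classes (placeholder)

def shotgun_surgery(method):
    # CM: number of distinct methods that call this method
    # CC: number of distinct classes those callers belong to
    return method["CM"] > SHORT_MEMORY_CAP and method["CC"] > MANY

print(shotgun_surgery({"CM": 25, "CC": 14}))  # True
print(shotgun_surgery({"CM": 25, "CC": 2}))   # False: callers live in few classes
```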

Continue Reading


In the previous articles in this series, we’ve seen how to identify several other code smells.

In this article we’ll see how to identify two types of efferent (outgoing) coupling code smells: Intensive Coupling and Dispersed Coupling.

Detection Strategies

Intensive Coupling

A method suffers from Intensive Coupling when it calls many other methods from a few classes. Object-Oriented Metrics in Practice, by Michele Lanza and Radu Marinescu, proposes the following detection strategy for Intensive Coupling:

(((CINT > Short Memory Cap) AND (CDISP < Half)) OR
  ((CINT > Few) AND (CDISP < A Quarter))) AND
  (MAXNESTING > Shallow)

Dispersed Coupling

A method suffers from Dispersed Coupling when it calls many other methods that are dispersed among many classes. The detection strategy for Dispersed Coupling is:

(CINT > Short Memory Cap) AND (CDISP >= Half) AND (MAXNESTING > Shallow)
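The two strategies are mirror images: both look for non-trivially nested methods with many outgoing calls (CINT), and the dispersion of those calls across classes (CDISP) decides which smell it is. A Python sketch of both predicates, with placeholder thresholds:

```python
# Placeholder thresholds; the book's values are derived statistically.
SHORT_MEMORY_CAP = 7
FEW = 3
HALF, A_QUARTER = 0.5, 0.25
SHALLOW = 1

def intensive_coupling(m):
    # Many calls (CINT) concentrated in few classes (low CDISP).
    return (((m["CINT"] > SHORT_MEMORY_CAP and m["CDISP"] < HALF)
             or (m["CINT"] > FEW and m["CDISP"] < A_QUARTER))
            and m["MAXNESTING"] > SHALLOW)

def dispersed_coupling(m):
    # Many calls spread across many classes (high CDISP).
    return (m["CINT"] > SHORT_MEMORY_CAP
            and m["CDISP"] >= HALF
            and m["MAXNESTING"] > SHALLOW)

m = {"CINT": 12, "CDISP": 0.8, "MAXNESTING": 3}
print(intensive_coupling(m), dispersed_coupling(m))  # False True
```

The same method can never trigger both: the CDISP conditions are disjoint.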

Continue Reading


In the previous articles in this series, we’ve seen how to identify several other code smells.

In this article we’ll see how to identify the Brain Method code smell.

Brain Method Detection Strategy

Brain Methods are methods that centralize the intelligence of a class. Object-Oriented Metrics in Practice, by Michele Lanza and Radu Marinescu, proposes the following detection strategy for Brain Methods:

(LOC > HighLocForClass/2) AND (CYCLO >= High) AND (MAXNESTING >= Several) AND (NOAV > Many)
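Read left to right: the method is long, complex, deeply nested and juggles many variables at once. A Python sketch of the condition – the thresholds below are placeholders (HighLocForClass in particular is language-dependent in the book):

```python
# Placeholder thresholds; HIGH_LOC_PER_CLASS is language-dependent in the book.
HIGH_LOC_PER_CLASS = 130
HIGH_CYCLO = 7
SEVERAL = 3
MANY = 5

def brain_method(m):
    return (m["LOC"] > HIGH_LOC_PER_CLASS / 2    # longer than half a big class
            and m["CYCLO"] >= HIGH_CYCLO         # high cyclomatic complexity
            and m["MAXNESTING"] >= SEVERAL       # deeply nested
            and m["NOAV"] > MANY)                # juggles many variables

print(brain_method({"LOC": 90, "CYCLO": 12, "MAXNESTING": 4, "NOAV": 9}))  # True
print(brain_method({"LOC": 90, "CYCLO": 3, "MAXNESTING": 4, "NOAV": 9}))   # False
```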

Continue Reading


In the previous articles in this series, we’ve seen how to identify several other code smells.

In this article we’ll see how to identify the Data Class code smell.

Data Class Detection Strategy

Data Classes are classes that expose their data directly and have few functional methods. Object-Oriented Metrics in Practice, by Michele Lanza and Radu Marinescu, proposes the following detection strategy for Data Classes:

(WOC < One Third) AND
(((NOPA + NOAM > Few) AND (WMC < High)) OR
 ((NOPA + NOAM > Many) AND (WMC < Very High)))
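The intuition: a class with a low ratio of functional methods (WOC) that exposes its data through public attributes (NOPA) or accessors (NOAM), while being too simple (WMC) to justify that exposure. A Python sketch with placeholder thresholds:

```python
# Placeholder thresholds, not the book's exact values.
ONE_THIRD = 1 / 3
FEW, MANY = 3, 5
HIGH_WMC, VERY_HIGH_WMC = 14, 47

def data_class(c):
    # WOC: ratio of functional (non-accessor) public methods - low means
    # the class is mostly data. NOPA: public attributes, NOAM: accessors.
    exposed_data = c["NOPA"] + c["NOAM"]
    return (c["WOC"] < ONE_THIRD
            and ((exposed_data > FEW and c["WMC"] < HIGH_WMC)
                 or (exposed_data > MANY and c["WMC"] < VERY_HIGH_WMC)))

print(data_class({"WOC": 0.2, "NOPA": 4, "NOAM": 2, "WMC": 8}))   # True
print(data_class({"WOC": 0.8, "NOPA": 4, "NOAM": 2, "WMC": 8}))   # False
```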

Continue Reading


In the previous blog post we have seen how to detect potential God Classes with NDepend. In this article we’ll see how to detect methods that suffer from Feature Envy.

Feature Envy Detection Strategy

The Feature Envy code smell refers to methods that are more interested in the data of other classes than in their own. Object-Oriented Metrics in Practice, by Michele Lanza and Radu Marinescu, proposes the following detection strategy for Feature Envy:

(ATFD > Few) AND (LAA < One Third) AND (FDP <= Few)
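In words: the method accesses many foreign attributes (ATFD), rarely touches its own class’s data (LAA, locality of attribute accesses), and that foreign data comes from only a few classes (FDP) – so the method probably belongs in one of them. A Python sketch with placeholder thresholds:

```python
# Placeholder thresholds; the book derives the real values statistically.
FEW = 3
ONE_THIRD = 1 / 3

def feature_envy(m):
    return (m["ATFD"] > FEW           # accesses many foreign attributes...
            and m["LAA"] < ONE_THIRD  # ...rarely touches its own data...
            and m["FDP"] <= FEW)      # ...and the foreign data comes from
                                      # few classes, so it could move there

print(feature_envy({"ATFD": 6, "LAA": 0.2, "FDP": 2}))  # True
print(feature_envy({"ATFD": 6, "LAA": 0.2, "FDP": 9}))  # False: too dispersed
```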

Continue Reading