Clean Code, Quality

Using CQLinq

In a previous blog post we saw how to detect common code smells using NDepend. In that version, I implemented the detection strategies with CQLinq. This worked well, but it had some caveats:

  • I needed to duplicate some of the metric definitions across several detection strategies. (This might not be a problem in a future version of NDepend.)
  • It was hard to run the analysis across several Visual Studio solutions, as I needed to open each solution, load the NDepend project and the rule file, and run the NDepend analysis.
  • If you wanted to process the NDepend results, you’d need to first export them as an XML file. Then you could write some custom code to transform from XML to whatever format you needed.

Using the NDepend.API

So, for a more automated analysis of large codebases (that might contain many solutions), I decided to migrate the detection strategies from CQLinq to the NDepend.API.

The translation went quite smoothly and you can find the code here. The Getting Started documentation and the existing open source Power Tools (that come with the NDepend installation) helped me hit the ground running. The only thing I missed was the default notmycode queries. These come out of the box with the default CQLinq ruleset and help you exclude generated code. If you use the NDepend.API, you won’t have the notmycode prefix, so you need to re-implement the exclusions in code.
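
For example, a name-based replacement can be a small filter over the analysed code base. This is only a sketch: the name patterns and the Application.Types member are assumptions to adapt to your own codebase, and the default notmycode queries are more robust because they also look at generated-code attributes and source file names.

using System.Collections.Generic;
using System.Linq;
using NDepend.CodeModel;   // ICodeBase, IType

static class MyCode
{
    // Sketch of a notmycode-style exclusion based on name patterns (assumptions).
    public static IEnumerable<IType> Types(ICodeBase codeBase) =>
        codeBase.Application.Types.Where(t =>
            !t.Name.EndsWith("Resources") &&                    // e.g. generated resource classes
            !t.ParentNamespace.Name.EndsWith(".Migrations"));   // e.g. EF migrations
}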

Now, I can point it to a folder that contains several Visual Studio solutions. It will create NDepend projects for all solutions, run the NDepend analysis, run the custom detection strategies and export the results as JSON. Of course, there are trade-offs: with the NDepend.API, I didn’t have the quick feedback that you get when writing CQLinq rules.
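
For reference, the workflow above boils down to something like the sketch below. It is based on the Getting Started samples and the Power Tools, so treat the exact API member names as assumptions that may differ between NDepend versions; the folder and output paths are placeholders.

using System.Linq;
using NDepend;           // NDependServicesProvider
using NDepend.Analysis;  // RunAnalysis()
using NDepend.Path;      // ToAbsoluteFilePath()
using NDepend.Project;

var services = new NDependServicesProvider();
var projectManager = services.ProjectManager;
var vsManager = services.VisualStudioManager;

// Find every solution under a root folder (placeholder path).
var solutions = System.IO.Directory.GetFiles(@"C:\code", "*.sln", System.IO.SearchOption.AllDirectories);

foreach (var sln in solutions)
{
    // Resolve the assemblies built by the solution and wrap them in a temporary NDepend project.
    var assemblies = vsManager.GetAssembliesFromVisualStudioSolutionOrProject(sln.ToAbsoluteFilePath());
    var project = projectManager.CreateTemporaryProject(assemblies, TemporaryProjectMode.Temporary);

    // Run the NDepend analysis and query the resulting code model.
    var codeBase = project.RunAnalysis().CodeBase;
    var suspects = codeBase.Application.Types
        .Where(t => t.NbLinesOfCode > 200)        // placeholder for a real detection strategy
        .Select(t => new { t.FullName, t.NbLinesOfCode })
        .ToList();

    // Export the findings as JSON (System.Text.Json here; any serializer would do).
    System.IO.File.WriteAllText(sln + ".findings.json", System.Text.Json.JsonSerializer.Serialize(suspects));
}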

If you’d like to have a look at the end result, you can check out the code on GitHub.

Architecture, Clean Code, Quality

Last week was a good week for the IT community in Iasi thanks to Codecamp – 2018 autumn edition. One of their masterclasses caught my eye – Crafting Code by Sandro Mancuso. I have been following Sandro’s work for a while now, so this was a great opportunity for me to put the theory into practice. This blog post contains some of the things I’ve learned during the training.

This was a 2 day, hands-on course, focused on TDD, using mocks as a design tool through Outside-In TDD and working with Legacy Code. All exercises required pairing, which was a good opportunity to meet and learn from other people.

TDD

The focus of the first day was to learn the basics of TDD. Here are some of the highlights:

  • Think of tests as specifications for the unit under test.
  • How to name a test. Always try to make your code read well in English. If you’re testing an Account class, name the test class AccountShould. Each test name then continues the sentence – e.g. Increase_Current_Balance_When_Making_A_Deposit. This reads nicely, contains terms used by the business (ubiquitous language), and specifies clearly what the test does (see the sketch after this list).
  • The order in which to write the Given, When, Then is important. Start with Then, since this should be obvious from the test name. Then write the When and the Given. Implementing the steps in this order will keep the test focused and ensure we’re not doing too much in the Given step.
  • If the test that you’ve just written goes immediately to Green, then maybe the previous test took too big of a leap. TDD is about Red, Green, Refactor, not Red, Green, Green,…Green, Big Refactor.
  • Do not treat exceptional cases and the happy path at the same time. First flesh out the happy path, then add edge cases. This will usually get you to the solution faster.
  • Try to avoid the False Sense of Progress – writing lots of tests that pass quickly without helping you identify the solution. You should write the smallest test that points you in the right direction (i.e. the solution).
  • How to test a method that returns void – look for side effects without breaking encapsulation.
  • Don’t believe the single assert myth. A test should contain a single logical assert, but that can mean more than one assert statement, as long as they are logically grouped together.
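
A minimal NUnit sketch of the naming convention and of a single logical assert (the Account class here is made up for illustration):

using NUnit.Framework;

// Illustrative production code, just enough to make the test compile.
public class Account
{
    public decimal Balance { get; private set; }
    public Account(decimal initialBalance) => Balance = initialBalance;
    public void Deposit(decimal amount) => Balance += amount;
}

public class AccountShould
{
    [Test]
    public void Increase_Current_Balance_When_Making_A_Deposit()
    {
        // Given
        var account = new Account(initialBalance: 100m);

        // When
        account.Deposit(50m);

        // Then – one logical assert about the new balance
        Assert.That(account.Balance, Is.EqualTo(150m));
    }
}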

After that, we focused on the two main styles of TDD: classicist and outside-in. (Sandro also mentioned a more extreme style – TDD as if you meant it. If you want to check it out, have a look at Adrian Bolboaca’s blog.)

Classicist (Chicago school)

  • This is a good way to test drive an algorithm, data manipulation or conversion, when you know the inputs and outputs, but you don’t know anything about the implementation.
  • The design happens in the Refactor step. Because of this, it can be harder to get to a good design if the unit under test touches many domains (e.g. Payment, Shipping).
  • Use the transformation priority premise to get from Red to Green. This can help you avoid writing test code that duplicates production code.
  • As the tests get more specific, the code gets more generic. So look for ways to move data out of the algorithm.
  • You cannot refactor a switch statement step by step; you need to rewrite the whole thing. So try to avoid switches when test driving an algorithm.
  • Recommended book: Test Driven Development: By Example by Kent Beck

Outside-In (London school)

  • Use this when you have an idea about the implementation and the internals of the unit under test.
  • Use mocks as a design tool. Mocks get a bad name because many people misuse them. They can be a powerful tool when they are used correctly.
  • Most use cases don’t require strict mocking. Some really high risk apps (for health care, rockets, nuclear plants) might benefit from it.
  • Don’t mock private methods, even if the framework allows it. Even though you would write more tests, it would not lead to a better design.
  • Don’t use Argument.Any when verifying method calls. The arguments are part of the contract, so they should be checked (see the example after this list).
  • Recommended book: Growing Object-Oriented Software, Guided by Tests by Steve Freeman and Nat Pryce.
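
To illustrate the point about Argument.Any, here is a sketch using Moq (where the equivalent is It.IsAny); the verification pins down the actual arguments instead of accepting anything. The Checkout and IPaymentGateway types are made up for illustration.

using Moq;
using NUnit.Framework;

public interface IPaymentGateway
{
    void Charge(string accountId, decimal amount);
}

public class Checkout
{
    private readonly IPaymentGateway _gateway;
    public Checkout(IPaymentGateway gateway) => _gateway = gateway;
    public void Pay(string accountId, decimal amount) => _gateway.Charge(accountId, amount);
}

public class CheckoutShould
{
    [Test]
    public void Charge_The_Payment_Gateway_With_The_Order_Total()
    {
        var gateway = new Mock<IPaymentGateway>();
        var checkout = new Checkout(gateway.Object);

        checkout.Pay("ACC-1", 99.90m);

        // The arguments are part of the contract: verify them explicitly,
        // rather than with It.IsAny<decimal>().
        gateway.Verify(g => g.Charge("ACC-1", 99.90m), Times.Once);
    }
}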

Using Outside-In TDD to implement a business feature

We started the second day with an ATDD exercise. Sandro took this opportunity to talk about Outside-In Design:

Architecture vs. Design 

  • Architecture – These are the systems that are part of the product and the way they interact. Each one should be treated as a black box. Simon Brown’s container view (part of the C4 model) came to mind.
  • Macro Design – the architecture of each system. This is where you choose MVC, layers, ports and adapters, clean architecture (Simon Brown has an interesting post on the different styles).
  • Micro Design – how classes collaborate and what modules you need.

When practicing Outside-In TDD, it is recommended to think about the application’s architecture and macro design beforehand. Then you can use TDD to drive the micro design. When you start thinking of how to make the first Acceptance Test pass, you’ll need to make lots of design decisions before writing any code.

Test Types

There are a lot of conflicting definitions for test types. What’s important is for your team to know exactly what you mean when you say, for example, Integration Test or Component Test. Sandro briefly described a potential test classification:

  • Acceptance Test – tests a behavior of the system. The entry point is usually the Application Service (from DDD; a Use Case in Clean Architecture, or an Action in Interaction-Driven Development). External dependencies (e.g. databases) can be mocked (white box testing) or we can use the real implementations (black box testing).
  • Unit test – the unit under test is a single class or a small group of classes
  • Component Test – the unit under test is the Domain Model
  • Feature Test – the unit under test is the Application Service and the Domain Model
  • Integration Test – testing classes at the system boundaries (e.g. testing the SQL implementation of a Repository)
  • User Journey Test – the unit under test is the UI; the backend is mocked

You start with an Acceptance Test, then move to the other test types, as needed, while mocking collaborators.

Testing and Refactoring Legacy Code

This is the part that really impressed many of us in the audience. I had seen Sandro’s session on Testing and Refactoring Legacy Code back in 2013, but I enjoyed seeing it live. It is one of the most useful presentations I’ve seen, because it was immediately applicable to the work I was doing. It also led me to Michael Feathers’ Working Effectively with Legacy Code. If you’re working with legacy code, you need to read this book. It will help you when you get stuck.

Some tips from the session:

  • Use Dependency Breaking techniques (e.g. Subclass and Override Method) in order to write tests for legacy code (see the sketch after this list).
  • Test from the shallowest branch, since it contains the lowest number of dependencies.
  • Refactor from the deepest branch.
  • Use Test Data Builders to make tests more readable.
  • Use Guard Clauses to make the happy path more visible.
  • Use the Balanced Abstraction Principle to make sure that everything in a method is at the same level of abstraction. Public methods should tell a story.
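
For example, Subclass and Override Method introduces a seam so that a test can swap out an awkward dependency without touching the production call sites. This is only a sketch: TripService, ISession and SessionFactory are made-up stand-ins for whatever your legacy code depends on.

using System.Collections.Generic;

public interface ISession
{
    IList<string> FindTripsOf(string userId);
}

// Stand-in for the kind of static/global access that makes legacy code hard to test.
public static class SessionFactory
{
    public static ISession OpenSession() =>
        throw new System.InvalidOperationException("Not reachable from unit tests");
}

public class TripService
{
    public int CountTripsOf(string userId) => OpenSession().FindTripsOf(userId).Count;

    // Seam: protected virtual, so a test subclass can override it.
    protected virtual ISession OpenSession() => SessionFactory.OpenSession();
}

// Test-only subclass that overrides the seam with a fake.
public class TestableTripService : TripService
{
    private readonly ISession _fakeSession;
    public TestableTripService(ISession fakeSession) => _fakeSession = fakeSession;
    protected override ISession OpenSession() => _fakeSession;
}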

Conclusion

As I said, I was aware of Sandro’s work. Things made sense while reading the blog posts but only “clicked” during the course. This is because the course relied on coding exercises, pairing and on Sandro critiquing our code (which he did a lot!). And we all know that there is no learning without experimentation and playing around.

At the end of the course, my only complaint was that it ended just as we were starting to delve into more advanced topics: design and architecture. Fortunately, there is another course that tackles these subjects – Crafted Design. So hopefully I’ll attend that one soon!

In conclusion, this was the best training I’ve attended. Sandro’s passion and experience were visible from the get-go. The advice was pragmatic. The discussion about the different options he considered while designing also gave us a glimpse into his train of thought. It was great to have the opportunity to learn from a software craftsman. And, as a bonus, we also talked a bit about BDD and DDD, which helped me confirm some of my ideas and see other things in a new light.

So don’t miss the chance to attend this course!

Clean Code

This article recaps how to identify some of the most common code smells using NDepend. The basis of this series is Object-Oriented Metrics in Practice, by Michele Lanza and Radu Marinescu. This book describes (among other things) how you can use several targeted metrics to implement detection strategies for identifying design disharmonies (code smells). You can read a summary of the book and my review in this article.

Detection Strategies

The design disharmonies are split into three categories: Identity Disharmonies, Collaboration Disharmonies and Classification Disharmonies. A detection strategy is a composed logical condition, based on a set of metrics, that filters out the code elements affected by a given disharmony.

Identity Disharmonies

Identity disharmonies affect methods and classes. These can be identified by looking at an element in isolation.

Collaboration Disharmonies

Collaboration Disharmonies affect the way several entities collaborate to perform a specific functionality.

Classification Disharmonies

Classification Disharmonies affect hierarchies of classes.

Conclusion

These detection strategies identify potential culprits. You need to analyze the candidates and decide whether each one is a real issue or just a false positive. I ended up adding some more project-specific filters to ignore most of the false positives. Adding some basic where clauses that exclude certain namespace or class name patterns can get you a long way. But, of course, these depend on your specific project and conventions. The beauty of NDepend is that you can update the queries as you wish: add filters, play with the thresholds or add more conditions.
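
For example, a CQLinq query can be narrowed with a couple of extra where clauses. The thresholds and name patterns below are assumptions that you would adapt to your own project:

// <Name>Large classes, excluding project-specific false positives</Name>
warnif count > 0
from t in JustMyCode.Types
where t.NbLinesOfCode > 200            // assumed threshold
   && t.Methods.Count() > 20           // assumed threshold
   // project-specific exclusions (assumed name patterns)
   && !t.ParentNamespace.Name.EndsWith(".Migrations")
   && !t.Name.EndsWith("Dto")
select new { t, t.NbLinesOfCode }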

Analyzing a suspect can be done in code, but you can also use other tools. NDepend has some views that can help you with the investigation: Treemaps, Dependency Graph, Dependency Structure Matrix, query results. In Object-Oriented Metrics in Practice the authors use Class Blueprints, but I don’t know of a tool that can generate these views for .NET code.

After identifying the issues, you can start refactoring. For some strategies on how to tackle each disharmony or how to prioritize them, I recommend reading the book.

Clean Code

In the previous articles in this series, we saw how to detect several other code smells with NDepend.

In this article we’ll see how to identify the Tradition Breaker code smell.

Tradition Breaker Detection Strategy

A class suffers from Tradition Breaker when, instead of specializing the services inherited from its parent, it adds a large set of new services that do not fit the parent’s interface – it breaks the “tradition” set by the base class. Object-Oriented Metrics in Practice, by Michele Lanza and Radu Marinescu, proposes the following detection strategy for Tradition Breaker:

((NAS >= Average NOM per class) AND (PNAS >= Two Thirds)) AND
(((AMW > Average) OR (WMC >= Very High)) AND (NOM >= High)) AND
((Parent’s AMW > Average) AND (Parent’s NOM > High/2) AND (Parent’s WMC >= Very High/2))

This might seem complex at first glance. After we go over the definition of each metric, we’ll break this detection strategy into three distinct parts. This way we’ll see why the authors picked these conditions, and it will make more sense.

Continue Reading

Clean Code

In the previous articles in this series, we saw how to detect several other code smells with NDepend.

In this article we’ll see how to identify the Refused Parent Bequest code smell.

Refused Parent Bequest Detection Strategy

A class suffers from Refused Parent Bequest when it doesn’t use the protected members of its parent. Object-Oriented Metrics in Practice, by Michele Lanza and Radu Marinescu, proposes the following detection strategy for Refused Parent Bequest:

(((NProtM > Few) AND (BUR < A Third)) OR (BOvR < A Third)) AND
(((AMW > Average) OR (WMC > Average)) AND (NOM > Average))

Continue Reading

Clean Code

In the previous articles in this series, we saw how to detect several other code smells with NDepend.

In this article we’ll see how to identify an afferent (incoming) coupling code smell: Shotgun Surgery.

Shotgun Surgery Detection Strategy

A method suffers from Shotgun Surgery if it is called from many other methods spread across many classes, so that a change to it ripples throughout the codebase. Object-Oriented Metrics in Practice, by Michele Lanza and Radu Marinescu, proposes the following detection strategy for Shotgun Surgery:

(CM > Short Memory Cap) AND (CC > Many)
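
Mapped onto NDepend’s code model, this could look roughly like the CQLinq sketch below, where CM is approximated by the methods calling the candidate and CC by the distinct classes those callers live in; the thresholds are assumptions to tune:

// <Name>Shotgun Surgery (sketch)</Name>
warnif count > 0
from m in JustMyCode.Methods
let callers = m.MethodsCallingMe
let cm = callers.Count()
let cc = callers.Select(c => c.ParentType).Distinct().Count()
where cm > 10   // CM > Short Memory Cap (assumed value)
   && cc > 5    // CC > Many (assumed value)
select new { m, cm, cc }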

Continue Reading

Clean Code

In the previous articles in this series, we saw how to detect several other code smells with NDepend.

In this article we’ll see how to identify two types of efferent (outgoing) coupling code smells: Intensive Coupling and Dispersed Coupling.

Detection Strategies

Intensive Coupling

A method suffers from Intensive Coupling when it calls many other methods from a few classes. Object-Oriented Metrics in Practice, by Michele Lanza and Radu Marinescu, proposes the following detection strategy for Intensive Coupling:

(((CINT > Short Memory Cap) AND (CDISP < Half)) OR
  ((CINT > Few) AND (CDISP < A Quarter))) AND
  (MAXNESTING > Shallow)

Dispersed Coupling

A method suffers from Dispersed Coupling when it calls many other methods that are dispersed among many classes. The detection strategy for Dispersed Coupling is:

(CINT > Short Memory Cap) AND (CDISP >= Half) AND (MAXNESTING > Shallow)
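
A rough CQLinq sketch for Dispersed Coupling is shown below; Intensive Coupling uses the same building blocks with the CDISP condition inverted. Here CINT is approximated by the distinct methods called on other classes, CDISP by the share of classes providing those methods, and MAXNESTING by ILNestingDepth; the thresholds are assumptions to tune.

// <Name>Dispersed Coupling (sketch)</Name>
warnif count > 0
from m in JustMyCode.Methods
let called = m.MethodsCalled.Where(c => c.ParentType != m.ParentType)
let cint = called.Count()
let cdisp = cint == 0 ? 0.0 : (double)called.Select(c => c.ParentType).Distinct().Count() / cint
where cint > 7              // CINT > Short Memory Cap (assumed value)
   && cdisp >= 0.5          // CDISP >= Half
   && m.ILNestingDepth > 1  // MAXNESTING > Shallow
select new { m, cint, cdisp }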

Continue Reading

Clean Code

In the previous articles in this series, we saw how to detect several other code smells with NDepend.

In this article we’ll see how to identify the Brain Method code smell.

Brain Method Detection Strategy

Brain Methods are methods that centralize the intelligence of a class. Object-Oriented Metrics in Practice, by Michele Lanza and Radu Marinescu, proposes the following detection strategy for Brain Methods:

(LOC > HighLocForClass/2) AND (CYCLO >= High) AND (MAXNESTING >= Several) AND (NOAV > Many)
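
Using NDepend’s built-in metrics, a rough CQLinq equivalent could look like the sketch below, where LOC maps to NbLinesOfCode, CYCLO to CyclomaticComplexity, MAXNESTING to ILNestingDepth, and NOAV is approximated with locals plus parameters; the thresholds are assumptions to tune:

// <Name>Brain Method (sketch)</Name>
warnif count > 0
from m in JustMyCode.Methods
where m.NbLinesOfCode > 65                // LOC > HighLocForClass / 2 (assumed value)
   && m.CyclomaticComplexity >= 7         // CYCLO >= High (assumed value)
   && m.ILNestingDepth >= 3               // MAXNESTING >= Several (assumed value)
   && m.NbVariables + m.NbParameters > 7  // NOAV > Many (approximation)
select new { m, m.NbLinesOfCode, m.CyclomaticComplexity, m.ILNestingDepth }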

Continue Reading

Clean Code

In the previous articles in this series, we saw how to detect several other code smells with NDepend.

In this article we’ll see how to identify the Data Class code smell.

Data Class Detection Strategy

Data Classes are classes that expose their data directly and have few functional methods. Object-Oriented Metrics in Practice, by Michele Lanza and Radu Marinescu, proposes the following detection strategy for Data Classes:

(WOC < One Third) AND
(((NOPA + NOAM > Few) AND (WMC < High)) OR
 ((NOPA + NOAM > Many) AND (WMC < Very High)))
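
A rough CQLinq sketch is shown below; NOPA is approximated with public fields, NOAM with property getters/setters, WMC with the sum of cyclomatic complexities and WOC with the share of functional public methods among public members. The thresholds are assumptions to tune.

// <Name>Data Class (sketch)</Name>
warnif count > 0
from t in JustMyCode.Types
where t.IsClass
let nopa = t.Fields.Count(f => f.IsPubliclyVisible && !f.IsStatic)
let noam = t.Methods.Count(m => m.IsPropertyGetter || m.IsPropertySetter)
let publicMethods = t.Methods.Where(m => m.IsPubliclyVisible && !m.IsConstructor)
let functional = publicMethods.Count(m => !m.IsPropertyGetter && !m.IsPropertySetter)
let woc = (nopa + publicMethods.Count()) == 0 ? 1.0 : (double)functional / (nopa + publicMethods.Count())
let wmc = t.Methods.Sum(m => (int)(m.CyclomaticComplexity ?? 0))
where woc < 0.33                             // WOC < One Third
   && (((nopa + noam > 4) && (wmc < 20))     // Few / High (assumed values)
    || ((nopa + noam > 7) && (wmc < 47)))    // Many / Very High (assumed values)
select new { t, nopa, noam, woc, wmc }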

Continue Reading

Clean Code

In the previous blog post we have seen how to detect potential God Classes with NDepend. In this article we’ll see how to detect methods that suffer from Feature Envy.

Feature Envy Detection Strategy

The Feature Envy code smell refers to methods that are more interested in the data of other classes than in the data of their own class. Object-Oriented Metrics in Practice, by Michele Lanza and Radu Marinescu, proposes the following detection strategy for Feature Envy:

(ATFD > Few) AND (LAA < One Third) AND (FDP <= Few)
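
A rough CQLinq sketch could approximate ATFD with the foreign fields a method uses, LAA with the share of accesses that hit the method’s own class, and FDP with the number of classes owning that foreign data. The FieldsUsed member and the thresholds are assumptions to validate against your NDepend version.

// <Name>Feature Envy (sketch)</Name>
warnif count > 0
from m in JustMyCode.Methods
where !m.IsConstructor
let fieldsUsed = m.FieldsUsed
let foreign = fieldsUsed.Where(f => f.ParentType != m.ParentType)
let atfd = foreign.Count()
let laa = fieldsUsed.Count() == 0 ? 1.0 : (double)(fieldsUsed.Count() - atfd) / fieldsUsed.Count()
let fdp = foreign.Select(f => f.ParentType).Distinct().Count()
where atfd > 4      // ATFD > Few (assumed value)
   && laa < 0.33    // LAA < One Third
   && fdp <= 4      // FDP <= Few (assumed value)
select new { m, atfd, laa, fdp }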

Continue Reading