Clean Code, Quality

Using CQLinq

In a previous blog post we saw how to detect common code smells using NDepend. In that version, I implemented the detection strategies with CQLinq. This worked great, but it had some caveats:

  • I needed to duplicate some of the metric definitions across several detection strategies. (This might not be a problem in a future version of NDepend.)
  • It was hard to run the analysis across several Visual Studio solutions, as I needed to open each solution, load the NDepend project and the rule file, and run the NDepend analysis.
  • If you wanted to process the NDepend results, you first needed to export them as an XML file. Then you could write some custom code to transform the XML into whatever format you needed.

Using the NDepend.API

So, for a more automated analysis of large codebases (that might contain many solutions), I decided to migrate the detection strategies from CQLinq to the NDepend.API.

The translation went quite smoothly and you can find the code here. The Getting Started documentation and the existing open source Power Tools (that come with the NDepend installation) helped me hit the ground running. The only thing I missed was the default notmycode queries. These come out of the box with the default CQLinq ruleset and help you exclude generated code. If you use the NDepend.API, you won’t have the notmycode prefix, so you need to re-implement the exclusions in code.

Now I can point it at a folder that contains several Visual Studio solutions. It will create NDepend projects for all the solutions, run the NDepend analysis, run the custom detection strategies, and export the results as JSON. Of course, there are trade-offs: with the NDepend.API, I didn’t have the quick feedback loop that you get when writing CQLinq rules.
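Here is a minimal sketch of that pipeline, based on the NDepend.API Getting Started samples. The root folder path is a placeholder, the JSON export is elided, and exact member names may differ between NDepend versions, so treat it as an outline rather than a drop-in implementation:

using System;
using System.IO;
using System.Linq;
using NDepend;
using NDepend.Analysis;
using NDepend.Path;
using NDepend.Project;

class Program
{
    static void Main()
    {
        var servicesProvider = new NDependServicesProvider();
        var projectManager = servicesProvider.ProjectManager;
        var visualStudioManager = servicesProvider.VisualStudioManager;

        // Find every solution under the root folder (the path is a placeholder).
        foreach (var slnPath in Directory.GetFiles(@"C:\repos", "*.sln", SearchOption.AllDirectories))
        {
            var sln = slnPath.ToAbsoluteFilePath();

            // Resolve the assemblies built by the solution and wrap them in a temporary NDepend project.
            var assemblies = visualStudioManager.GetAssembliesFromVisualStudioSolutionOrProject(sln);
            var project = projectManager.CreateTemporaryProject(assemblies, TemporaryProjectMode.Temporary);

            // Run the NDepend analysis and get the code model to query against.
            var analysisResult = project.RunAnalysis();
            var codeBase = analysisResult.CodeBase;

            // Run the detection strategies against the code model; this stand-in
            // query just lists suspiciously large types.
            var suspects = codeBase.Application.Types
                .Where(t => t.NbLinesOfCode > 500)
                .Select(t => t.FullName);

            // Export 'suspects' as JSON here, with whatever serializer you prefer.
            Console.WriteLine($"{slnPath}: {suspects.Count()} suspects");
        }
    }
}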

If you’d like to have a look at the end result, you can check out the code on GitHub.

Architecture, Quality

We have all used code analysis tools on our projects, and they are useful for identifying some code smells. The issue is that most of them treat metrics in isolation, and isolated metrics can’t tell you whether a design is good or bad. You need more context.

In this blog post we’ll see how to go beyond code smells. We’ll see how to identify design smells and inappropriate coupling in the technical architecture. We’ll define detection strategies for common design smells (like God Class and Feature Envy) and implement them using NDepend. Last but not least, we’ll see how we can define fitness functions that detect dependency violations in our application’s architecture.
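As a taste of what such a fitness function can look like, here is a hedged CQLinq sketch of a dependency rule, patterned after NDepend’s built-in layering rules. The layer namespaces (Domain, Infrastructure) are placeholders for your own architecture:

// <Name>Domain layer must not depend on Infrastructure layer</Name>
warnif count > 0

// The name patterns below are regular expressions; adjust them to your namespaces.
let domainTypes = Application.Namespaces.WithNameLike("Domain").ChildTypes()
let infrastructureTypes = Application.Namespaces.WithNameLike("Infrastructure").ChildTypes()

from t in domainTypes.UsingAny(infrastructureTypes)
select new { t, infrastructureTypesUsed = t.TypesUsed.Intersect(infrastructureTypes) }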

Continue Reading
Clean Code

This article recaps how to identify some of the most common code smells using NDepend. The basis of this series is Object-Oriented Metrics in Practice, by Michele Lanza and Radu Marinescu. This book describes (among other things) how you can use several targeted metrics to implement detection strategies for identifying design disharmonies (code smells). You can read a summary of the book and my review in this article.

Detection Strategies

The design disharmonies are split into three categories: Identity Disharmonies, Collaboration Disharmonies and Classification Disharmonies. A detection strategy is a composed logical condition, based on a set of metrics, that filters for the design fragments affected by a given disharmony.
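In CQLinq terms, a detection strategy typically follows the skeleton below: compute a few metrics, then compose threshold conditions with AND/OR. The metrics and thresholds here are placeholders, not one of the book’s strategies:

warnif count > 0
from t in JustMyCode.Types
// Filtering: compute the metrics the strategy needs...
let LOC = t.NbLinesOfCode
let WMC = t.Methods.Sum(m => (int?)m.CyclomaticComplexity)
// Composition: combine the threshold conditions.
where LOC > 200 && WMC > 50   // illustrative thresholds
select new { t, LOC, WMC }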

Identity Disharmonies

Identity disharmonies affect methods and classes. These can be identified by looking at an element in isolation.

Collaboration Disharmonies

Collaboration Disharmonies affect the way several entities collaborate to perform a specific piece of functionality.

Classification Disharmonies

Classification Disharmonies affect hierarchies of classes.

Conclusion

These detection strategies identify potential culprits. You still need to analyze the candidates and decide whether each one is a real issue or a false positive. I ended up adding some more project-specific filters to ignore most of the false positives. Adding some basic where clauses that exclude certain namespace or class name patterns can get you a long way, as in the sketch below. But, of course, these depend on your specific project and conventions. The beauty of NDepend is that you can update the queries as you wish: add filters, play with the thresholds or add more conditions.
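For example, a couple of string-based filters like these removed most of my false positives. The name patterns are hypothetical and should match your own conventions:

warnif count > 0
from t in JustMyCode.Types
// Project-specific exclusions; the patterns are hypothetical examples.
where !t.FullName.Contains(".Generated.")   // skip generated code
   && !t.FullName.EndsWith("Dto")           // skip data transfer objects
// ...the actual detection strategy conditions follow here.
   && t.NbLinesOfCode > 500
select t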

You can analyze a suspect by reading the code, but other tools can help too. NDepend has several views that support the investigation: Treemaps, Dependency Graph, Dependency Structure Matrix and query results. In Object-Oriented Metrics in Practice the authors use Class Blueprints, but I don’t know of a tool that can generate these views for .NET code.

After identifying the issues, you can start refactoring. For some strategies on how to tackle each disharmony or how to prioritize them, I recommend reading the book.

Clean Code

In the previous articles in this series, we looked at several other detection strategies.

In this article we’ll see how to identify the Tradition Breaker code smell.

Tradition Breaker Detection Strategy

A class suffers from Tradition Breaker when it breaks the “tradition” set by its parent: instead of specializing the inherited interface, it adds a large set of new services that are unrelated to the ones it inherits. Object-Oriented Metrics in Practice, by Michele Lanza and Radu Marinescu, proposes the following detection strategy for Tradition Breaker:

((NAS >= Average NOM per class) AND (PNAS >= Two Thirds)) AND
(((AMW > Average) OR (WMC >= Very High)) AND (NOM >= High)) AND
((Parent’s AMW > Average) AND (Parent’s NOM > High/2) AND (Parent’s WMC >= Very High/2))

This might seem complex at first glance. After we go over the definition of each metric, we’ll break this detection strategy into three distinct parts. This way we’ll see why the authors picked these conditions and it will make more sense.
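To make the first part concrete, here is a hedged CQLinq sketch of the “child adds many new services” clause. NAS is approximated by comparing method names with the base class, and the thresholds are illustrative stand-ins, not the book’s calibrated values:

warnif count > 0
from t in JustMyCode.Types
where t.IsClass && t.BaseClass != null && !t.BaseClass.IsThirdParty
let baseMethodNames = t.BaseClass.Methods.Select(bm => bm.SimpleName).ToArray()
let publicMethods = t.Methods.Where(m => m.IsPublic && !m.IsConstructor).ToArray()
// NAS: public methods that don't match anything on the base class.
let NAS = publicMethods.Count(m => !baseMethodNames.Contains(m.SimpleName))
let PNAS = publicMethods.Length == 0 ? 0.0 : (double)NAS / publicMethods.Length
where NAS >= 7 && PNAS >= 2.0/3   // Average NOM per class and Two Thirds, illustrative
// The full strategy also requires the complexity conditions on the child and its parent.
select new { t, NAS, PNAS }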

Continue Reading

Clean Code

In the previous articles in this series, we looked at several other detection strategies.

In this article we’ll see how to identify the Refused Parent Bequest code smell.

Refused Parent Bequest Detection Strategy

A class suffers from Refused Parent Bequest when it doesn’t use the protected members of its parent. Object-Oriented Metrics in Practice, by Michele Lanza and Radu Marinescu, proposes the following detection strategy for Refused Parent Bequest:

(((NProtM > Few) AND (BUR < A Third)) OR (BOvR < A Third)) AND
(((AMW > Average) OR (WMC > Average)) AND (NOM > Average))
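A hedged CQLinq sketch of the first clause might look like this. For simplicity it only considers the parent’s protected methods (ignoring protected fields and the BOvR condition), and the thresholds are illustrative:

warnif count > 0
from t in JustMyCode.Types
where t.IsClass && t.BaseClass != null && !t.BaseClass.IsThirdParty
let protectedParentMethods = t.BaseClass.Methods.Where(pm => pm.IsProtected).ToArray()
let NProtM = protectedParentMethods.Length
// BUR: the share of the parent's protected methods that the child actually calls.
let used = protectedParentMethods.Count(pm => pm.MethodsCallingMe.Any(c => c.ParentType == t))
let BUR = NProtM == 0 ? 1.0 : (double)used / NProtM
where NProtM > 3 && BUR < 1.0/3   // Few and A Third, illustrative
select new { t, NProtM, BUR }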

Continue Reading

Clean Code

In the previous articles in this series, we looked at several other detection strategies.

In this article we’ll see how to identify an afferent (incoming) coupling code smell: Shotgun Surgery.

Shotgun Surgery Detection Strategy

A method suffers from Shotgun Surgery if it is called by many other methods that are spread across many classes. Object-Oriented Metrics in Practice, by Michele Lanza and Radu Marinescu, proposes the following detection strategy for Shotgun Surgery:

(CM > Short Memory Cap) AND (CC > Many)
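CM (the number of methods that would change together with this one) can be approximated with NDepend’s caller information, and CC is the number of distinct classes those callers live in. A hedged CQLinq sketch, with illustrative thresholds:

warnif count > 0
from m in JustMyCode.Methods
let callers = m.MethodsCallingMe
let CM = callers.Count()
let CC = callers.Select(c => c.ParentType).Distinct().Count()
where CM > 7 && CC > 5   // Short Memory Cap and Many, illustrative values
select new { m, CM, CC }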

Continue Reading

Clean Code

In the previous articles in this series, we looked at several other detection strategies.

In this article we’ll see how to identify two types of efferent (outgoing) coupling code smells: Intensive Coupling and Dispersed Coupling.

Detection Strategies

Intensive Coupling

A method suffers from Intensive Coupling when it calls many other methods from a few classes. Object-Oriented Metrics in Practice, by Michele Lanza and Radu Marinescu, proposes the following detection strategy for Intensive Coupling:

(((CINT > Short Memory Cap) AND (CDISP < Half)) OR
  ((CINT > Few) AND (CDISP < A Quarter))) AND
  (MAXNESTING > Shallow)

Dispersed Coupling

A method suffers from Dispersed Coupling when it calls many other methods that are dispersed among many classes. The detection strategy for Dispersed Coupling is:

(CINT > Short Memory Cap) AND (CDISP >= Half) AND (MAXNESTING > Shallow)
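Both strategies share the same ingredients, so a single hedged CQLinq sketch can flag either. CINT counts distinct methods called on other classes, CDISP is the ratio of provider classes to CINT, MAXNESTING is approximated with ILNestingDepth, and all thresholds are illustrative:

warnif count > 0
from m in JustMyCode.Methods
let called = m.MethodsCalled.Where(c => !c.IsThirdParty && c.ParentType != m.ParentType)
let CINT = called.Count()
let providers = called.Select(c => c.ParentType).Distinct().Count()
let CDISP = CINT == 0 ? 0.0 : (double)providers / CINT
let deepEnough = m.ILNestingDepth > 1   // Shallow, illustrative
let intensive = ((CINT > 7 && CDISP < 0.5) || (CINT > 3 && CDISP < 0.25)) && deepEnough
let dispersed = CINT > 7 && CDISP >= 0.5 && deepEnough
where intensive || dispersed
select new { m, CINT, CDISP, intensive, dispersed }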

Continue Reading

Clean Code

In the previous articles in this series, we looked at several other detection strategies.

In this article we’ll see how to identify the Brain Method code smell.

Brain Method Detection Strategy

Brain Methods are methods that centralize the intelligence of a class. Object-Oriented Metrics in Practice, by Michele Lanza and Radu Marinescu, proposes the following detection strategy for Brain Methods:

(LOC > HighLocForClass/2) AND (CYCLO >= High) AND (MAXNESTING >= Several) AND (NOAV > Many)
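A hedged CQLinq approximation of this strategy: NOAV is approximated with the local variable count (NbVariables), MAXNESTING with ILNestingDepth, and the thresholds are illustrative stand-ins for the book’s calibrated values:

warnif count > 0
from m in JustMyCode.Methods
where m.NbLinesOfCode > 65          // ~ half of a high class LOC
   && m.CyclomaticComplexity >= 7   // High
   && m.ILNestingDepth >= 3         // Several
   && m.NbVariables > 8             // Many
select new { m, m.NbLinesOfCode, m.CyclomaticComplexity, m.ILNestingDepth, m.NbVariables }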

Continue Reading

Clean Code

In the previous articles in this series, we looked at several other detection strategies.

In this article we’ll see how to identify the Data Class code smell.

Data Class Detection Strategy

Data Classes are classes that expose their data directly and have few functional methods. Object-Oriented Metrics in Practice, by Michele Lanza and Radu Marinescu, proposes the following detection strategy for Data Classes:

(WOC < One Third) AND
(((NOPA + NOAM > Few) AND (WMC < High)) OR
 ((NOPA + NOAM > Many) AND (WMC < Very High)))
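A hedged CQLinq sketch: NOPA is approximated with public instance fields, NOAM with property getters/setters, WOC as the share of functional public methods, and the thresholds are illustrative:

warnif count > 0
from t in JustMyCode.Types
where t.IsClass
let NOPA = t.Fields.Count(f => f.IsPublic && !f.IsStatic)
let NOAM = t.Methods.Count(m => m.IsPropertyGetter || m.IsPropertySetter)
let publicMethods = t.Methods.Where(m => m.IsPublic && !m.IsConstructor).ToArray()
let functional = publicMethods.Count(m => !m.IsPropertyGetter && !m.IsPropertySetter)
let WOC = publicMethods.Length == 0 ? 0.0 : (double)functional / publicMethods.Length
let WMC = t.Methods.Sum(m => (int?)m.CyclomaticComplexity)
where WOC < 1.0/3
   && ((NOPA + NOAM > 3 && WMC < 47)    // Few / High, illustrative
    || (NOPA + NOAM > 5 && WMC < 94))   // Many / Very High, illustrative
select new { t, WOC, NOPA, NOAM, WMC }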

Continue Reading

Clean Code

In the previous blog post we saw how to detect potential God Classes with NDepend. In this article we’ll see how to detect methods that suffer from Feature Envy.

Feature Envy Detection Strategy

The Feature Envy code smell refers to methods that are more interested in the data of other classes than in the data of their own class. Object-Oriented Metrics in Practice, by Michele Lanza and Radu Marinescu, proposes the following detection strategy for Feature Envy:

(ATFD > Few) AND (LAA < One Third) AND (FDP <= Few)
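A hedged CQLinq sketch: for simplicity it approximates attribute access through property getters/setters only (in the book, direct field access also counts towards ATFD and LAA), with illustrative thresholds:

warnif count > 0
from m in JustMyCode.Methods
where !m.IsAbstract
let accessors = m.MethodsCalled.Where(c => c.IsPropertyGetter || c.IsPropertySetter)
let foreign = accessors.Where(c => c.ParentType != m.ParentType).ToArray()
let own = accessors.Count(c => c.ParentType == m.ParentType)
let ATFD = foreign.Length
let LAA = (ATFD + own) == 0 ? 1.0 : (double)own / (ATFD + own)
let FDP = foreign.Select(c => c.ParentType).Distinct().Count()
where ATFD > 3 && LAA < 1.0/3 && FDP <= 3   // Few and One Third, illustrative
select new { m, ATFD, LAA, FDP }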

Continue Reading