Clean Code, Quality

Using CQLinq

In a previous blog post we saw how to detect common code smells using NDepend. In that version, I implemented the detection strategies with CQLinq. This worked great, but it had some caveats:

  • I needed to duplicate some of the metric definitions across several detection strategies. (This might not be a problem in a future version of NDepend.)
  • It was hard to run the analysis across several Visual Studio solutions, as I needed to open each solution, load the NDepend project and the rule file, and run the NDepend analysis.
  • If you wanted to process the NDepend results, you’d need to first export them as an XML file. Then you could write some custom code to transform the XML into whatever format you needed.

Using the NDepend.API

So, for a more automated analysis of large codebases (that might contain many solutions), I decided to migrate the detection strategies from CQLinq to the NDepend.API.

The translation went quite smoothly and you can find the code here. The Getting Started documentation and the existing open source Power Tools (that come with the NDepend installation) helped me hit the ground running. The only thing I missed was the default notmycode queries. These come out of the box with the default CQLinq ruleset and help you exclude generated code. If you use the NDepend.API, you don’t have the notmycode prefix, so you need to re-implement the exclusions in code.
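
For illustration, here is roughly how such an exclusion can look as a plain LINQ filter over the code base. This is a minimal sketch: IsGeneratedByCompiler comes straight from CQLinq, but the property chain for source files and the file-name heuristic are assumptions, and the real notmycode queries cover more cases.

using System.Collections.Generic;
using System.Linq;
using NDepend.CodeModel; // ICodeBase, IType

static class JustMyCodeFilter
{
    // A simplified stand-in for the default notmycode queries: drop
    // compiler-generated types and types declared in *.Designer.cs files.
    // (Property names approximate the NDepend.API surface.)
    public static IEnumerable<IType> JustMyCodeTypes(ICodeBase codeBase) =>
        codeBase.Application.Types.Where(t =>
            !t.IsGeneratedByCompiler &&
            !t.SourceDecls.Any(d =>
                d.SourceFile.FilePath.FileName.ToLower().EndsWith(".designer.cs")));
}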

Now, I can point the tool at a folder that contains several Visual Studio solutions. It will create NDepend projects for all the solutions, run the NDepend analysis, run the custom detection strategies and export the results as JSON. Of course, there are trade-offs. With the NDepend.API, I didn’t have the quick feedback loop that you get when writing CQLinq rules.
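
To give an idea of the end-to-end flow, here is a sketch of that loop. The NDepend.API calls (NDependServicesProvider, CreateTemporaryProject, RunAnalysis) follow the pattern used by the open source Power Tools, but treat the exact names and signatures as approximations and check the Getting Started docs; the assembly resolution here is deliberately naive.

using System.IO;
using System.Linq;
using NDepend;           // NDependServicesProvider
using NDepend.Path;      // ToAbsoluteFilePath()
using NDepend.Project;   // TemporaryProjectMode

static class BatchAnalysis
{
    static void Main(string[] args)
    {
        var projectManager = new NDependServicesProvider().ProjectManager;

        foreach (var solution in Directory.GetFiles(args[0], "*.sln", SearchOption.AllDirectories))
        {
            // Naive assumption: the solution's assemblies live in a bin folder next to it.
            var assemblies = Directory
                .GetFiles(Path.Combine(Path.GetDirectoryName(solution), "bin"), "*.dll")
                .Select(path => path.ToAbsoluteFilePath())
                .ToList();

            // Create a throwaway NDepend project for this solution and analyze it.
            var project = projectManager.CreateTemporaryProject(assemblies, TemporaryProjectMode.Temporary);
            var analysisResult = project.RunAnalysis();

            // From here, the custom detection strategies query analysisResult.CodeBase
            // and the findings are serialized to JSON (custom code, not NDepend.API).
        }
    }
}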

If you’d like to have a look at the end result, you can check out the code on GitHub.

Quality

Testing is an important part of building a product right. Continuous Delivery makes that more explicit by building quality in. In this blog post we’ll see how you can start off on the wrong foot with testing. Then we’ll see how asking basic questions like Why, What, How and Where can help you define a sound test strategy in a Continuous Delivery context.

The Deployment Pipeline

Most teams nowadays think about Continuous Delivery. Continuous Delivery means automating the release process, from code merge to production release. How do you do that? By using the deployment pipeline pattern, which models and automates the release process. Here is an example:

A deployment pipeline example
Continue Reading
Architecture, Quality

I think everybody agrees that testing is required in order to build a quality product. But there’s also a lot of confusion about the boundaries of each test type. What’s the scope of a unit test? What’s the difference between an integration test, an integrated test and a contract test? If you ask 3 developers about test boundaries, you’ll most likely get 3 different answers. For example, I still talk to people who believe a unit test should test a single class/method.

What’s clear is that most teams don’t have a consensus on the scope of the different types of automated tests and the differences between them. Getting to a universal consensus might be hard, but getting to a consensus inside the team should be easy enough. In this blog post we’ll see an example of how to do that.

Continue Reading
Architecture, Quality

We have all used code analysis tools on our projects, and these are useful for identifying some code smells. The issue is that most of them treat metrics in isolation, and isolated metrics can’t tell you if the design is good or bad. You need more context.

In this blog post we’ll see how to go beyond code smells. We’ll see how to identify design smells and inappropriate coupling in the technical architecture. We’ll define detection strategies for common design smells (like God Class and Feature Envy) and implement them using NDepend. Last but not least, we’ll see how we can define fitness functions that detect dependency violations in our application’s architecture.
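
As a taste of what a detection strategy looks like, here is a hedged CQLinq sketch of God Class, adapted from the thresholds in Object-Oriented Metrics in Practice. It is an approximation: NDepend doesn’t expose the book’s TCC and ATFD metrics directly, so LCOM stands in for low cohesion.

from t in JustMyCode.Types
where t.IsClass
// WMC: sum of the cyclomatic complexity of the type's methods
let wmc = t.Methods.Sum(m => (int?)m.CyclomaticComplexity)
where wmc >= 47 &&     // VERY HIGH functional complexity
      t.LCOM >= 0.8    // low cohesion (proxy for the book's TCC < 1/3)
orderby wmc descending
select new { t, wmc, t.LCOM, t.NbLinesOfCode }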

Continue Reading
Architecture, Clean Code, Quality

Last week was a good week for the IT community in Iasi thanks to Codecamp – 2018 autumn edition. One of their masterclasses caught my eye – Crafting Code by Sandro Mancuso. I have been following Sandro‘s work for a while now, so this was a great opportunity for me to put the theory into practice. This blog post contains some of the things I’ve learned during the training.

This was a 2-day, hands-on course focused on TDD, using mocks as a design tool through Outside-In TDD, and working with Legacy Code. All exercises required pairing, which was a good opportunity to meet and learn from other people.

TDD

The focus of the first day was to learn the basics of TDD. Here are some of the highlights:

  • Think of tests as specifications for the unit under test.
  • How to name a test. Always try to make your code read well in English. If you’re testing an Account class, name the test class AccountShould. Then each test should continue from there – e.g.: Increase_Current_Balance_When_Making_A_Deposit. This reads nicely, contains terms used by the business (ubiquitous language), and specifies clearly what the test does (see the sketch after this list).
  • The order in which to write the Given, When, Then steps is important. Start with Then, since it should be obvious from the test name. Then write the When and the Given. Implementing the steps in this order will keep the test focused and ensure we’re not doing too much in the Given step.
  • If the test that you’ve just written goes immediately to Green, then maybe the previous test took too big of a leap. TDD is about Red, Green, Refactor, not Red, Green, Green,…Green, Big Refactor.
  • Do not treat exceptional cases and the happy path at the same time. First flesh out the happy path, then add the edge cases. This will usually get you to the solution faster.
  • Try to avoid the False Sense of Progress – writing lots of tests that pass quickly without helping you identify the solution. You should write the smallest test that points you in the right direction (i.e. towards the solution).
  • How to test a method that returns void – look for side effects, without breaking encapsulation.
  • Don’t believe the single assert myth. A test should contain a single logical assert. We can have more than one assert statement in a test, but they need to be logically grouped together.
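
Here is a small xUnit sketch of the naming and ordering advice above. The Account class and its members are hypothetical, just enough to show the shape of the test:

using Xunit;

// Hypothetical unit under test.
public class Account
{
    public int Balance { get; private set; }
    public Account(int initialBalance) => Balance = initialBalance;
    public void Deposit(int amount) => Balance += amount;
}

// The test class continues the sentence: "Account should..."
public class AccountShould
{
    [Fact]
    public void Increase_Current_Balance_When_Making_A_Deposit()
    {
        // Given (written last, kept minimal)
        var account = new Account(initialBalance: 100);

        // When
        account.Deposit(50);

        // Then (written first, mirrors the test name)
        Assert.Equal(150, account.Balance);
    }
}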

After that, we focused on the two main styles of TDD: classicist and outside-in. (Sandro also mentioned a more extreme style – TDD as if you meant it. If you want to check it out, have a look at Adrian Bolboaca‘s blog.)

Classicist (Chicago school)

  • This is a good way to test drive an algorithm, data manipulation or conversion, when you know the inputs and outputs, but you don’t know anything about the implementation.
  • The design happens in the Refactor step. Because of this, it can be harder to get to a good design if the unit under test touches many domains (e.g. Payment, Shipping).
  • Use the transformation priority premise to get from Red to Green. This can help you avoid writing test code that duplicates production code.
  • As the tests get more specific, the code gets more generic. So look for ways to move data out of the algorithm.
  • You cannot refactor a switch statement step by step; you need to rewrite the whole thing. So try to avoid switches when test driving an algorithm.
  • Recommended book: Test Driven Development: By Example by Kent Beck

Outside-In (London school)

  • Use this when you have an idea about the implementation and the internals of the unit under test.
  • Use mocks as a design tool. Mocks get a bad name because many people misuse them. They can be a powerful tool when they are used correctly.
  • Most use cases don’t require strict mocking. Some really high-risk apps (for health care, rockets, nuclear plants) might benefit from it.
  • Don’t mock private methods, even if the framework allows it. Even though you would write more tests, it would not lead to a better design.
  • Don’t use Argument.Any when verifying method calls. The arguments are part of the contract, so they should be checked (see the sketch after this list).
  • Recommended book: Growing Object-Oriented Software, Guided by Tests by Steve Freeman and Nat Pryce.
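
A small sketch of the Argument.Any point, using Moq and xUnit. The types are hypothetical; the commented-out verification shows the over-permissive variant to avoid:

using Moq;
using Xunit;

public interface IAccountRepository
{
    void Save(string accountId, int newBalance);
}

// Hypothetical unit under test; it persists the updated balance.
public class AccountService
{
    private readonly IAccountRepository _repository;
    public AccountService(IAccountRepository repository) => _repository = repository;

    // Simplified: a real implementation would load the current balance first.
    public void Deposit(string accountId, int amount) => _repository.Save(accountId, amount);
}

public class DepositShould
{
    [Fact]
    public void Persist_The_Updated_Balance()
    {
        var repository = new Mock<IAccountRepository>();
        var service = new AccountService(repository.Object);

        service.Deposit("alice", amount: 50);

        // Prefer this: the arguments are part of the contract.
        repository.Verify(r => r.Save("alice", 50), Times.Once);

        // Avoid this: it passes even if the wrong balance is saved.
        // repository.Verify(r => r.Save(It.IsAny<string>(), It.IsAny<int>()), Times.Once);
    }
}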

Using Outside-In TDD to implement a business feature

We started the second day with an ATDD exercise. Sandro took this opportunity to talk about Outside-In Design:

Architecture vs. Design 

  • Architecture – These are the systems that are part of the product and the way they interact. Each one should be treated as a black box. Simon Brown‘s container view (part of the C4 model) came to mind.
  • Macro Design – the architecture of each system. This is where you choose MVC, layers, ports and adapters, clean architecture (Simon Brown has an interesting post on the different styles).
  • Micro Design – how classes collaborate and which modules you need.

When practicing Outside-In TDD, it is recommended to think about the application’s architecture and macro design beforehand. Then you can use TDD to drive the micro design. When you start thinking about how to make the first Acceptance Test pass, you’ll need to make lots of design decisions before writing any code.

Test Types

There are a lot of conflicting definitions for test types. What’s important is for your team to know exactly what you mean when you say, for example, Integration Test or Component Test. Sandro briefly described a potential test classification:

  • Acceptance Test – tests a behavior of the system. The entry point is usually the Application Service (from DDD; a Use Case in Clean Architecture or an Action in Interaction-Driven Development). External dependencies (e.g. databases) can be mocked (white box testing) or we could use the real implementation (black box testing).
  • Unit Test – the unit under test is a single class or a small group of classes.
  • Component Test – the unit under test is the Domain Model.
  • Feature Test – the unit under test is the Application Service and the Domain Model.
  • Integration Test – tests classes at the system boundaries (e.g. the SQL implementation of a Repository).
  • User Journey Test – the unit under test is the UI, with the backend mocked.

You start with an Acceptance Test, then move to the other test types, as needed, while mocking collaborators.
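
To make the first category concrete, here is a hedged sketch of an acceptance test whose entry point is the Application Service. All types are hypothetical and deliberately minimal, with the external dependency replaced by an in-memory implementation (the white box flavor):

using System.Collections.Generic;
using Xunit;

// Hypothetical domain model and application service.
public class Account
{
    public string Id { get; }
    public int Balance { get; private set; }
    public Account(string id, int balance) { Id = id; Balance = balance; }
    public void Withdraw(int amount) => Balance -= amount;
    public void Deposit(int amount) => Balance += amount;
}

public class InMemoryAccountRepository
{
    private readonly Dictionary<string, Account> _accounts = new Dictionary<string, Account>();
    public void Add(Account account) => _accounts[account.Id] = account;
    public Account Get(string id) => _accounts[id];
}

public class SendMoneyService
{
    private readonly InMemoryAccountRepository _accounts;
    public SendMoneyService(InMemoryAccountRepository accounts) => _accounts = accounts;

    public void SendMoney(string from, string to, int amount)
    {
        _accounts.Get(from).Withdraw(amount);
        _accounts.Get(to).Deposit(amount);
    }
}

public class SendMoneyShould
{
    [Fact]
    public void Transfer_Money_Between_Two_Accounts()
    {
        // Given: the real domain model, with the external dependency in memory
        var accounts = new InMemoryAccountRepository();
        accounts.Add(new Account("alice", balance: 100));
        accounts.Add(new Account("bob", balance: 0));

        // When: the behavior is exercised through the application service
        new SendMoneyService(accounts).SendMoney("alice", "bob", amount: 40);

        // Then
        Assert.Equal(60, accounts.Get("alice").Balance);
        Assert.Equal(40, accounts.Get("bob").Balance);
    }
}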

Testing and Refactoring Legacy Code

This is the part that really impressed many of us in the audience. I had seen Sandro’s session on Testing and Refactoring Legacy Code in 2013, but I enjoyed seeing it live. This is one of the most useful presentations I’ve seen, because it was immediately applicable to the work I was doing. It also led me to Michael Feathers‘ Working Effectively with Legacy Code. If you’re working with legacy code, you need to read this book. It will help you when you get stuck.

Some tips from the session:

  • Use Dependency Breaking techniques (e.g. Subclass and Override Method) to write tests for legacy code (see the sketch after this list).
  • Test from the shallowest branch, since it contains the fewest dependencies.
  • Refactor from the deepest branch.
  • Use Test Data Builders to make tests more readable.
  • Use Guard Clauses to make the happy path more visible.
  • Use the Balanced Abstraction Principle to make sure that everything in a method is at the same level of abstraction. Public methods should tell a story.
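
Here is a minimal sketch of Subclass and Override Method, one of the dependency-breaking techniques mentioned above. The TripService-style types are hypothetical stand-ins for legacy code that calls a hard-to-test static dependency:

using System.Collections.Generic;

// Hypothetical domain types.
public class User { }
public class Trip { }

// Stands in for a static, hard-to-test dependency (session, singleton, etc.).
public static class UserSession
{
    public static User GetLoggedUser() =>
        throw new System.InvalidOperationException("Not available outside the app");
}

public class TripService
{
    public List<Trip> GetTripsByUser(User user)
    {
        var loggedUser = GetLoggedInUser();
        // ... legacy logic that depends on loggedUser ...
        return new List<Trip>();
    }

    // The seam: the problematic call is isolated in a protected virtual method.
    protected virtual User GetLoggedInUser() => UserSession.GetLoggedUser();
}

// In the tests, subclass and override the seam to take control of the dependency.
public class TestableTripService : TripService
{
    public User StubbedLoggedUser { get; set; }
    protected override User GetLoggedInUser() => StubbedLoggedUser;
}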

Conclusion

As I said, I was already aware of Sandro’s work. Things made sense while reading the blog posts, but they only “clicked” during the course. This is because the course relied on coding exercises, on pairing and on Sandro critiquing our code (which he did a lot!). And we all know that there is no learning without experimentation and playing around.

At the end of the course, my only complaint was that it ended just as we were starting to delve into more advanced topics: design and architecture. Fortunately, there is another course that tackles these subjects – Crafted Design. So hopefully I’ll attend that one soon!

In conclusion, this was the best training I’ve attended. Sandro’s passion and experience were visible from the get-go. The advice was pragmatic. The discussions about the different options he considered while designing also gave us a glimpse into his train of thought. It was great to have the opportunity to learn from a software craftsman. And, as a bonus, we also talked a bit about BDD and DDD, which helped me confirm some of my ideas and see other things in a new light.

So don’t miss the chance to attend this course!

Books, Quality

In the last couple of months I’ve been learning about what information I can extract from a codebase. I’ve written some articles on how to use NDepend to extract a static view of the system’s quality. But this view is based only on the current state of the codebase. What about source code history? What can it tell us? How has the code changed? These are exactly the kind of questions that Adam Tornhill‘s book, Your Code as a Crime Scene: Use Forensic Techniques to Arrest Defects, Bottlenecks, and Bad Design in Your Programs, tries to answer.

Continue Reading

Quality

The Overview Pyramid describes and characterizes the structure of an object-oriented system by looking at three main areas: size and complexity, coupling, and inheritance. This visualization technique was defined by Radu Marinescu and Michele Lanza in their book Object-Oriented Metrics in Practice. In this blog post we’ll see how to compute all the necessary metrics using NDepend.

The Overview Pyramid

Example Overview Pyramid
source: https://commons.wikimedia.org/wiki/File:Overview_pyramid.jpg

The purpose of the Overview Pyramid is to provide a high-level overview of any object-oriented system. It does this by putting in one place some of the most important measurements about a software system. The left part describes the Size and Complexity, the right part describes the System Coupling, and the top part describes the Inheritance usage.
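
As a preview, here is a hedged CQLinq sketch that computes the left-hand (Size and Complexity) inputs. The group-by-constant trick folds all methods into a single result row, and the casts guard against NDepend’s nullable metric types; treat it as an approximation of the queries developed in the post.

from m in JustMyCode.Methods
group m by 1 into methods
let NOM = methods.Count()                                        // number of methods
let LOC = methods.Sum(m => (int?)m.NbLinesOfCode) ?? 0           // lines of code
let CYCLO = methods.Sum(m => (int?)m.CyclomaticComplexity) ?? 0  // total complexity
select new {
  NOM, LOC, CYCLO,
  LOCperNOM = 1.0 * LOC / NOM,     // average method size
  CYCLOperLOC = 1.0 * CYCLO / LOC  // complexity density
}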

Continue Reading

Books, Quality

I have been using static code analyzers for a while now. While these are useful, you need to spend a lot of time analyzing warnings and issues. And the problem is that, after you first run one of these tools on a legacy project, you are overwhelmed by the number of issues. Object-Oriented Metrics in Practice, by Michele Lanza and Radu Marinescu, shows us how to use metrics effectively. It shows how to combine metrics in order to spot design flaws. The book also presents some novel visualization techniques, which are a great way to understand a complex system.

Object-Oriented Metrics in Practice

Continue Reading

Quality

Lines of code, cyclomatic complexity, coupling, cohesion, code coverage. You’ve probably heard about these metrics before. But do you actively track them? Should you? Visual Studio computes some of these metrics out of the box, but if you want to define a custom metric, you’re out of luck. Yet there are plenty of code metrics that you might find useful for your code base. What’s more, a composite metric might be more helpful than the sum of its parts. For example, the C.R.A.P. metric detects complex code that is not covered by unit tests. How can you track such a metric in Visual Studio? In this article we’ll see how to visualize code metrics, add custom metrics and monitor trends with NDepend.
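
For a flavor of what such a composite metric looks like, here is a hedged CQLinq sketch of C.R.A.P., using the published formula CRAP(m) = CC(m)² × (1 − coverage(m))³ + CC(m). It assumes coverage data has been imported into NDepend, and the threshold of 30 is the commonly quoted one:

from m in JustMyCode.Methods
where m.CyclomaticComplexity != null && m.PercentageCoverage != null
let cc = (double)m.CyclomaticComplexity
let uncovered = 1.0 - (double)m.PercentageCoverage / 100
let crap = cc * cc * uncovered * uncovered * uncovered + cc
where crap > 30   // the usual "change risky" threshold
orderby crap descending
select new { m, crap, m.CyclomaticComplexity, m.PercentageCoverage }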

Code Metrics

NDepend computes many metrics out of the box. You can use the IntelliSense support to discover the standard metrics for a given code element:

Computer Code Metrics

But it would be hard to extract information from these metrics if all we got were a bunch of numbers. We need other techniques to help us break down the complexity of the data. Visualization techniques complement metrics by making it easier to synthesize and digest this information.

Continue Reading

Quality

Your code base has a lot to tell you. The question is: how can you listen to it? You can identify code smells when you’re reading code or extending it, but this doesn’t give you an overview. After you have been working on a project for a while, you can name some of its strengths and weaknesses. But this approach takes a long time, relies on experience and is subjective. It would be nice if you could query a code base in a structured way. The basic search functionality of an IDE like Visual Studio isn’t powerful enough. NDepend allows you to query .Net code using LINQ syntax through CQLinq – Code Query LINQ. In this blog post we’ll discuss how to query your code base using CQLinq.

An Example

Since I like to learn from examples, let’s first look at a simple yet very powerful query. The following code detects classes that are candidates to be turned into structures. This is one of the default CQLinq rules.

from t in JustMyCode.Types where
  t.IsClass &&
  !t.IsGeneratedByCompiler &&
  !t.IsStatic &&
  t.SizeOfInst > 0 &&

  // Structure instances must not be too big,
  // else they degrade performance.
  t.SizeOfInst <= 16 &&

  // Must not have children
  t.NbChildren == 0 &&

  // Must not implement interfaces, to avoid boxing mismatches
  // when structures implement interfaces.
  t.InterfacesImplemented.Count() == 0 &&

  // Must derive directly from System.Object
  t.DepthOfDeriveFrom("System.Object".AllowNoMatch()) == 1 &&

  // Structures should be immutable types.
  t.IsImmutable
select new { t, t.SizeOfInst, t.InstanceFields }

As you can see, the query is quite readable. Even if you don’t know the CQLinq syntax, you can understand what it does. And since it’s based on LINQ, it already feels familiar.

Continue Reading