Quality

Testing is an essential part of building a product right, and Continuous Delivery makes this explicit by building quality in. In this blog post we’ll see how teams can start off on the wrong foot with testing. Then we’ll see how asking basic questions like Why, What, How and Where can help you define a sound test strategy in a Continuous Delivery context.

The Deployment Pipeline

Nowadays most teams are thinking about Continuous Delivery: automating the release process, from code merge to production release. How do you do that? By using the deployment pipeline pattern. The deployment pipeline models and automates the release process. Here is an example:

A deployment pipeline example
Architecture, Quality

I think everybody agrees that testing is required in order to build a quality product. But there’s also a lot of confusion about the boundaries of each test type. What’s the scope of a unit test? What’s the difference between an integration test, an integrated test and a contract test? If you ask three developers about test boundaries, you’ll most likely get three different answers. For example, I still talk to people who believe a unit test should cover a single class or method.

What’s clear is that most teams don’t have a consensus on the scope of the different types of automated tests or on the differences between them. Reaching a universal consensus might be hard, but reaching one inside the team should be easy enough. In this blog post we’ll see an example of how to do that.

Process

Have you ever been on a project where, because of a tight schedule or budget, the focus was only on delivering business stories? How did that work out for you? Did you ever manage to pay back all the technical debt you incurred? What about the process debt?

I think that in most cases this is a false economy. We gain a small benefit now (maybe not even that), but we pay a much larger cost in the future. This is because, as Mike Rother says, a process that does not improve degrades over time. For example, if you’re not continuously improving the feedback through the deployment pipeline, your 20-minute test suite will grow into a one-hour test suite. If you’re not constantly fixing brittle tests, people will get used to ignoring them. This is not a people problem; it’s a system problem. It’s much harder to make people do something than it is to put the required controls into the process. As an example, fail the build if the tests take longer than 30 minutes.
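To make that last control concrete, here’s a minimal sketch of one way to encode a time budget at the test-suite level, assuming NUnit (the fixture name and threshold are illustrative, not from the original post; many teams would enforce this in the CI server instead):

```csharp
using System;
using System.Diagnostics;
using NUnit.Framework;

// Hypothetical example: fail the whole run if the suite exceeds its time budget.
[SetUpFixture]
public class SuiteDurationGate
{
    private static readonly Stopwatch Timer = new Stopwatch();

    [OneTimeSetUp]
    public void StartTimer() => Timer.Start();

    [OneTimeTearDown]
    public void FailIfTooSlow()
    {
        Timer.Stop();
        // The 30-minute budget comes from the example above; tune it per pipeline.
        Assert.That(Timer.Elapsed, Is.LessThan(TimeSpan.FromMinutes(30)),
            "Test suite exceeded its time budget – speed up the feedback loop.");
    }
}
```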


Architecture, Quality

We have all used code analysis tools on our projects, and they are useful for identifying some code smells. The issue is that most of them treat metrics in isolation, and isolated metrics can’t tell you whether a design is good or bad. You need more context.

In this blog post we’ll see how to go beyond code smells. We’ll see how to identify design smells and inappropriate coupling in the technical architecture. We’ll define detection strategies for common design smells (like God Class and Feature Envy) and implement them using NDepend. Last but not least, we’ll see how we can define fitness functions that detect dependency violations in our application’s architecture.

Architecture, Clean Code, Quality

Last week was a good week for the IT community in Iasi, thanks to Codecamp – the 2018 autumn edition. One of the masterclasses caught my eye – Crafting Code, by Sandro Mancuso. I have been following Sandro’s work for a while now, so this was a great opportunity to put the theory into practice. This blog post contains some of the things I learned during the training.

This was a two-day, hands-on course focused on TDD: using mocks as a design tool through Outside-In TDD, and working with legacy code. All exercises required pairing, which was a good opportunity to meet and learn from other people.

TDD

The focus of the first day was to learn the basics of TDD. Here are some of the highlights:

  • Think of tests as specifications for the unit under test.
  • How to name a test. Always try to make your code read well in English. If you’re testing an Account class, name the test class AccountShould. Each test then continues from there – e.g. Increase_Current_Balance_When_Making_A_Deposit. This reads nicely, uses terms from the business (the ubiquitous language), and specifies clearly what the test does (see the sketch after this list).
  • The order in which to write the Given, When, Then is important. Start with Then, since this should be obvious from the test name. Then write the When and the Given. Implementing the steps in this order will keep the test focused and ensure we’re not doing too much in the Given step.
  • If the test that you’ve just written goes immediately to Green, then maybe the previous test took too big of a leap. TDD is about Red, Green, Refactor, not Red, Green, Green,…Green, Big Refactor.
  • Do not treat exceptional cases and the happy path at the same time. First flesh out the happy path, then add edge cases. This will usually get you to the solution faster.
  • Try to avoid the False Sense of Progress – writing lots of tests that pass quickly without helping you identify the solution. You should write the smallest test that points you in the right direction (i.e. the solution).
  • How to test a method that returns void – look for side effects without breaking encapsulation.
  • Don’t believe the single-assert myth. A test should contain a single logical assert; it can include more than one assert statement, as long as they are logically grouped together.
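To illustrate the naming convention and the single logical assert, here is a minimal sketch using C# and NUnit (the Account API is hypothetical, invented for the example):

```csharp
using NUnit.Framework;

// Hypothetical Account class, invented for the example.
public class Account
{
    public decimal CurrentBalance { get; private set; }
    public void Deposit(decimal amount) => CurrentBalance += amount;
}

[TestFixture]
public class AccountShould
{
    [Test]
    public void Increase_Current_Balance_When_Making_A_Deposit()
    {
        var account = new Account();   // Given

        account.Deposit(100m);         // When

        // Then – a single logical assert on the observable outcome.
        Assert.That(account.CurrentBalance, Is.EqualTo(100m));
    }
}
```

Read together, the class and method names form the specification: “Account should increase current balance when making a deposit.”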

After that, we focused on the two main styles of TDD: classicist and outside-in. (Sandro also mentioned a more extreme style – TDD as if you meant it. If you want to check it out, have a look at Adrian Bolboaca’s blog.)

Classicist (Chicago school)

  • This is a good way to test drive an algorithm, data manipulation or conversion, when you know the inputs and outputs, but you don’t know anything about the implementation.
  • The design happens in the Refactor step. Because of this, it can be harder to get to a good design if the unit under test touches many domains (e.g. Payment, Shipping).
  • Use the transformation priority premise to get from Red to Green. This can help you avoid writing test code that duplicates production code.
  • As the tests get more specific, the code gets more generic. So look for ways to move data out of the algorithm (see the sketch after this list).
  • You cannot refactor a switch statement step by step; you have to rewrite the whole thing. So try to avoid switches when test-driving an algorithm.
  • Recommended book: Test Driven Development: By Example by Kent Beck.
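A classic illustration of “more specific tests, more generic code” is Roman numeral conversion: as test cases accumulate, the if-chain collapses into a data table that a single loop walks. A minimal sketch (the names are invented for the example, and the table is deliberately partial):

```csharp
using NUnit.Framework;

public static class RomanNumerals
{
    // The data has moved out of the algorithm: each new test case made the
    // code more generic, until only this table and one loop remained.
    private static readonly (int Value, string Symbol)[] Symbols =
    {
        (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")
    };

    public static string Convert(int number)
    {
        var result = "";
        foreach (var (value, symbol) in Symbols)
        {
            while (number >= value)
            {
                result += symbol;
                number -= value;
            }
        }
        return result;
    }
}

[TestFixture]
public class RomanNumeralsShould
{
    [TestCase(1, "I")]
    [TestCase(4, "IV")]
    [TestCase(9, "IX")]
    [TestCase(14, "XIV")]
    public void Convert_Arabic_Numbers_To_Roman(int number, string expected) =>
        Assert.That(RomanNumerals.Convert(number), Is.EqualTo(expected));
}
```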

Outside-In (London school)

  • Use this when you have an idea about the implementation and the internals of the unit under test.
  • Use mocks as a design tool. Mocks get a bad name because many people misuse them. They can be a powerful tool when they are used correctly.
  • Most use cases don’t require strict mocking. Some really high risk apps (for health care, rockets, nuclear plants) might benefit from it.
  • Don’t mock private methods, even if the framework allows it. Even though you would write more tests, it would not lead to a better design.
  • Don’t use Argument.Any when verifying method calls. The arguments are part of the contract, so they should be checked (see the sketch after this list).
  • Recommended book: Growing Object-Oriented Software, Guided by Tests by Steve Freeman and Nat Pryce.
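As a sketch of that verification point, here is how it might look with Moq, where the equivalent of Argument.Any is It.IsAny (the service and notifier are hypothetical, invented for the example):

```csharp
using Moq;
using NUnit.Framework;

// Hypothetical collaborator and service, invented for the example.
public interface IEmailNotifier
{
    void NotifyDeposit(string accountId, decimal amount);
}

public class DepositService
{
    private readonly IEmailNotifier notifier;
    public DepositService(IEmailNotifier notifier) => this.notifier = notifier;

    public void Deposit(string accountId, decimal amount) =>
        notifier.NotifyDeposit(accountId, amount);
}

[TestFixture]
public class DepositServiceShould
{
    [Test]
    public void Notify_With_The_Actual_Account_And_Amount()
    {
        var notifier = new Mock<IEmailNotifier>();
        var service = new DepositService(notifier.Object);

        service.Deposit("ACC-1", 100m);

        // Weak: passes no matter which arguments were used.
        // notifier.Verify(n => n.NotifyDeposit(It.IsAny<string>(), It.IsAny<decimal>()));

        // Better: the arguments are part of the contract, so check them.
        notifier.Verify(n => n.NotifyDeposit("ACC-1", 100m), Times.Once());
    }
}
```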

Using Outside-In TDD to implement a business feature

We started the second day with an ATDD exercise. Sandro took this opportunity to talk about Outside-In Design:

Architecture vs. Design 

  • Architecture – the systems that are part of the product and the way they interact. Each one should be treated as a black box. Simon Brown’s container view (part of the C4 model) came to mind.
  • Macro Design – the architecture of each system. This is where you choose MVC, layers, ports and adapters, or clean architecture (Simon Brown has an interesting post on the different styles).
  • Micro Design – how classes collaborate and what modules you need.

When practicing Outside-In TDD, it is recommended to think about the application’s architecture and macro design beforehand. Then you can use TDD to drive the micro design. When you start thinking about how to make the first Acceptance Test pass, you’ll need to make a lot of design decisions before writing any code.

Test Types

There are a lot of conflicting definitions for test types. What’s important is for your team to know exactly what you mean when you say, for example, Integration Test or Component Test. Sandro briefly described a potential test classification:

  • Acceptance Test – tests a behavior of the system. The entry point is usually the Application Service (from DDD; the Use Case in Clean Architecture, or the Action in Interaction-Driven Development). External dependencies (e.g. databases) can be mocked (white-box testing) or we can use the real implementations (black-box testing).
  • Unit Test – the unit under test is a single class or a small group of classes.
  • Component Test – the unit under test is the Domain Model.
  • Feature Test – the unit under test is the Application Service and the Domain Model.
  • Integration Test – tests classes at the system boundaries (e.g. the SQL implementation of a Repository).
  • User Journey Test – the unit under test is the UI; the backend is mocked.

You start with an Acceptance Test, then move to the other test types as needed, mocking collaborators along the way (a rough sketch follows).
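As a rough sketch of the entry point, an Acceptance Test going through the Application Service might look like this, with the repository mocked (all names are hypothetical, invented for the example; Account is the same toy class sketched earlier):

```csharp
using Moq;
using NUnit.Framework;

// Hypothetical types, invented for the example.
public class Account
{
    public decimal CurrentBalance { get; private set; }
    public void Deposit(decimal amount) => CurrentBalance += amount;
}

public interface IAccountRepository
{
    Account FindById(string id);
    void Save(Account account);
}

// The Application Service: the entry point of the Acceptance Test.
public class DepositUseCase
{
    private readonly IAccountRepository accounts;
    public DepositUseCase(IAccountRepository accounts) => this.accounts = accounts;

    public void Execute(string accountId, decimal amount)
    {
        var account = accounts.FindById(accountId);
        account.Deposit(amount);
        accounts.Save(account);
    }
}

[TestFixture]
public class DepositAcceptanceTest
{
    [Test]
    public void Deposited_Amount_Shows_Up_In_The_Balance()
    {
        var account = new Account();
        var repository = new Mock<IAccountRepository>();
        repository.Setup(r => r.FindById("ACC-1")).Returns(account);
        var useCase = new DepositUseCase(repository.Object);

        useCase.Execute("ACC-1", 100m);

        Assert.That(account.CurrentBalance, Is.EqualTo(100m));
        repository.Verify(r => r.Save(account), Times.Once());
    }
}
```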

Testing and Refactoring Legacy Code

This is the part that really impressed many of us in the audience. I had seen Sandro’s session on Testing and Refactoring Legacy Code back in 2013, but I enjoyed seeing it live. It is one of the most useful presentations I’ve seen, because it was immediately applicable to the work I was doing. It also led me to Michael Feathers’ Working Effectively with Legacy Code. If you’re working with legacy code, you need to read this book. It will help you when you get stuck.

Some tips from the session:

  • Use dependency-breaking techniques (e.g. Subclass and Override Method) in order to write tests for legacy code (see the sketch after this list).
  • Test from the shallowest branch, since it contains the lowest number of dependencies.
  • Refactor from the deepest branch.
  • Use Test Data Builders to make tests more readable.
  • Use Guard Clauses to make the happy path more visible.
  • Use the Balanced Abstraction Principle to make sure that everything in a method is at the same level of abstraction. Public methods should tell a story.
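Here is a minimal sketch of the Subclass and Override Method technique from the first tip (the legacy class is invented for the example). The test subclasses the legacy class and overrides only the method that reaches out to the real dependency:

```csharp
using System;
using NUnit.Framework;

// Hypothetical legacy class with a hard-wired external dependency.
public class InvoiceProcessor
{
    public string Process(decimal amount)
    {
        var rate = FetchExchangeRate();
        return $"Total: {amount * rate}";
    }

    // The seam: making this protected virtual lets a test subclass
    // replace the dependency without touching the rest of the logic.
    protected virtual decimal FetchExchangeRate()
    {
        // Imagine a remote service call here – slow and unreachable in tests.
        throw new InvalidOperationException("No network access in tests");
    }
}

[TestFixture]
public class InvoiceProcessorShould
{
    // Subclass and override: the only thing we change is the troublesome method.
    private class TestableInvoiceProcessor : InvoiceProcessor
    {
        protected override decimal FetchExchangeRate() => 2m;
    }

    [Test]
    public void Apply_The_Exchange_Rate_To_The_Amount()
    {
        var processor = new TestableInvoiceProcessor();

        Assert.That(processor.Process(10m), Is.EqualTo("Total: 20"));
    }
}
```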

Conclusion

As I said, I was already aware of Sandro’s work. Things made sense while reading the blog posts, but they only “clicked” during the course. This is because the course relied on coding exercises, pairing, and Sandro critiquing our code (which he did a lot!). And we all know that there is no learning without experimentation and playing around.

At the end of the course, my only complaint was that it ended just as we were starting to delve deeper into more advanced topics: design and architecture. Fortunately, there is another course that tackles these subjects – Crafted Design. So hopefully I’ll attend that one soon!

In conclusion, this was the best training I’ve attended. Sandro’s passion and experience were visible from the get-go. The advice was pragmatic, and the discussion of the different options he considered while designing gave us a glimpse into his train of thought. It was great to have the opportunity to learn from a software craftsman. And, as a bonus, we also talked a bit about BDD and DDD, which helped me confirm some of my ideas and see other things in a new light.

So don’t miss the chance to attend this course!

Books, Quality

In the last couple of months I’ve been learning about what information I can extract from a codebase. I’ve written some articles on how to use NDepend to extract a static view of the system’s quality. But this view is based only on the current state of the codebase. What about source code history? What can it tell us? How has the code changed? These are exactly the kinds of questions that Adam Tornhill’s book, Your Code as a Crime Scene: Use Forensic Techniques to Arrest Defects, Bottlenecks, and Bad Design in Your Programs, tries to answer.


Books, Quality

I have been using static code analyzers for a while now. While they are useful, you need to spend a lot of time analyzing the warnings and issues they raise. And the problem is that when you first run a static analysis tool on a legacy project, you are overwhelmed by the number of issues. Object-Oriented Metrics in Practice, by Michele Lanza and Radu Marinescu, shows us how to use metrics effectively: how to combine them in order to spot design flaws. The book also presents some novel visualization techniques, which are a great way to understand and visualize a complex system.

Object-Oriented Metrics in Practice


Quality

In a previous blog post we discussed why building the right product is hard, along with some tips on how to achieve high perceived integrity. But if you’re building a strategic solution that should support your business for many years, this is not enough. With time, new requirements get added, features change and team members leave the project. This, together with hard deadlines, means that technical debt starts to accumulate, and the price of adding new features increases until someone says it would be easier to rebuild the whole thing from scratch. That isn’t a situation you’d like to be in, and that’s why it is important to build the product right.

Building the product right

In their book, Mary and Tom Poppendieck define this dimension of quality as the conceptual integrity of a product. Conceptual (internal) integrity means that the system’s central concepts work together as a smooth, cohesive whole.

How can you maintain the conceptual integrity of a product during its lifetime? You rely on communication, short feedback loops, transparency and empowered teams. These are the same principles that can lead to high perceived integrity. The only difference is that you apply them at the architecture and code level.

Quality

If you ask a hundred developers to define software quality, you’ll probably get a hundred different answers. There are many ways to categorize quality, but the one I find most useful is the distinction between building the right product and building the product right.

Building the Right Product

First, we have to make sure we are building the right product. The most performant and secure product, with the cleanest and most extensible architecture, covered by unit tests and acceptance tests, is all in vain if nobody uses it.

In their book, Lean Software Development: An Agile Toolkit, Mary and Tom Poppendieck define this dimension of quality as the perceived integrity of a product. Perceived (external) integrity means the totality of the product achieves a balance of function, usability, reliability, and economy that delights customers.

Traditionally, when customers want to build a product, they talk with business analysts, who write down the requirements. These documents are then handed over to architects, who define the high-level architecture and pass design documents down to programmers, who start implementing. There’s a gap between each step, and as we move through the process we lose more and more information, so our chances of building the right product get slimmer.


Books

Writing good tests is hard. Writing good specifications is even harder. On my current project we treat test code with the same care we treat production code (which should be the norm on all projects), but we could still improve the readability, reliability and maintainability of our test suite.

With this in mind, Fifty Quick Ideas to Improve Your Tests by Gojko Adzic, David Evans and Tom Roden was the perfect choice for our book reading club. I’ve previously read Gojko’s Specification by Example, which really helped me better understand BDD and how to use it in practice, so I had high hopes for this book.

50 quick ideas to improve your tests
