In the previous post (The ‘why’ of writing tests) of this series I talked about my motivation behind creating automated tests and how I learned to appreciate them as a productivity tool. That explained why I write them.
Today I want to share what I cover with tests.
Let me briefly summarize the last article.
Automated tests are best at catching regressions, preventing you from introducing them into the product. Tests are useful for a couple more things, but their long-term value lies in their ability to “freeze” the good state of the code.
I also want to give you an understanding of what we’re dealing with at Base, so that my later opinions about testing land in the right context.
As a company we have a clear goal and to achieve it we need to move fast. When you want to become a leader in a competitive market, you can’t afford to slow down. Catching up will cost too much.
Everybody has the same amount of time. A whole day, each day. If you want to be fast, you simply can’t afford to waste your day. Our idea of not wasting it is thoughtful execution. We could do hundreds of things. But it is essential that we focus only on the most important ones.
So here’s the mindset I aim to keep. When I work on a project, I’m confident this project is the most important thing I can work on at the moment. Otherwise I should be doing something different.
This is my advice on what should be tested: test everything.
You may think I don’t mean it, or I don’t know what I’m talking about, or that it’s impossible to test everything, but, well… I mean exactly that. Test everything. And here is why.
I think that if we decide to add anything to the product, it is important. We have no time to work on anything else. If it’s important, it must work. If it must work, it deserves a test.
The more specific or minor the feature, the more important it is to have a test, since it’s less likely to be covered by a manual regression pass.
The kind of feature that can be left without a test is one that won’t affect users when it breaks. But that also means it wasn’t important and wasn’t needed in the first place. If you feel you can commit it without a test, I think you may have wasted the time you spent writing it.
If we really believe we’re focusing on the right things and building features that impact people’s work, then we should test them. A feature may be a small detail, but its size doesn’t matter when its impact is huge.
For example, let’s consider a tooltip. Imagine a button labeled “Export”. When hovered, a tooltip should appear, explaining what that button does. Adding such a tooltip probably requires one or two trivial lines of code that simply can’t break. Would you add an automated test for that? Probably not.
Today I say that you should. Because it’s not just a tooltip. If you are adding tooltips instead of building that awesome new Thing X that’s next on the roadmap, it means these tooltips are important. Maybe people find the button confusing and complain. Maybe this single tooltip will save the Customer Support team several hours of work and also reduce churn. If you work on it, it’s not just a tooltip. It’s a change that brings value to your customers. Would you add a test now?
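To make this concrete, here is a minimal sketch of what such a test could look like. Everything in it is hypothetical: the `tooltip_text` helper and the tooltip registry are stand-ins, not real application code. In practice you would query the rendered DOM through your UI testing tool of choice, but the idea is the same: one tiny assertion freezes the good state.

```python
# Hypothetical example: a registry mapping button ids to tooltip text.
# In a real app this would come from the rendered UI; a plain dict
# keeps the sketch self-contained.
TOOLTIPS = {
    "export": "Download the current view as a CSV file.",
}


def tooltip_text(button_id: str) -> str:
    """Return the tooltip for a button, or raise if none is configured."""
    if button_id not in TOOLTIPS:
        raise KeyError(f"No tooltip configured for button {button_id!r}")
    return TOOLTIPS[button_id]


def test_export_button_has_tooltip():
    # The "trivial" check: the Export button must keep explaining
    # itself, even after future refactors move the code around.
    text = tooltip_text("export")
    assert text, "Export button lost its tooltip"
    assert "CSV" in text, "Tooltip should mention the export format"


test_export_button_has_tooltip()
```

Two lines of assertions feel disproportionate to two lines of feature code, but that ratio is exactly the point: the cost is tiny, and the tooltip stays frozen in its good state.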
Ok. I confess. I don’t really test everything. There is a big category of things I don’t usually write tests for.
I think you can skip tests of major features.
Sure, it makes sense to write them, for the sake of the many benefits of TDD. But from the regression’s point of view, they’re pretty useless.
Yes. Test tooltips, but don’t waste time testing your login screen. Even though virtually every tutorial for a Selenium-based tool I’ve seen uses the login form as its example, I consider testing it a waste of time.
Major features are major for a reason. They are heavily used. They are remembered by every QA in your company. They are on the list of manual regression tests. They are checked by brief “smoke” tests after deployments.
You don’t need a test to verify if you can login to your application. If somebody breaks that, everybody will notice immediately.
Another reason to test these small details is the long-term quality of your application.
When you build a productivity tool for people, these people have something to do when they interact with your product. When they notice a flaw in a major feature, they’ll tell you about it. They’re paying good money for the product and due to the bug, they can’t do their work. You’ll hear about it very quickly.
It’s very unlikely that you’ll accumulate a lot of critical bugs over time. You’ll be forced to fix them promptly.
But when the same people notice a small annoying glitch that does not prevent them from working, your chances of receiving a bug report about it drop drastically. Reporting a bug, even when it takes just a few seconds, still interrupts them, and many people just learn to ignore the issue.
If you accumulate many small issues, even though they don’t prevent anybody from doing their work, over time your product starts to smell. Using it just doesn’t feel like fun any more. People no longer recommend it to a friend, adoption drops, churn increases. And instead of fixing the bugs, you spend time building analytics tools, trying to understand what’s going on, since nobody complains.
So again, if you implement some “little big detail”, it needs a test. Because your users won’t complain when it breaks.
Being fast for a month is easy. You can ditch many good practices, skip automation, avoid planning, ignore your architecture, and so on. You can really ship a lot in a month, if that’s your only goal.
But staying fast for ten years is a different game. To maintain high speed and constantly deliver a five-star quality product, you need to “slow down” and do everything right. In the long run you save time by not wasting it on things like bugs that could have been avoided.
Do things with the best quality you can; speed will come with practice.