We're building our MVP and have got commitments from our first few customers. I'm still young and haven't managed production software before, only personal projects.
I'm hesitant to sacrifice developer velocity for tests. I believe product agility is a huge asset early on, and I think testing cuts into this.
I'm also afraid of digging an inescapable hole of untestable spaghetti code and tech debt.
I'd love to hear the thoughts of more experienced engineers and founders on this issue! How did you balance it, and what would you have done differently? When should I start testing?
My current idea is to skip testing, and instead put that effort into product development and a robust deployment pipeline with easy roll-backs. Ideally this would allow us to move quickly and revert any big mistakes, without the burden of a complex test suite.
If it's important, we're React+Go, but I'm not looking for stack-specific advice.
Pros
* Established the proper culture from day one. If you end up being successful, this is extremely valuable. Culture is very, very hard to change.
* Eliminated a lot of bugs and regressions.
* Gave us confidence working in a regulated space.
Cons
* Adds a lot of time to initial development.
* Product complexity doesn't really require testing in the early stages.
* Slows down the ability to pivot.
-----
If I were to do it again, I'd write fewer tests, but establish milestones at which changes/additions are required to be tested.
* My rule of thumb would be: If you can't easily test it in the UI, write a test. Or, if it's business critical, write a test.
* Verify authorization and authentication. It's very important.
* Test anything that makes or loses money (for example, Stripe, Recurly, etc.).
* Jest snapshots are quick and easy. Use them.
* Write tests for utility functions (see the sketch just below).
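To make that last bullet concrete, here is a minimal sketch of a table-driven Go test for a utility function. `SlugFromTitle` is invented for the example; substitute whatever small helpers your codebase has:

```go
// utils_test.go: a sketch of a table-driven test for a hypothetical
// utility function SlugFromTitle(string) string.
package utils

import "testing"

func TestSlugFromTitle(t *testing.T) {
	cases := []struct {
		name, in, want string
	}{
		{"lowercases", "Hello World", "hello-world"},
		{"trims spaces", "  spaced  ", "spaced"},
		{"drops symbols", "50% off!", "50-off"},
	}
	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			if got := SlugFromTitle(tc.in); got != tc.want {
				t.Errorf("SlugFromTitle(%q) = %q, want %q", tc.in, got, tc.want)
			}
		})
	}
}
```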
Don't test:
* Database migrations
* UI interactions (unless critical). They take a lot of time to write, and the UI changes frequently.
* Vendor/3rd-party API integrations beyond the surface level. In my experience, quick implementations always involve a bunch of mocking/stubbing, making these types of tests less useful.
Tests are about system integrity. If you risk losing something due to lack of integrity, tests are worth your while.
Think of it like health insurance: When are you comfortable forgoing health insurance, and when are you not?
You don’t have to write thousands of unit tests. But you should absolutely have a basic test suite with integration tests that give you confidence in a new release. And it will make it easier to hire developers in the future.
A good test suite doesn't slow you down at all; in fact, it speeds you up by helping you identify problems rapidly so you can fix code before it becomes a real problem.
When I started my own product for Ruby & JS developers, https://knapsackpro.com, I was doing mostly unit testing, and later I relied on E2E tests of the user dashboard to ensure the happy paths were covered.
It's very easy to introduce weird bugs, and it's much faster to run a CI build to test your app than to manually verify in your browser that everything still works whenever you make changes.
Testing the product with end-to-end tests will stop you from shipping a completely broken product to your customer, where the most damage can occur. Add the test script at the end of the build and alert to a dashboard. “Alert” can mean email, and “dashboard” can mean “PC showing inbox for testalerts@mycompany”. You can get that running immediately. Alerts won’t tell you what broke of course, though you can use version control and bisecting to isolate breakages to commits, which is almost as good.
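As a sketch of what that end-of-build check can look like in Go (the `BASE_URL` variable and the `/healthz` path are assumptions, not a prescription; the "alert" can be your CI emailing testalerts@mycompany on a failed step):

```go
// e2e_smoke_test.go: a post-build smoke check, run as the last build step.
package e2e

import (
	"net/http"
	"os"
	"testing"
	"time"
)

func TestDeployedAppResponds(t *testing.T) {
	base := os.Getenv("BASE_URL") // e.g. https://staging.mycompany.example
	if base == "" {
		t.Skip("BASE_URL not set; skipping smoke test")
	}
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get(base + "/healthz") // hypothetical health endpoint
	if err != nil {
		t.Fatalf("app unreachable: %v", err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		t.Fatalf("expected 200 from /healthz, got %d", resp.StatusCode)
	}
}
```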
When you find bugs, each bug is an opportunity to put some scaffolding around smaller subsystems, in the form of smaller tests that demonstrate the actual bug as well as the fact that it has been fixed. Avoid committing the test without committing the fix. It's a bad habit. You can show how the test used to fail in your commit message.
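A sketch of what such a bug-pinning test can look like; the function, the bug, and the issue number are all invented for illustration:

```go
// Regression test pinned to a specific fixed bug. ProrateCents and the
// bug (zero days used to panic) are hypothetical, for illustration.
package billing

import "testing"

// Regression: ProrateCents used to panic when days == 0 (bug #42).
// Committed together with the fix; the commit message shows the old failure.
func TestProrateCentsZeroDays(t *testing.T) {
	got := ProrateCents(1000, 0, 30)
	if got != 0 {
		t.Errorf("ProrateCents(1000, 0, 30) = %d, want 0", got)
	}
}
```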
You’ll get an idea as to what are the most problematic pieces of your app. If one part is much less reliable than the others, then you can select that subsystem for verification with bottom up unit tests. Don’t spend too much time on each one though — it’s highly likely that in exercising your APIs through unit testing, you’re going to see ways in which they need to be refactored to make more sense and, importantly, become more reliable.
These don't take too much time to write, and they speed up your development significantly, because you can make big changes and be sure that certain flows still work without doing all the manual testing over and over.
How many engineers do you have, and how fast are they likely to write code? It depends more on the product you're building than on the stack: if the tech is the product, spend more time making sure it works as advertised; if the tech merely enables the product, spend less. Any competent engineer can pick up thousands of lines of technical debt/spaghetti code, spend a few afternoons swearing at it, and figure out how to modify it. Technical debt is a problem, but more of a problem when you hit millions of lines and hundreds of engineers and need to be sure you can change stuff without breaking it.
Debt is useful - as long as you can take it on in a calculated way, don't stress about it too much. You might need to pivot and rewrite everything anyway, or at least have your fundamental assumptions changed in such a way that requires a partial rewrite. Tests won't help you much in those scenarios.
I wrote these tests after launching the product publicly and just before support requests / bug fixing period started with the simple aim of avoiding regressions as I work through the improvements.
If I were to give general suggestions on the topic:
- In most cases, having high-level sanity tests is a low-effort, high-value decision.
- Whether to write tests shouldn't be a binary decision. E.g., if you think a specific aspect is going to need refinement work, then tests for those modules can help you immensely.
- Going for a high coverage percentage very early on, for the sake of mental satisfaction or bragging rights, is usually not a good time investment.
My only regret is not having any automated tests for the frontend. Investing in some basic Selenium tests after launch would have been worth it.
Unfortunately, inexperience is going to make it take longer to write tests. That probably won't be what kills your startup though. Most companies die when they try to scale due to a variety of organizational issues, and having a few tests kicking around when you start bringing new developers in will probably help you a bit in this respect.
Only if your code is complex will you need a complex test suite. And if your code is complex, you won't be fast implementing features anyway. TDD will help you keep your code simple.
One thing that has been touched upon is that you are setting the culture: if you're already hesitant to sacrifice velocity for tests, that will probably become the long-term culture, and one day there will be no tests and everyone will be afraid to make any changes. Tests take an up-front investment, but they run multiple times a day, thousands or even tens of thousands of times over the lifetime of a product, and pay it back.
I think striking a balance where you're not overly concerned about coverage but have key end-to-end flows covered can make things more efficient while saving your skin and helping you avoid unhappy situations. I'd say write limited front-end UI tests (these are the greatest time sink) for key flows, but cover as much of your APIs as you can.
Also I’m founder at Tesults (https://www.tesults.com). Check it out. There’s a free plan but I’ll go beyond that and give everyone who posted in this thread one full target free if you email me. So you can report your test results nicely too and keep track of failures without cost. Any direct feedback is appreciated.
I would write tests from the get-go. I would have to look at the git history to be sure, but I believe I started adding to the test suite after about a month of adding code (I did use an existing open source project as the base of the product, and it had a lot of tests written). Your fears about spaghetti code are valid, and tests will help with that. At the least you can refactor with less fear, though with Go that will be less of an issue than it was with Ruby. In addition, they will help prevent regressions. When I (or, far more likely, our users) found a bug in the system, I would write a test before fixing it. As a result, we had very few bugs return.
However, a deployment pipeline is critical too, because otherwise your tests won't get run often enough, especially if they are slow (due to the way I wrote the tests, and the number of them, it took approximately 20 minutes to run through all the tests in our CI environment).
Re: your fears of slowing things down. In my experience a test suite, even a bad one, will speed things up once you are in production. This is because it will prevent regressions and doing manual testing, both of which chew up additional time. I saw this at other companies.
You will want unit tests to verify error handling and logic flow. You will want integration tests or acceptance tests to verify that things work together and can perform the actions your users will take. These pay dividends and remove the need to manually verify that you didn't break anything on a given change.
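For the integration side, a minimal sketch using Go's `net/http/httptest`, assuming a hypothetical `NewRouter()` that builds your app's `http.Handler`:

```go
// An integration-style test: spin up the real handler stack in-process
// and exercise a user-facing action. NewRouter is a stand-in for however
// your app constructs its http.Handler.
package app

import (
	"net/http"
	"net/http/httptest"
	"strings"
	"testing"
)

func TestCreateWidgetEndToEnd(t *testing.T) {
	srv := httptest.NewServer(NewRouter()) // hypothetical constructor
	defer srv.Close()

	resp, err := http.Post(srv.URL+"/widgets", "application/json",
		strings.NewReader(`{"name":"demo"}`))
	if err != nil {
		t.Fatalf("POST /widgets: %v", err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusCreated {
		t.Fatalf("expected 201, got %d", resp.StatusCode)
	}
}
```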
You also don't need to exhaustively test everything. Start with the critical parts. And, like others said, add a test to your suite for every bug you fix, to prevent regressions. A paying customer who reports a bug, sees it fixed, and then sees it come back is not a happy camper.
- Don't spend time writing unit tests until you have paying customers and steady revenue.
I disagree that testing cuts into agility.
I believe that in this case "agility" means that you can change your product in significant ways quickly. Presumably, when you make these changes, you need it to still work. Having tests already in existence will inform you when you've broken something you didn't mean to change. If you don't have tests that can run frequently on your swiftly changing, rapidly pivoting codebase, you will break things and you won't even know until it takes off someone's foot.
If you don't have these tests, you can't rapidly change your product with confidence that it still works, and I suspect that you would value "working" above "not working" very very highly.
If you want agility, you need tests. Lots of automated tests. Testing doesn't cut into your agility; testing gives you agility.
It will email you a stack trace whenever an error occurs and also keeps a log of them which you can order by frequency and so on. It's an extremely useful service.
When you get bigger and have regular users and need stability then tests start to become important so that you don't break existing things when rolling out new features.
Not having tests comes with a risk though. For example, I once broke our registration flow and didn't know about it for a week. Flipside is we hardly had any users registering at that point. So it's probably a good idea to cover a few of your core user journeys. Hope that helps!
For example, code that parses data generally has lots of tests. It's usually easy to write unit tests for parsers, and it's hard to write parsing code correctly. These tests are invaluable and catch lots of errors, especially when you add features to the parser, refactor the code, or optimize it. I would not touch any parsing code that doesn't have tests, because no matter how smart you think you are, you are going to break something every time you make a change.
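To illustrate, a sketch of a parser test in Go; `ParsePriceCents` is a hypothetical parser standing in for whatever your app actually parses:

```go
// Parsers are a sweet spot for unit tests: easy to test, easy to get wrong.
package parse

import "testing"

func TestParsePriceCents(t *testing.T) {
	cases := []struct {
		in      string
		want    int
		wantErr bool
	}{
		{"$1.50", 150, false},
		{"$0.05", 5, false},
		{"1.50", 150, false},
		{"", 0, true},
		{"$1.5.0", 0, true},
	}
	for _, tc := range cases {
		got, err := ParsePriceCents(tc.in)
		if (err != nil) != tc.wantErr {
			t.Errorf("ParsePriceCents(%q) error = %v, wantErr %v", tc.in, err, tc.wantErr)
			continue
		}
		if !tc.wantErr && got != tc.want {
			t.Errorf("ParsePriceCents(%q) = %d, want %d", tc.in, got, tc.want)
		}
	}
}
```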
We also have tests for code that interfaces with 3rd party components. When those components change behavior, we can detect problems early.
We don't test most UI code, since those tests are often difficult to write, need to be updated every time we make changes to the UI, and catch few bugs.
I do not believe that to be the case. I find the biggest value of tests is being able to change the code more quickly. Running tests is MUCH faster than a regression script.
There is a scale where you'll never have to change the code enough for the tests to pay off. Personally, I think that scale is quite small, maybe a week of development work.
However, it takes some experience to get good enough at tests that you feel the benefit. If you're not there, the answer is probably different. But if your startup's scale is non-trivial, I'd bet that the payoff of learning that skill and building some testing framework from the get-go would be worth it.
You will probably find that you need a bunch of tools to help you explore the product, so why not spend some time turning those tools into crude test automation?
Remember that the best test is the test that actually runs; an unwritten test doesn't really help you.
Obviously, if you have a clear understanding of the finished product like in many enterprise applications then testing makes sense.
I would however, strongly recommend it if you are doing nuclear reactor software ;-)
Hence you should write tests only when you want to freeze your design in a specific area, i.e., make sure you got the domain model right, and freeze it via tests, etc.
I found this book very valuable:
If you are worried about tech debt, you should watch "The Art of Destroying Software" (a talk by Greg Young). It will be hard to understand at first.
Here is the TL;DR: write small services that will take only one week of work to rewrite. That way you can burn a service down and recreate it once you truly understand your domain.
Recently I worked on a REST API in Go. I think the Go package system has good support for writing small and independent packages. Just make sure your microservices are micro.
You can do the same for your React codebase.
Unit tests are important and very useful for engineers. As others mentioned, thinking about testability keeps code simple, which is a must when you want to move fast. Testability (have the TDD mindset; if you don't, practice TDD until you think about tests while writing your code) usually translates into readable and maintainable code. When you write good unit tests, most assumptions are documented in an automated way and checked at every execution of the test suite. When a unit test fails, the error message and the test name should tell you precisely which assumption broke, and that ultimately makes you really fast at maintaining and (with a good overall design) changing code.
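A small sketch of that point about names and messages; the `Cart` and `Discount` types are invented for the example:

```go
// When this fails, the test name plus the error text state the broken
// assumption directly, without reading the implementation.
package pricing

import "testing"

func TestDiscountIsCappedAtCartTotal(t *testing.T) {
	cart := Cart{TotalCents: 500}
	d := Discount{Percent: 150} // deliberately out of range
	if got := d.AmountCents(cart); got > cart.TotalCents {
		t.Errorf("AmountCents() = %d, exceeds cart total %d; discounts must be capped at the cart total",
			got, cart.TotalCents)
	}
}
```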
When you want to be really fast, have fewer or no integration tests. That is because integration tests usually span several parts of your software, so a change in any one part will probably force several changes in the corresponding integration tests. That makes it more expensive to change things fast. On top of that, many integration tests are more complex to write because of the work involved in mocking the adjacent parts of the software.
System/black-box tests are very important for evaluating business value. You test the common happy paths of your customers and validate that business value / feature X still works. The process of deciding and prioritizing what to cover will lead to clearer business value and, very importantly, a better shared vision between the developers and the business. The software serves the purpose of being used by a customer; the system tests are ultimately a distilled version of what the business is selling, and an automated process to ensure that this value is delivered.
Even the best software developers make mistakes and forget things. I have not yet seen a team that profits from not writing tests. It takes some time to write good software; testing and approaches like TDD are some of the ways to start climbing that hill.
About your questions:
A robust deployment pipeline is the right way to go, and it can be leveraged even better with tests. Example: you can connect your system tests to the rollback functionality and automatically re-deploy the last working state.
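A rough sketch of that gate in Go; the deploy script, its `--rollback` flag, and the `e2e` test layout are placeholders for whatever your pipeline actually uses:

```go
// Hypothetical post-deploy gate: run the system tests against the new
// release and roll back if they fail. Command names are placeholders.
package main

import (
	"log"
	"os/exec"
)

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	out, err := cmd.CombinedOutput()
	log.Printf("%s %v:\n%s", name, args, out)
	return err
}

func main() {
	// Run the end-to-end suite against the freshly deployed environment.
	if err := run("go", "test", "-tags=e2e", "./e2e/..."); err != nil {
		log.Println("system tests failed, rolling back")
		// Placeholder: re-deploy the last known-good release.
		if err := run("./deploy.sh", "--rollback"); err != nil {
			log.Fatalf("rollback failed: %v", err)
		}
		log.Fatal("deployment rolled back")
	}
	log.Println("deployment verified")
}
```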
You are right to worry about spaghetti code and technical debt; from my experience they are an issue from early on. But also keep in mind that no one is perfect, so even if it's only you writing code, you will find code written by you that gives you goosebumps. A nice approach to code quality is in-place refactoring: you don't do refactoring as a separate task; each time you touch code whose structure or design is unclear or not ideal, you refactor it as part of that task, piece by piece. A rule of thumb: when a 1-day task adds 3 days of refactoring (only fixing the touched code), you know you are sliding down the slope of lower productivity and need to act, and going back up the slope usually takes a lot of discipline in the team (that's an extreme example; personally I act much earlier).
Your current idea has some risks associated with it. Let me expand with an example. Usually a code base changes and grows with new features; rarely does it get smaller. When your code base grows and your team grows, productivity declines (people: communication overhead; technology: an overall bigger, more complex system; probably many more factors). In the first weeks you might deliver very fast without tests, but in the future that is not a given. Let's assume that you and your team are unicorns and will produce perfect code all the time: your productivity still declines naturally from having a bigger system and more people. And in a period of declining productivity it is very unusual to add workload like writing tests. It might be easier in your project/startup, but usually it's hard to add more workload, and the future business will probably be under constant pressure to deliver fast and follow the market. Now back to the real world: you will probably face a more pronounced slowdown than in the unicorn case. Several risks follow from your decision; if you think you can manage them, then try, keep your eyes peeled, and learn from every mistake along the way (btw, that's always good advice for every part of your journey ;) ).
Tip: use a tool like SonarQube (https://www.sonarqube.org/) to assess and measure obvious code quality.
Another point I'd like to share: keep in mind that good tests don't stop all bugs; there will be plenty of bugs even with good tests. Tests help developers/engineers understand code and assumptions, they reduce the number of bugs introduced, they help you make the right design decisions, and much more. You can even write thoroughly unreadable code that gets a very good rating from SonarQube. Maintainability, readability, and changeability of your code are human problems and only slightly technology problems.
Software quality is often equated with tests; if you replace "tests" with "software quality" in your question, this article has some good talking points: https://martinfowler.com/articles/is-quality-worth-cost.html