So, in short, I view tests as a super useful but over-applied tool. I want my tests to deliver high enough value to warrant their ongoing maintenance costs. That means I don't write nearly as many tests as I used to (in my own projects), and far fewer than my peers.
Where I work, tests are practically mandated for everything, and a full CI run takes hours, even when distributed across 20 machines. Anecdotally, I've worked for companies that test super heavily, and I've worked for companies that had no automated tests at all. (They tested manually before releases.) The ratio of production issues across all of my jobs is roughly flat.
This issue tends to trigger people. It's like religion or global warming or any other hot-button issue. It would be interesting to try to come up with some statistical analysis of the cost/benefit of automated tests.
There are a couple of circumstances where I often do, though.
The first is when fixing a bug - writing the (red) regression test first forces me to pin down the exact issue and adds confidence that my test works. Committing the red test and the fix in two separate commits makes the bug and its fix easy to review.
The second is when I'm writing something high risk (particularly from a security standpoint). In this case I want to have a good idea of what I'm building before I start, to make sure I've thought through all the cases, so there's less risk of rewriting all the tests later. There's also more benefit to having a thorough test suite, and I find doing that up front forces me to pin down all of the edge cases and think through all the implications before I get bogged down in the "how" too much.
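A minimal sketch of that red-test-then-fix flow in pytest (the slugify module and the bug itself are made up for illustration):

    # test_slugify.py -- committed first, while it is still red
    from myproject.text import slugify  # hypothetical module under test

    def test_slugify_strips_trailing_punctuation():
        # Reproduces the reported bug: "Hello, world!" came back as
        # "hello-world-" instead of "hello-world".
        assert slugify("Hello, world!") == "hello-world"

The first commit contains only the failing test; the second contains the fix that turns it green, so a reviewer sees the bug and its resolution side by side.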
- The uncomfortable truth for some is that not doing any testing at all can be a perfectly fine trade off, and there are plenty of successful projects that do this.
- Sometimes the statically checked assertions from a strongly typed language are enough.
- Sometimes just integration tests are enough and unit tests aren't likely to catch many bugs.
- For others, going all the way to formal verification makes sense. This has several orders of magnitude higher correctness guarantees along with enormous time costs compared to TDD.
For example, the Linux kernel doesn't use exhaustive unit tests (as far as I know) let alone TDD, and the seL4 kernel has been formally verified, both having been successful in doing what they set out to do.
I notice nobody ever gets looked down on for not going the formal verification route. People need to acknowledge that automated testing takes time, and that time could be spent on something else, so you have to weigh up the benefits. Exhaustive tests aren't free, especially when you know that for your specific project you're unlikely to reap much in the way of benefits long term and you have limited resources.
For example, you're probably (depending on the project) not going to benefit from exhaustive tests for an MVP when you're a solo developer, can keep most of the codebase in your head, the impact of live bugs isn't high, the chance of you building on the code later isn't high and you're likely to drastically change the architecture later.
Are there any statistics on how many developers use TDD? There's a lot of "no" answers in this thread but obviously that's anecdotal.
For new development, no.
I've found that unless I have a solid architecture already (such as in a mature product), I end up massively modifying, or even entirely rewriting most of my tests as development goes on, which is a waste of time. Or even worse, I end up avoiding modifications to the architecture because I dread the amount of test rewrites I'll have to do.
> Can you describe the practical benefit?
Confidence that the code I'm writing does what it's supposed to. With the added benefit that I can easily add more tests if I'm not confident about some behaviors of the feature or easily add a test when a bug shows up.
> Do you happen to rewrite the tests completely while doing the implementation?
Not completely; it depends on how you write your tests. I'm not testing each function individually, I'm testing behaviour, so unless there's a big architectural change or we need to change something drastic, the tests need only minimal changes.
> When does this approach work for you and when did it fail you?
It works better on layered architectures, when you can easily test the business logic independently of the framework/glue code. It has failed me for exploratory work; that's the one scenario where I just prefer to write code and manually test it, since I don't know what I want it to do... yet
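As a rough illustration of the "test behaviour, not individual functions" point above (all names here are hypothetical):

    # Behaviour-level test: exercises the business rule through its public
    # entry point rather than each helper function. Cart, add_item and
    # total_with_discount are invented for this sketch.
    from myshop.cart import Cart

    def test_order_over_100_gets_10_percent_discount():
        cart = Cart()
        cart.add_item("keyboard", price=80)
        cart.add_item("mouse", price=40)
        assert cart.total_with_discount() == 108  # 120 minus 10%

    # If the discount calculation is later split across different helper
    # functions, this test doesn't have to change.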
Most of programming happens in the exploration phase. That's the real problem solving. You're just trying things and seeing if some api gives you what you want or works as you might expect. You have no idea which functions to call or what classes to use, etc.
If you write the tests before you do the exploration, you're saying you know what you're going to find in that exploration.
Nobody knows the future. You can waste a crazy amount of time pretending you do.
Tried it, ended up with too many tests. Quelle surprise. There is a time/money/cognitive cost to writing all those tests, they bring some benefit but usually not enough to cover the costs.
I'm also going off the 'architect everything into a million pieces to make unit testing "easier"' approach.
I heard someone say that if you write a test and it never fails, you've wasted your time. I think that's quite an interesting viewpoint.
Reminded of:
"Do programmers have any specific superstitions?"
"Yeah, but we call them best practices."
I also believe that 100% test coverage (or numbers close to that) just isn't a useful goal, and is counterproductive from a maintenance perspective: test code is still code that has to be maintained in and of itself, and if it tests code that has a low risk of errors (or code where, if there are errors, those errors will bubble up to be caught by other kinds of testing), the ROI is too low for me.
After I've settled on interfaces and module boundaries, with a plausibly-working implementation, I'll start writing tests for the code with the highest risk of errors, and then work my way down as time permits. If I need to make large changes in code that doesn't yet have test coverage, and I'm worried about those changes causing regressions, I'll write some tests before making the changes.
TDD done the way many developers do is a PITA though. When I write a test it will start off life with zero mocking. I'll hit the db and live APIs. From here I'm iterating on making it work. I only introduce mocking/factories because it's harder work not to. I'll gradually add assertions as I get an idea about what behaviour I want to pin down.
Done this way, using tests just makes life easier: you can start off testing huge chunks of code if that's what you're sketching out, then add more focused tests if that's a faster way to iterate on a particular piece. For me the process is all about faster feedback and getting the computer to automate as much of my workflow as possible.
edit: Kent Beck had a fantastic video series about working this way, I can only find the first 10 mins now unfortunately but it gives you a taste, https://www.youtube.com/watch?v=VVSSga1Olt8.
Can you describe the practical benefit? Say a change is made to one section of the (enterprise-level) application and you missed addressing an associated section. This is easily identified, as your test will FAIL. As the number of features increases, the complexity of the application increases. Tests guide you. They help you ship faster, as you don't need to manually test the whole application again. In manual testing there's a chance of missing a few cases; if it's automated, such cases are all executed. Moreover, in TDD you only write the code necessary to complete the feature. Personally, tests act as a (guided) document for the application.
Do you happen to rewrite the tests completely while doing the implementation? Yes, if the current tests don't align with the requirements.
When does this approach work for you and when did it fail you? WORK - I wouldn't call it a silver bullet, but I am really grateful/happy to be a developer following TDD. As the codebase grows and new developers are brought in, tests are one of the things that help me ship software. NOT WORK - for a simple contact-only form (i.e. a fixed requirement with a name, email, textarea field and an upload file option), I'd rather test it manually than spend time writing tests.
If you write your test after making the code changes, it's easier to have a bug in your test that makes it pass for the wrong reasons. By writing the test first, and progressively, you can be sure that each thing it asserts fails properly if the new code you write doesn't do what is expected.
Sometimes I do write the code first, and then I just stash it and run the new tests to be sure the test fails correctly. Writing the test first is simply a quicker way to accomplish this.
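A small sketch of how a test written after the fact can pass for the wrong reason, versus one that was watched fail first (make_order and apply_discount are hypothetical):

    from orders import make_order, apply_discount  # hypothetical module

    # Written after the code, this test can pass for the wrong reason:
    # the assertion checks that the order object is truthy, not that the
    # discount was actually applied.
    def test_discount_applied_weak():
        order = make_order(total=100)
        apply_discount(order, percent=10)
        assert order  # passes even if apply_discount does nothing

    # Written first and watched fail (red), the assertion has to pin down
    # the behaviour before any production code exists.
    def test_discount_applied():
        order = make_order(total=100)
        apply_discount(order, percent=10)
        assert order.total == 90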
Like others have said, when there is a lot of new code - new architectural concerns etc - it's not really worth it to write tests until you've sketched things out well enough to know you aren't likely to have major API changes. Still, there is another benefit to writing the tests - or at least defining the specs early on - which is that you are not as likely to forget testing a particular invariant. If you've at least got a test file open and can write a description of what the test will be, that can save you from missing an invariant.
Think of tests as insurance that someone working on the code later (including yourself, in the future) doesn't break an invariant because they do not know what they all are. Your tests both state that the invariant is intentional and necessary, and ensure it is not broken.
> Can you describe the practical benefit?
For a test case that reproduces a bug: you might have found the bug manually, and getting that manual process into a test case is often a chore, but in doing so you'll better understand how the system with the bug failed. Did it call collaborators wrong? Did something unexpected get returned? Etc. In those cases, I think the benefit really is a better understanding of the system.
> Do you happen to rewrite the tests completely while doing the implementation?
A TDD practitioner will probably tell you that you're doing it wrong if you do this. You write the minimum viable test that fails first. It might be something simple like "does the function/method exist". You add to your tests just in time to make the change in the real code.
Then as the project evolves, I start adding more high level tests to avoid regressions.
I prefer high level testing of products, they're more useful since you can use them for monitoring as well, if you do it right. I work with typed languages so there's little value in unit tests in most cases.
Sometimes I'll write a test suite "first", but then again only once I have at least written up a client to exercise the system. Which implies I probably decided to stabilize the API at that point.
Like others have said, tests often turn into a huge burden when you're trying to iterate on designs, so early tests tend to cause worse designs in my opinion, since they discourage architectural iterations.
Also, always before a refactor. Documenting all the existing states, inputs and outputs means I can refactor ruthlessly, seeing as soon as I break something.
Tests are also great documentation for how I intend my api to be used. A bunch of examples with input, output, and all the possible exceptions. The first thing I look for when trying to understand a code base are the tests.
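A sketch of that documentation style with a made-up rate limiter: the input, the expected output, and the exception a caller should plan for (all names are hypothetical):

    import pytest
    from myproject.ratelimit import RateLimiter, RateLimitExceeded  # hypothetical

    # Reads like usage documentation: construction, normal calls, and the
    # exception a caller should expect when the limit is hit.
    def test_allows_up_to_the_configured_number_of_calls():
        limiter = RateLimiter(max_calls=2)
        limiter.acquire()
        limiter.acquire()
        with pytest.raises(RateLimitExceeded):
            limiter.acquire()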
When do I not write tests? When I'm in the flow and want to continue cranking out code, especially code that is rapidly changing because as I write I'm re-thinking the solution. Tests will come shortly after I am happy with a first prototype in this case. And they will often inform me what I got wrong in terms of how I would like my api consumed.
When did it fail me? There are cases when it's really difficult to write tests. For example, Jest uses jsdom, which as an emulator has limitations. Sometimes it is worth it to work around these limitations, sometimes not.
Sometimes a dependency is very difficult to mock. And so it's not worth the effort to write the test.
Tests add value, but like anything that adds value, there is a cost and you have to sometimes take a step back and decide how much value you'll get and when the costs have exceeded the value and it's time to abandon that tool.
Once, I started with tests, but I had to rip up a lot along the way.
It is helpful to ensure testability early on. It might be easier for some devs to figure it out by actually coding up some tests early.
I won’t argue against anyone who is actually productive using hard-core TDD.
All that being said, I haven’t spent much time on teams with a particularly large group of people working in one project. I think the most has been 4 in one service. The more people working in a code base, the more utility you get from TDD, I believe. It’s just tough to have a solid grasp on everything when it changes rapidly.
Writing a test for something like an MP3 ID tag parser is a good case for TDD with unit tests. It's pretty clear what the interface is, you just need to get the right answer, and you end up with a true unit test.
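Something like this, assuming a hypothetical parse_id3v1 function and the fixed-width ID3v1 tag layout:

    from id3 import parse_id3v1  # hypothetical parser under test

    def test_parses_title_and_artist_from_id3v1_tag():
        # ID3v1: the last 128 bytes of the file, a "TAG" marker followed
        # by fixed-width, null-padded fields.
        tag = (
            b"TAG"
            + b"So What".ljust(30, b"\x00")       # title, 30 bytes
            + b"Miles Davis".ljust(30, b"\x00")   # artist, 30 bytes
            + b"Kind of Blue".ljust(30, b"\x00")  # album, 30 bytes
            + b"1959"                             # year, 4 bytes
            + b"\x00" * 30                        # comment, 30 bytes
            + b"\x08"                             # genre byte (8 = Jazz)
        )
        result = parse_id3v1(tag)
        assert result["title"] == "So What"
        assert result["artist"] == "Miles Davis"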
Doing TDD with a large new greenfield project is harder. Unless you have a track record of getting architecture right first time, individual tests will have to be rewritten as you rethink your model, which wastes a lot of energy. Far better is to test right at the outermost boundary of your code that isn't in question: for example a command line invocation of your tool doing some real world example. These typically turn into integration or end to end tests.
I tend to then let unit tests appear in stable (stable as in the design has settled) code as they are needed. For example, a bug report would result in a unit test to exhibit the bug and to put a fixed point on neighboring code, and then in the same commit you can fix the bug. Now you have a unit test too.
One important point to add is that while I reserve the right to claim to be quite good at some parts of my career, I’m kind of a mediocre software engineer, and I think I’m ok with that. The times in my career when I’ve really gotten myself in a bind have been where I’ve assumed my initial design was the right one and built my architecture up piece by piece — with included unit tests — only to find that once I’d built and climbed my monumental construction, I realized all I really needed was a quick wooden ladder to get up to the next level which itself is loaded with all kinds of new problems I hadn’t even thought of.
If you solve each level of a problem by building a beautiful polished work of art at each stage you risk having to throw it away if you made a wrong assumption, and at best, waste a lot of time.
Don’t overthink things. Get something working first. If you need a test to drive that process so be it, but that doesn’t mean it needs to be anything fancy or commit worthy.
While doing this I also found one more benefit, at least for my use case. The backend for user login was simple when I started, but it started growing in a few weeks. Writing test cases saved me from manually logging in with each use case, testing some functionality, then logging out and repeating with other use cases.
Not sure if it is a practical benefit or not, but writing test cases initially also helped me rewrite the way I was configuring Redis for a custom module so that the module can be tested better.
My only issue is that it takes time, and selling this to higher-ups was kind of difficult.
Here are cases where I've genuinely found it valuable and enjoyable to write tests ahead of time:
Some things are difficult to test. I've had things that involve a ton of setup, or a configuration with an external system. With tests you can automate that setup and run through a scenario. You can mock external systems. This gives you a way of setting up a scaffold into which your implementation will fall.
Things that involve time are also great for setting up test cases. Imagine some functionality where you do something, and need 3 weeks to pass before something else happens. Testing that by hand is effectively impossible. With test tools, you can fake the passing of time and have confirmation that your code is working well.
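A sketch of the time case using an injected clock instead of the system time (TrialSubscription and FakeClock are invented for this example; libraries like freezegun can do the time-faking for you):

    import datetime as dt

    class FakeClock:
        """Test double for time: starts at a fixed point and can be advanced at will."""
        def __init__(self, start):
            self.current = start
        def now(self):
            return self.current
        def advance(self, delta):
            self.current += delta

    # Hypothetical domain object that takes a clock dependency instead of
    # calling datetime.now() directly, so tests can control time.
    class TrialSubscription:
        TRIAL_LENGTH = dt.timedelta(weeks=3)
        def __init__(self, clock):
            self.clock = clock
            self.started = clock.now()
        def is_expired(self):
            return self.clock.now() - self.started >= self.TRIAL_LENGTH

    def test_trial_expires_after_three_weeks():
        clock = FakeClock(dt.datetime(2024, 1, 1))
        sub = TrialSubscription(clock)
        assert not sub.is_expired()
        clock.advance(dt.timedelta(weeks=3))
        assert sub.is_expired()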
Think about when you are writing some functionality that requires some involved logic, and UIs. It makes sense to implement the logic first. But how do you even invoke it without a UI? Write a test case! You can debug it through test runs without needing to invest time in writing a UI.
Bugs! Something esoteric breaks. I often write a test case named test_this_and_that__jira2987 where 2987 is the ticket number where the issue came up. I write up a test case replicating the bug with only the essential conditions. Fixing it is a lot more enjoyable than trying to walk through the replication script by hand. Additionally, it results in a good regression test that makes sure my team does not reintroduce the bug again.
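A sketch of what such a ticket-named regression test might look like (compute_invoice and the bug itself are invented for illustration):

    from billing import compute_invoice  # hypothetical module under test

    def test_zero_quantity_lines_are_ignored__jira2987():
        # Replicates the ticket with only the essential conditions: an
        # invoice containing a zero-quantity line used to blow up while
        # computing the average unit price.
        invoice = compute_invoice([
            {"sku": "A1", "qty": 2, "unit_price": 10.0},
            {"sku": "B2", "qty": 0, "unit_price": 99.0},  # the trigger
        ])
        assert invoice.total == 20.0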
I once had to write an integration for a "soap" web service that was... Special. Apparently it was implemented in php (judging by the url), by hand (judging by the.. "special" features) - and likely born as a refactor of a back-end for a flash app (judging by the fact that they had a flash app).
By trial and error (and with help of the extensive, if not entirely accurate, documentation) via soapui and curl - I discovered that it expected the soap xml message inside a comment inside an xml soap message (which is interesting as there are some characters that are illegal inside xml comments... and apparently they did parse these nested messages with a real xml library, I'm guessing libxml). I also discovered that the API was sensitive to the order of elements in the inner xml message.
Thankfully I managed to conjure up some valid post bodies (along with the crazy replies the service provided, needed to test an entire "dialog") - and could test against these - as I had to implement half of a broken soap library on top of an xml library and raw post/get due to the quirks.
At any rate, I don't think I'd ever have got that done/working if I couldn't do tests first.
Obviously the proper fix would've been to send a tactical team to hunt down the original developers and just say no to the client...
I frequently redesign/rewrite an implementation a few times before committing it, often changing observable behaviors, all of which will change what the tests need to look like to ensure proper coverage. Some code is intrinsically and unavoidably non-modular. Tests are dependent code that need to be scoped to the implementation details. Unless you are writing simple CRUD apps, the design of the implementation is unlikely to be sufficiently well specified upfront to write tests before the code itself. Writing detailed tests first would be making assumptions that aren't actually true in many cases.
I also write thorough tests for private interfaces, not just public interfaces. This is often the only practical way to get proper test coverage of all observable behaviors, and requires far less test code for equivalent coverage. I don't grok the mindset that only public interfaces should be tested if the objective is code quality.
When practical, I also write fuzzers or exhaustive tests for code components as part of the test writing process. You don't run these all the time since they are very slow, but they are useful for qualifying a release.
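A sketch of both flavours in Python, using the hypothesis library for the fuzzing half (slugify is a hypothetical function under test, and "slow" is a custom marker):

    import pytest
    from hypothesis import given, strategies as st
    from myproject.text import slugify  # hypothetical function under test

    # Exhaustive check over a small, enumerable input space.
    @pytest.mark.slow  # custom marker, selected only when qualifying a release
    def test_single_characters_never_crash():
        for codepoint in range(0x2500):
            slugify(chr(codepoint))  # must not raise

    # Property-based fuzzing: hypothesis generates arbitrary strings and
    # shrinks any failing input to a minimal example.
    @given(st.text())
    def test_slugify_is_idempotent(s):
        assert slugify(slugify(s)) == slugify(s)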
"The point of writing tests is to know when you are done. You don't have to write failing tests first if you are just trying to figure out how to implement something or even fix something. You must write a failing test before you change prod code. How do you do square this seeming circle?
- Figure out what you need to do
- Write tests
- Take your code out and add it back in chunks until your tests pass
- Got code left over? You need to write more tests or you have code you don't need
Without the tests, you cannot know when you are done. The point of the failing test is that it is the proof that your code does not do what you need it to do.
Writing tests doesn't have to slow down the software development process. There are patterns for various domains of code (e.g., controller layer, service layer, DAO layer). To do testing efficiently, you need to learn the patterns. Then when you need to write a new test, you identify and follow the pattern.
You also need to use the proper tools. If you're using Java or Kotlin, then you MUST use PIT (http://pitest.org). It is a game changer for showing you what parts of your code are untested."
- Steven, Senior Software Engineer on our team
If I'm working with well-known tools and a problem I understand reasonably well, I'll approach it in ultra-strict test-first style, where my "red" is, "code won't compile because I haven't even defined the thing I'm trying to test yet". It might sound a step too far but I find starting by thinking about how consumers will call and interact with this thing results in code that's easier to integrate.
However, if I'm using tools I don't know well, or a problem I'm not sure about, I much prefer the "iterate and stabilise" approach. For me this involves diving in, writing something scrappy to figure out how things work, deciding what I don't like about what I did, then starting again 2 or 3 times until I feel like I understand the tools and the problem. The first version will often be a mess of printf debugging and hard-coded everything, but after a couple of iterations I'm usually getting to a clean and testable core approach. At that point I'll get a sensible set of tests together for what I've created and flip back to the first mode.
Once I have those basic tests passing, I will often write a couple more tests for less common but still important execution paths. It's ok if these take a little longer, but only a little.
Beyond the obvious 'test driven' benefits, I find that, especially for the first round, writing those tests helps me solidify what I'm trying to accomplish.
This is often useful even in cases where I go in feeling quite confident about the approach, but there are some blind spots that are revealed even with the first level, most simple tests.
I find the basic complaints that others have posted here about pre-writing tests largely valid. "Over-writing" tests too early on is, for me, often a waste of time. It works best when the very early tests can be written quickly and simply.
And if they can't be, then I'll frequently take a step back and see if I'm coming at the problem from a poor direction.
When starting a new module / class I put a skeleton first, to establish an initial interface. Then I change it as I find, while writing tests, how it can be improved.
When dealing with bugs - red / green is incredibly helpful with pinpointing the conditions and pointing exactly where the fault lies.
When introducing new functionality I do most of the development as tests, only double-checking that it integrates once before committing.
Going test first pushes your code towards statelessness and immutability, nudging towards small, static blocks. As most of my work is with data, I find it to be a considerable advantage.
It provides little advantage if you already rely heavily on a well established framework that you need to hook to (e.g. testing if your repos provide right data in Spring or if Spark MRs your data correctly).
I tend to change/refactor a lot to minimise the maintenance effort in the long run. I would spend most of the time testing by hand after each iteration if not for the suite I could trust at least to some extent.
If I'm writing something where I know what the API to use it should be and the requirements are understood, yes, I'll start with tests first. This is often the case for things like utility classes: my motivation in writing the class is to scratch an itch of "wouldn't it be nice if I had X" while working on something unrelated. I know what X would look like in use because my ability to imagine and describe it is why I want it.
There are times, however, where I'm not quite sure what I want or how I want to do it, and I start by reading existing code (code that I'm either going to modify or integrate with) and if something jumps out at me as a plausible design I may jump in to evaluate how I'd feel about the approach first-hand.
In short, the more doubts or uncertainty I have about the approach to a problem (in the case of libraries, this means the API) the longer I'll defer writing tests.
Of course, there are always exceptions. if you have software that is highly complex but the outputs are very simple and easy to measure then it might actually be a good idea.
With the kind of software I mostly write these days, I'm fortunate to be able to incrementally develop my code and test it under real-world conditions or a subset thereof.
So my approach is exploratory coding -- I start with minimum workable implementations, make sure they work as needed, and then add more functionality, with further testing at each step.
The upside is that I don't have to write "strange" code to accommodate testing. The downside is that I'm forced to plan code growth with steps that take me from one testable partial-product to the next. A more serious downside, one I'm very aware of, is that not every project is amenable to this approach.
I see two ways to use tests:
1. As a security system for your code
2. As a tool for thought, prompting the application of inverse problem solving
Both of these have costs and benefits. If you consider the metaphor of the security system, you could secure your house at every entry point, purchase motion sensors, heat detectors, a body guard, etc. etc. If you're Jeff Bezos maybe all of that makes sense. If you're a normal person it's prohibitively expensive and probably provides value that is nowhere near proportional to its cost. You also have to be aware that there is no such thing as perfect security. You could buy the most expensive system on earth and something still might get through. So security is about risk, probability, tradeoffs, and intelligent investment. It's never going to be perfect.
Inverse thinking is an incredibly powerful tool for problem solving, but it's not always necessary or useful. I do think if you haven't practiced something like TDD, it's great to start by over applying it so that you can get in the habit, see the benefit, and then slowly scale back as you better understand the method's pros and cons.
At the end of the day, any practice or discipline should be tied to values. If you don't know WHY you're doing it and what you're getting out of it, then why are you doing it at all? Maybe as an exploratory exercise, but beyond the learning phase you should only do it if you understand why you're doing it.
1)
Working as a part of an enterprise team on a big lump of TypeScript and React? Then you probably don't write tests before you code because a) TypeScript catches all the bugs amirite? and b) Your test runner is probably too hairy to craft any tests by hand, and c) You are probably autogenerating tests _after_ writing your code, based on the output of the code you just wrote, code which may or may not actually work as intended.
2)
Working on an npm module that has pretty tightly defined behaviour and the potential to attract random updates from random people? Then you _need_ to write at least some tests ahead of time because it is the only practical way to enshrine that module's behavior. You need a way to ensure that small changes/improvements under the hood don't alter the module's API. This means less work for you in the long run, and since you are a sensible human being and therefore lazy, you will write the tests before you write the code.
I do for some projects. For example currently I'm working on a project that has a high test coverage and most bugs and enhancements start first as a test and then they're implemented in code. TDD makes sense when the test code is simpler to write than the implementation code.
> Can you describe the practical benefit?
It may take some time to write the initial tests, but as I'm working with some legacy enterprise tech, serializing all the inputs and testing against that is a lot faster than testing and re-testing everything on real integration servers every commit.
Tests provide you with a safety net when you do refactors or new features so that the existing stuff is not broken.
> Do you happen to rewrite the tests completely while doing the implementation?
Yeah, I do. There are two forces at play - one of them pushes toward tests that cover more stuff in a black-box manner; they won't break as often when you're switching code inside your black box. On the other hand, if you've got finer-grained tests, when they break it's obvious which part of the code is failing.
> When does this approach work for you and when did it fail you?
It works for projects that are hard to test any other way (we've got QA but I want to give them stuff that's unlikely to have bugs) and for keeping regressions at bay. It did fail me when I didn't have the necessary coverage (not all cases were tested and the bug was in the untested branch).
I also wouldn't bother to test (TDD) scratch work or stuff that's clearly not on the critical path (helper tools, etc.), but for enterprise projects I tend to cover as much as possible (that sometimes involves writing elaborate test suites), as working on the same bugs over and over is just too much for my business.
With a basic understanding of the problem and the expected solution, I start off directly with prototype code - basically creating a barely working prototype to explore possible solutions.
When I'm convinced that I'm on the right track (design-wise), I start adding more functionality.
When I'm at a stage where the solution is passable - I then start writing tests for it. I spend some time working through tests and identifying issues with my solution.
Then I fix the solution. And clean it up.
At this point my test cases should cover most (if not all) of my problem statement, edge cases and expected failures.
When it comes to maintaining the solution, I do start with test cases though. Usually just to ensure that my understanding of the bug or issues is correct. With the expected failure tests, I then work on the fix. And write up any other test cases needed to cover the work.
I am still trying to figure out the best way to do unit testing with embedded C (working with Unity right now), but with Python development I try to write unit tests only for more tricky code.
Benefits--not pushing logic defects gives me more time to invest in other important stuff; I end up with tests that document all the intended behaviors of the stuff I'm working on (saves gobs of time otherwise blown trying to understand what code does so I can change it safely); I'm able to give a lot of attention to ensuring the design stays clean. Plus, it's enjoyable most of the time.
"They incur technical debt and bugs at the same rate as other code." Not at all true.
For the latter, it's when I'm exploring a codebase or an API, writing a spike script just to see how things work and what kinds of values get returned, for example. Many times I'll turn the spike into a test of some sort, but a lot of times I just toss it when I'm through.
For the former, yes, I generally write tests before implementation, though I'm not religious about it. I'm just lazy. I'm going to have to test the code I write somehow, whether that's by checking the output in a repl or looking at (for example) its output on the command line. Why you wouldn't want to capture that effort into a reproducible form is beyond me. (And if you're one of those people who just writes up something and throws it into production, I really hope we don't end up on a team together!) I generally just write the test with the code I wish I had, make it pass somehow, rinse, repeat. It's not rocket science. It's just a good scaffold to write my implementation against.
That said, I don't usually keep every test I write. As others have noted, that code becomes a fixed point that you have to deal with in order to evolve your code, and over time it can become counterproductive to keep fixing it when you change your implementation slightly. So the stuff I keep generally has one of three qualities:
- it documents and tests a public interface for the API, a contract it makes with client code
- it tests edge cases and/or bugs that represent regressions
- it tests particularly pathological implementations or algorithms that are highly sensitive to change.
Honestly, I feel like people who get religious about TDD are doing it wrong, but people who never do TDD (i.e. writing a test first) are also doing it wrong in a different way. There's nothing wrong with test-soon per se, but if you're never dipping into documenting your intended use with a test before you start working on the implementation itself, you're really just coding reactively instead of planning it, and it would not surprise me to hear lots of complaints in your office about hard-to-use APIs.
If I'm doing something that is pretty well defined and essentially functional, where I know the inputs and outputs, I'll sometimes do the TDD loop. It can be good for smoking out edge cases; although unless you start drifting into brute force fuzzing or property-based testing you still have to have the intuitions about what kind of tests would highlight bugs.
I'm more a "Make It Work, Make It Beautiful, Make It Fast" person and don't see it working by writing unit test first.
Once I am a bit more comfortable with the code and have a better understanding of what I need, I will start writing some tests. I usually don't write too many early on, this way I don't have to go back halfway through development and change all my tests. Only when I'm confident enough with the code do I start writing extensive tests and try to cover all cases.
For outright new code (think new objects, new API, and so on), I tend not to write the tests first because they become a cognitive load that affects my early design choices. In other words, I am now writing code to make the tests pass, and have to exert effort not to do that.
The major thing is that the tests become a boundary of sorts which enables you to do a lot more than if you didn't have it. It can also be done horribly wrong, which was the reason why I stopped using it.
I see it as a tool to see how good your code and abstractions are. Large tests => leaky abstraction. Many details (Mocks/stubs) in the tests => leaky abstractions.
Also it reminded me that sometimes I'm trying to satisfy the language instead of just solving the problem. As soon as you are trying to satisfy your language, code style/principles or architecture, you are trying to solve something that has nothing to do with the problem, which just causes the code to be designed wrong - or it's a sign that I should move it somewhere else. Though if I need to tweak the code to make it more testable, I always do that.
I also have a rule: never test data, only test functionality. This has worked very well over the years, creating pretty clean code and clean tests and, I believe, fewer bugs, though it's hard to be sure. My perception is that during the periods when I switched between the practices, the TDD code had fewer bugs and I could confirm them faster than in the code which had no tests. Also, the code produced with TDD was a lot easier to write new tests for, whereas the non-TDD code was really hard to write tests for when I wanted to, for example, confirm a bug or a feature.
I write tests at the earliest point I feel appropriate - but rarely before I actually write code. I tend to work on greenfield projects, so writing tests before I write code rarely makes sense.
IMO, TDD only makes sense if you already know what you're going to write. This makes a lot of sense if you're working on a brownfield project or following predictable patterns (for example, adding a method to a Rails controller).
If I'm doing actual new development, as I code, I tend to write a lot of pending tests describing the situations I need to test. However, I don't typically implement those tests until after.
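One way to keep those pending tests around as an explicit checklist, sketched with pytest's skip marker (the export feature named here is hypothetical):

    import pytest

    # Pending tests written while coding: each one names a situation that
    # still needs coverage for a hypothetical export_report feature.
    @pytest.mark.skip(reason="pending: empty dataset not handled yet")
    def test_export_handles_empty_dataset():
        ...

    @pytest.mark.skip(reason="pending: waiting on upstream error format")
    def test_export_surfaces_upstream_timeout_as_warning():
        ...

    # `pytest -rs` lists skipped tests with their reasons, so the suite
    # doubles as a to-do list that can't quietly be forgotten.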
One of the biggest factors for me is so much of my code deals with handling some degree of unknown - what the client will need, exact how an API works, how errors/invalidations are handled, unexpected refactoring, etc.
In this case, it doesn't make sense to create tests before I write the underlying code. Most tests will have mocks/stubs/simulations that make assumptions about how the code works. At that point, a pre-written test is no better than code, since it's just as likely to contain errors.
I'd much rather do real-time debugging/interacting while developing, then capture the exact interactions of outside systems.
> Can you describe the practical benefit?
Testing first helps me clarify my intentions, then implement a realisation of those intentions through code. Testable code has the side effect of being well modularized, free from hidden dependencies, and SOLID.
And it's also about making sure that whatever code you write, there's a justification for it and a proof that it works, could be seen more like a harness protecting you from writing things that you don't need, YAGNI.
> Do you happen to rewrite the tests completely while doing the implementation?
I follow the classic TDD cycle, RED/GREEN/REFACTOR, and I could not be happier.
> When does this approach work for you and when did it fail you?
The only exception to the above is exploratory code, i.e. the times when I don't know how to solve a given problem and I like to hack a few things together, poke the application, and see what happens due to what I have changed.
Having verified and learned more about how to solve that problem, I delete all my code and start afresh but this time TDD the problem/solution equipped with what I have learned from my exploratory cycle.
If you are in doubt or need further information to help you make your own decision about the matter, I can not recommend enough the classic TDD by Example from Kent Beck as a starting point.
For a more real-world view with an eye on the benefits of adopting TDD, have a look at Growing Object Oriented Software Guided by Tests, aka the Goose book.
To answer your question properly, you need to back up a bit. What is the benefit of TDD? If your answer is "To have a series of regression tests for my code", then I think the conclusion you will come to is that Test First is almost never the right way to go. The reason is that it's very, very hard to imagine the tests that you need to have for your black box code when you haven't already written it.
You might be wondering why on earth you would want to do TDD if not so that you can have a series of regression tests for your code. Remember that in XP there are two kinds of testing: "unit testing" and "acceptance testing". An acceptance test is a test that the code meets your requirements. In other words, it's a black-box regression test. You are very likely to do acceptance testing after the fact, because it is easier (caveat: if you are doing "outside-in", usually you will write an acceptance test to get you started, but after you have fleshed in your requirements, you normally go back and write more acceptance tests).
If acceptance tests are regression tests, why do we need unit tests? A common view of "unit tests" is to say that you want to test a "unit" in isolation. Often you take a class (or the equivalent) and test the interface, making sure it works. Frequently you will fake/mock the collaborators. It makes sense that this is what you should do because of the words "unit" and "test".
However, originally this was not really the case as far as I can tell (I was around at the time, though not directly interacting with the principal XP guys -- mostly trying to replicate what they were doing in my own XP projects. This is all to say that I feel confident about what I'm saying, but you shouldn't take it as gospel). Really, right from the beginning there were a lot of people who disliked both the words "unit" and "test" because they didn't match what they were doing.
Let's start with "test". Instead of testing that functionality worked, what you were actually doing is running the code and documenting what it did -- without any regard for whether or not it fit the overall requirements. One of the reasons for this is that you don't want to start with all of the data that you need and then start to write code that produces that data. Instead you start with a very small piece of that data and write code that produces that data. Then you modify the data and update the code to match that data. It is less about "test first" as it is about decomposing the problem into small pieces and observing the results of your development. It does not matter if you write the test first or second, but it's convenient to write the test first because before you can write the code, you need to know what change you want the code to enact.
One of the reasons why the term "BDD" was invented was because many people (myself included) thought that the phrase "Test Driven Development" was misleading. We weren't writing tests. We were demonstrating behaviour of the code. The "tests" we were writing were not "testing" anything. They were simply expectations of the behaviour of the code. You can see this terminology in tools like RSpec. For people like me, it was incredibly disheartening that the Cucumber-like developers adopted the term BDD and used it to describe something completely different. Even more disheartening was that they were so successful in getting people to adopt that terminology ;-)
Getting back to the term "unit", it was never meant to refer to isolation of a piece of code. It was meant to simply describe the code you happened to be working with. If we wanted to write tests for a class we would have called it "class tests". If we wanted to write tests for an API we would have called it "API tests". The reason it was called "unit test" (again, as far as I can tell) is because we wanted to indicate that you could be testing at any level of abstraction. It's just intended to be a placeholder name to indicate "the piece of code I'm interested in".
I think Michael Feathers best described the situation by comparing a unit to a part in a woodworking project. When you are working on a piece, you don't want any of the other pieces to move. You put a clamp on the other pieces and then you go to work on the piece that you want to develop. The tests are like an alarm that sounds whenever a piece that is clamped moves. It's not so much that you are "testing" what it should do as you are documenting its behaviour in a situation. When you touch a different part of the code, you want to be alerted when it ends up moving something that is "clamped" (i.e. something you aren't currently working on). That's all. The "unit" you want to clamp depends a lot on how you want to describe the movement. It might be a big chunk, or it might be something incredibly tiny. You decide based on the utility of being alerted when it moves.
So having said all that, what is the benefit of TDD? Not to test the code, but rather to document the behaviour. I've thought long and hard about what that means in practical terms and I've come to the conclusion that it means exposing state. In order to document the behaviour, we need to observe it. We have "tests", but they are actually more like "probes". Instead of "black box" interactions (which are fantastic for acceptance tests) we want to open up our code so that we can inspect the state in various situations. By doing that we can sound the alarm when the state moves outside of the bounds that are expected. The reason to do that is so that we can modify code in other places safe in the knowledge that we did not move something on the other end of our project.
Anything you do to expose state and to document it in various situations is, in my definition anyway, TDD. Test First is extremely useful because it allows you to do this in an iterated fashion. It's not so much that you wrote the test first (that's irrelevant). It's that you have broken down the task into tiny pieces that are easy to implement and that expose state. It just happens to be the case that it's extremely convenient to write the test first because you have to know what you want before you can write it. If you are breaking it down in that kind of detail, then you might as well write the test first. And, let's face it, it kind of forces you to break it down into that detail to begin with. That's the whole point of the exercise.
There are times when I don't do test first and there are times when I don't do TDD. I'll delve into both separately. First, I frequently don't do Test First even when I'm doing TDD if I'm working with code that has already got a good TDD shape (exposed state with documented expectations). That's because the "test" code and the production code are 2 sides of the same coin. I can make a change in the production behaviour, witness that it breaks some tests and then update the tests. I often do this to stress test my tests. Have I really documented the behaviours? If so, changing the behaviour should cause a test to fail. If it doesn't, maybe I need to take a closer look at those tests.
Additionally, I don't always do TDD. First, there are classes of problems which don't suit a TDD breakdown (insert infamous Sudoku solver failure here -- google it). Essentially anything that is a system of constraints or anything that represents an infinite series is just exceptionally difficult to break down in this fashion (woe be unto those who have to do Fizz Buzz using TDD). You need to use different techniques.
Jonathon Blow also recently made an excellent Twitter post about the other main place where you should avoid TDD: when you don't know how to solve your problem. It is often the case that you need to experiment with your code to figure out how to do what you need to do. You don't want to TDD that code necessarily because it can become too entrenched. Once you figure out what you want to do, you can come back and rewrite it TDD style. This is the original intent for XP "spikes"... but then some people said, "Hey we should TDD the spikes because then we don't need to rewrite the code"... and much hilarity ensued.
I hope you found this mountain of text entertaining. I've spent 20 years or more thinking about this and I feel quite comfortable with my style these days. Other people will do things differently and will be similarly comfortable. If my style illuminates some concepts, I will be very happy.
Write the code, secure it from refactoring stuff ups with your tests.
Basically, sometimes, it makes sense to write tests beforehand, but most of the time, I use test harnesses, and "simultaneous test development."
Works for me. YMMV.
Rewriting the tests completely does not really happen. Sometimes I am not entirely sure of all the things that the production code should do, so then I go back and forth between test and executable code. In that case one needs to be very aware of whether a failure is a problem in the executable code or the test code.
It pretty much works all the time. Occasionally there are exceptions. If a thing is only visual, e.g., in a web interface, it may be best to first write the production code because the browser in my head may not be good enough to write a correct test for it. Also, in the case of code that is more on the scientific/numeric side of things, one may start out with more executable code per test than one usually would. I still write the test first in that case, though.
However, I've started working on a project with others, and am becoming a bit more adamant on "this needs tests". The codebase had none after a year, and the other dev(s) are far more focused on code appearance than functionality. Fluent interfaces, "cool" language features, "conventional commit" message structure, etc are all prized. Sample data and tests? None so far (up until I started last week).
I've had push back on my initial contributions, and I keep saying "fine - I don't care how we actually do the code - change whatever I've done - just make sure the tests still run". All I've had is criticism of the code appearance, because it's not in keeping with the 'same style' as before. But... the 'style' before was not testable, so... there's this infinite loop thing going on.
The other place it works well is code written as a pair - with one member writing tests and the other writing the implementation - the challenge is on to pass the buck back to the other pair member - i.e. find the obvious test that will cause the code to fail / find a simple implementation that will cause the tests to pass. This is great fun and leads to some high-quality code with lots of fast tests.
The benefit of TDD is that your coverage is pretty high - and you aren't writing anything unnecessary (YAGNI).
I don't think I have ever rewritten tests that I have written (TDD or otherwise). They might get refactored.
TDD doesn't work so well when you only vaguely understand what you are trying to do. This is not a coding / testing problem - get a better vision - prototype something perhaps - i.e. no tests and very deliberately throw it away.
When someone tells you they don't write tests first, ask them how they refactor. How do they know the changes they made didn't break anything?
You can fool yourself with test-first, but it's quite difficult to do if you're rigorously following the practice. First write a failing test. Next, write only enough production code to fix the failing test. Optionally refactor. Rinse and repeat.
Code created this way can prove that every line of production code resulted from a failing test. Nothing is untested, by definition. The code may be incomplete due to cases not considered, but everything present has been tested. Note that it's possible to break this guarantee by writing production code unnecessary to get the test to pass.
The issue I find is that generally we aren't writing code we know the exact requirements for, so doing TDD means that not only are you refactoring your code as you understand the problem better, but you're also refactoring your tests, which increases the workload.
Maybe that's a sign that we need to spend a lot more time designing before implementing, but I've never worked anywhere that happens enough to use TDD as nicely as my experience with my Sinon clone.
> The change curve says that as the project runs, it becomes exponentially more expensive to make changes.
> The fundamental assumption underlying XP is that it is possible to flatten the change curve enough to make evolutionary design work.
> At the core are the practices of Testing, and Continuous Integration. Without the safety provided by testing the rest of XP would be impossible. Continuous Integration is necessary to keep the team in sync, so that you can make a change and not be worried about integrating it with other people.
> Refactoring also has a big effect
- Martin Fowler
I used to write a lot of tests and discovered over summer that it costs too much in terms of time spent writing, changing, and debugging tests for what you tend to get out of it.
I do think writing a lot of tests for a legacy or relatively old system is a great way to uncover existing bugs and document expected behaviours. With that done, refactoring or rebuilding is possible and you gain a great understanding of the software.
However, when I need to overhaul something that already exists, e.g. the core of a game engine, I've gotten into the habit of writing tests for current behavior, so that when I rip it out its replacement works the same way, or at least retains the same interface, so I don't have to replace the whole pyramid on top before I can compile again. :)
This has also helped me realize the value of tests, but later on in the development cycle, not as the base before actually writing anything.
On a serious note, I find it hard to write tests at the beginning for code when I'm not sure how it's going to work or what it's going to do. What do I mean by that? Well, as you've probably all experienced, requirements change during development; sometimes 3rd-party/microservice/db constraints don't let us achieve what we want. We have to come up with hacky/silly solutions that would require us to rewrite most of the tests that we wrote.
A lot of times I don't even know how to code the stuff I'm required to build. How am I supposed to write tests in that kind of situation? I think it would be like building abstractions for problems that I don't know very well yet.
Plus writing HDL without tests is basically guaranteed to create something nonfunctional.
I hate unit testing in, for example, Java though: individual functions are typically very basic and don't do much. A service? Integration tests? Sign me up. But unit testing to 100% coverage a ten-line function that reads a bytestream into an object and sets some fields is boring, and fairly difficult to mock.
In server-side, almost all code is atomic, very functional and is very easy to cover with tests, on different levels. I start with several unit tests before implementation and then add a new test for any bug.
The client, however, is a completely different story. It's a thick game client, and through my career I honestly tried adopting TDD for it - but the domain itself is very unwelcoming to this approach. The tests were very brittle, with a ton of time spent setting them up, and they didn't catch any significant bugs. In the end, I abandoned trying to write tests for it altogether - at least I'll be able to write my own, functional and test-driven game engine, to begin with.
If it's an application or framework, I usually drive it from the UI, so tests are more an afterthought or a way to check / ensure something.
I find the best balance is to have thick libraries and thin applications, but YMMV.
https://m.youtube.com/watch?v=URSWYvyc42M https://www.destroyallsoftware.com/talks/boundaries
But I've not yet been convinced that any of the various polls are very authoritative. So I dunno.
My typical practice is to work out an API, write early scratches of implementation and test only simple cases. Then I can inspect two things: how the API works in real code and what else should be tested. In other words tests help to establish an API, then to stabilize the implementation.
I usually finish by checking the test coverage and trying to make it reach 100% branch coverage if I have the time. The coverage part is important because it usually makes me realize things could be made simpler, with fewer if/else cases.
I could never get used to writing only the tests first simply because all the compilation errors get in the way (because the module doesn't exist yet).
That being said, I never write all the possible tests before starting with the implementation. They're called unit tests for a reason -- I generally write at least a few tests for a particular unit (say, a function or method) and then write the implementation before starting working on another. And I often go back and add extra tests for an already implemented unit to cover some edge cases and error conditions.
Integration tests, sometimes. Depending on the complexity of the system, I might skip this part. If it is a collaborative work then integration tests are (in my view) mandatory for ensuring that everyone’s code plays nice with everyone else’s.
Unit tests, almost never. Unless it’s something absolutely production critical, pull-your-hair-out-at-5-on-a-Friday kind of feature, it’s usually not worth the extra time putting unit tests together.
Also I try to test at subsystem / API boundaries whenever possible. Small units like a function rarely get their own tests; they are covered implicitly by being used. This avoids tests of arbitrary internals (that should be free to change) becoming a maintenance burden. External APIs should be stable.
I find literal TDD distracting and unhelpful, but having a list of things I need to handle that doubles as tests I can't forget to write is a really nice balance.
For complex tasks, breaking down the problem into a single requirement per test helps me understand it better, and ensure I don't introduce regressions while refactoring or adding new requirements.
However, a lot of modern code is hooking up certain libraries or doing common tasks that don't get a lot of value from unit tests (mapping big models to other models, defining REST endpoints, etc), so I don't generally write unit tests for those (but I do write integration tests).
I almost never write tests for personal projects, but 100% of the time when working in a team. IMO, tests are not here to prevent bugs, but they are part of the developer documentation: a coworker must be able to make any change they want to my code without asking me anything, and tests are the biggest part of that.
What I do do is add a unit test for (almost) every formal bug I come across (to prove the bug, then fix it), so that bug never happens again. Over the years this seems to have given the best results for backwards compatibility, stability, etc.
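A minimal sketch of that habit (Python/pytest purely for illustration; the function, the bug, and the issue number are all made up): write the test red against the reported behaviour, fix the code, and keep the ticket reference in the test name so the context survives:

    # Hypothetical regression test for a made-up issue number.
    def normalize_email(address: str) -> str:
        return address.strip().lower()

    def test_issue_1234_trailing_whitespace_broke_login():
        # Bug report: "user@Example.com " (trailing space) was rejected at login.
        assert normalize_email("user@Example.com ") == "user@example.com"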
I rarely practiced TDD until I started working on a piece of software that could take anywhere from under a minute to an hour to finish running. This means I must be able to isolate the specific portion of code effectively to save time and focus on the problem at hand. The APIs for this model are well written so I can recreate a bug or test out new code effectively by stitching some APIs in a unit test. It's incredibly helpful in that sense.
I've had Sr. Soft. Engs. ask me why I thought we needed unit tests at all. I've had managers not know what they were. I've worked on projects where I was the only developer who wasn't afraid of the technology, but management couldn't give proper requirements. I've also worked in code bases where no testing framework (of any kind) existed.
I don't mean to sound combative. Looking back those places would benefit immensely from structured testing. Life got in the way though.
And I do modify tests, because sometimes assumptions are wrong (or domain experts change their minds or got something wrong).
Sadly, functionality requirements are very soft in my industry (once I was asked to do a perfect fuzzy match...).
Most of the time I am required to modify untested legacy code, and that starts with writing a test (if it is even possible).
When building the client application, I typically just test manually as I go along. Bugs happen, but because the foundation code is all REST API, they are usually easy to fix.
Sometimes the change is so trivial or type safe that it's not worth it, so I don't.
Sometimes I don't understand the problem well enough, so I learn more about it by doing some exploratory coding and prototyping. I usually come back and write a test after the fact.
Sometimes the project is on fire and I'm just throwing mud at the wall to see what sticks.
I used to write tests for all pure functions because they were the easiest tests to set up. They are also the easiest to debug, so the tests didn't really help much unless you were checking type signatures.
I think that implementation tests are important, but I found that I suck at figuring out how to set up a test before the actual implementation. So I do them after the fact, and judiciously.
Testing simple code is simple and therefore pretty much useless. Testing complicated code is complicated and therefore more likely to fail by making too few or too many assumptions in the test, or by completely screwing up the test code itself.
But most of the time, fixing some bug or implementing a feature is more of experimenting and prototyping at first. Writing tests for every futile attempt would be a waste of time.
At best we design some small architecture first with interfaces, and then create the tests off that.
The developer does a round of these tests as best they can. Then they toss it to QA who tries to break it, but must also do the same tests.
It prevents a lot of bad design bugs, but adds almost no overhead.
Automated testing should be applied only where this manual testing becomes tedious or where we often make mistakes in testing.
Effectively, in writing tests first you make assumptions about the code. These don't always turn out to be true.
I could write a test plan first, but I haven't always fully designed the interfaces until I've ploughed into the code and figured out what needs to be passed where, so there would be a lot of repeat effort in fixing up the tests afterwards.
Being too proper and doing everything by the book does not always translate to better code or good ROI.
Also, being an older fart who has been programming for so many years, I am usually pretty good at not introducing too many bugs anyway.
I do TDD on the Core, especially on mission-critical code.
The Shell, however, has almost zero automated tests.
I once had a job that didn't require any. I was given a specification and my job was to write DSL code from it. This would have been an excellent setup for practicing TDD! Unfortunately, I wrote a script that basically translated the specification from Word documents into DSL snippets, and quit soon after.
Also most of my tests revolve around business logic, where I need to test multiple versions of data.
The best advice I could give would be to write test cases around errors.
That's usually where most bugs are found, when something doesn't return what you expect.
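A small sketch of that in Python with pytest (the business rule and the data are invented): parametrizing over the bad inputs keeps testing multiple versions of data cheap, and asserting on the exception pins down exactly what "doesn't return what you expect" means:

    import pytest

    # Hypothetical business-logic function, defined inline to keep the sketch self-contained.
    def apply_discount(total: float, percent: float) -> float:
        if not 0 <= percent <= 100:
            raise ValueError(f"percent out of range: {percent}")
        return total * (1 - percent / 100)

    @pytest.mark.parametrize("bad_percent", [-1, 101, 1e6])
    def test_rejects_out_of_range_discounts(bad_percent):
        with pytest.raises(ValueError):
            apply_discount(100.0, bad_percent)

    def test_happy_path_still_works():
        assert apply_discount(200.0, 25) == 150.0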
Other situations are coming just from experience: you know that some part will have a lot of special cases, so you implement a test as soon as you think of the next special case.
- when it’s really easy
- when it’s really important
- before a refactor
The last one is arguably the most important and has saved me a lot of headaches over the years.
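In practice the pre-refactor tests are usually characterization tests: pin the current observable behaviour, however ugly, then refactor underneath them. A rough Python/pytest sketch with an invented legacy function:

    # Hypothetical legacy function that is about to be refactored.
    def legacy_format_name(first, last):
        return (last.upper() + ", " + first.capitalize()).strip()

    # Characterization tests assert what the code does TODAY, not what it
    # "should" do, so the refactor cannot silently change behaviour.
    def test_formats_a_simple_name():
        assert legacy_format_name("ada", "lovelace") == "LOVELACE, Ada"

    def test_preserves_current_handling_of_empty_first_name():
        assert legacy_format_name("", "lovelace") == "LOVELACE,"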
But for code that’s trivial to implement, it seems unnecessary...
I have found that beating on code with tests is a great way to preserve my sleep and save the next person a headache.
I just realized that a "Never Have I Ever" version for programmers would be quite interesting.
I give priority to end-to-end working of the software stack.
I always make sure that the test suite can be executed in parallel threads.
I make sure that tests are written before I merge my code into the master branch.
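For what it's worth, a sketch of what "runs in parallel" requires in practice, assuming pytest plus the pytest-xdist plugin (my assumption, not something stated above): each test owns its own resources, so the workers cannot trample each other.

    # tmp_path is a built-in pytest fixture: every test gets its own directory,
    # so the suite stays correct when run as:  pytest -n auto   (pytest-xdist)
    def test_writes_report_to_its_own_directory(tmp_path):
        report = tmp_path / "report.txt"
        report.write_text("ok")
        assert report.read_text() == "ok"

    def test_another_fully_independent_case(tmp_path):
        data = tmp_path / "data.csv"
        data.write_text("a,b\n1,2\n")
        assert data.read_text().splitlines()[1] == "1,2"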
I like BDD, it helps me focus on the goal.
I feel that it lets me find the right approach faster.
It also helps me avoid distractions and optimizing things too early.
Related: "Write tests. Not too many. Mostly integration.", https://kentcdodds.com/blog/write-tests/
At the new feature level, I have found not a lot of use for TDD.
When I start to build something, I don't exactly know how it will work and what the output will be.
TDD is really good for stuff that doesn't have a native test workflow (headless, invisible stuff), since it can double up as a test harness, so for things like message queues it's great. For user interfaces it's pretty crap though, because you already have a test harness and your eyes are much better at testing.
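A rough Python sketch of what that looks like for a queue consumer (the handler and the message shape are made up; a real broker client would sit behind the same interface): with no screen to eyeball, the test is the harness.

    import json
    import queue

    # Hypothetical consumer step: pull one message, process it, push the result.
    def handle_one(inbox: queue.Queue, outbox: queue.Queue) -> None:
        payload = json.loads(inbox.get_nowait())
        outbox.put(json.dumps({"order_id": payload["order_id"], "status": "confirmed"}))

    def test_confirms_an_order_message():
        inbox, outbox = queue.Queue(), queue.Queue()
        inbox.put(json.dumps({"order_id": 42}))
        handle_one(inbox, outbox)
        assert json.loads(outbox.get_nowait()) == {"order_id": 42, "status": "confirmed"}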
This way I can mostly stay in the frontend where the tooling is better and the results are more visual, but still be confident that I don't violate the proper abstraction that the backend requires.
Kinda like predefining the shared module contract between front- and backend and forcing myself not to forget about it when in the (frontend) flow
Most projects combine "discovery" with "development", making up-front test writing a poor use of time.
Don't tell anyone.
- I know the topic well.
- I understand the domain well.
- I can picture the technology, expectations and API well.
- I have mastery over my time and deadline.
Then yes, I do.
E.g., I'm currently making a proposal for a new API in an open source Python lib called aiohttp.
I have time. I have leeway. I have a good overview of the entire problem. So I can afford thinking about the API first:
https://github.com/aio-libs/aiohttp/issues/4346
Then I will write unit tests. Then I will write the code.
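To be clear, the snippet below is not the actual aiohttp proposal - just the shape of the exercise, in generic Python: write the call you wish existed as a test, mark it as expected to fail, and let it drive the signature before any implementation exists.

    import pytest

    # Hypothetical API being designed test-first; the implementation comes later.
    def fetch_json(url: str, *, timeout: float = 10.0) -> dict:
        raise NotImplementedError

    @pytest.mark.xfail(raises=NotImplementedError, reason="API designed, not implemented yet")
    def test_fetch_json_returns_the_parsed_body():
        assert fetch_json("https://example.com/api/ping") == {"ok": True}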
But that's a rare case.
Many times coding involves fiddling, trying things out, writing drafts and snippets until something starts to solve what you want.
Other times, you want a quick and dirty script, or you have a big system but you are OK with it failing from time to time. The cost of testing is then not worth it. You'd be surprised how well a company can do, and how satisfied customers say they are, despite the website showing a 500 once in a while.
And of course, you have limited resources and deadlines. Unit tests are an upfront payment, and you may not be able to afford it. As is often the case, this means the total cost of the project will likely be higher, but the initial cost will be within your price range. And you may very well need that.
One additional thing few people talk about is how organizations can make this hard for you. You may be working in orgs where the tooling you need (CI, version control, a specific testing lib...) is refused to you. You may even be working in companies where you cannot get clear information about the domain, only vague specs, and the only way to design the software is to ship it, see it break, and wait for customers to report issues, because otherwise the marketing people won't let you talk to them. I'd say change jobs, but that's not the point.
Lastly, you have the problem of less experienced devs. TDD is hard. It requires years of practice in the field to be done properly, because you build a system in a completely abstract way. Dependency injection, inversion of control, mocking and all the other things you need to make a properly testable system are not intuitive: you learn them along the way. Even harder is the fact that you have to hold the system in your head, since you are not coding it first, but what wraps it. And even worse, badly implemented, overused, and over-engineered design patterns make the problem worse, not better.
with that said, if i'm about to tackle something that i _know_ is likely to have bugs, especially parser implementations, i will always start by isolating it into a separate file, mocking the dependencies, building test cases, and compiling and running natively on my workstation. i write test cases before i start the implementation, and continue adding to them throughout the process. when i'm satisfied, i copy-paste back into the real codebase and do light integration testing.
these tests ultimately get thrown away, but i genuinely feel that they help me arrive at a correct implementation more quickly than integration testing alone. honestly, it just helps me feel more confident that i'm not going to embarrass myself when the code hits the field. this technique doesn't really help me with business logic, unfortunately, because accurately mocking the dependencies is insurmountable.
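for concreteness, the scratch file usually amounts to a table of cases like this (python here just for illustration; my real setup is compiled native code, and the parser below is invented):

    import pytest

    # toy parser under test: "MAJOR.MINOR" -> (major, minor)
    def parse_version(text: str) -> tuple:
        major, minor = text.strip().split(".")
        return int(major), int(minor)

    @pytest.mark.parametrize("raw,expected", [
        ("1.2", (1, 2)),
        (" 10.0 ", (10, 0)),
    ])
    def test_parses_well_formed_versions(raw, expected):
        assert parse_version(raw) == expected

    @pytest.mark.parametrize("raw", ["", "1", "1.2.3", "a.b"])
    def test_rejects_malformed_versions(raw):
        with pytest.raises(ValueError):
            parse_version(raw)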
tl;dr: i use TDD when i think it will save me time, but i don't keep the tests around because tooling sucks.
i'm posting this partially in the hopes that people have tooling advice for me.
For most (but not all) software projects, writing tests before you write code is wrong.
For many (but not all) API-driven projects, I will write tests alongside my code. So if I would write a few dummy lines of code at the bottom of a file to confirm something is working, I'll write that up as a test instead.
In order for that process to work, writing tests needs to be extremely easy -- you need to be able to add a test anywhere without thinking much about it or wasting time pre-organizing everything.
On that note, shameless self-plug for Distilled (https://distilledjs.com), a testing library I wrote that I use in all of my projects, and that I like a lot.
The reason Distilled prioritizes flexibility is that I strongly believe there is no single, right way to do testing that can be applied to every project.
- For Distilled, I do TDD development where I write tests before code. This is because Distilled has a rigid API and behaviors, and because I use my tests as documentation. Distilled aims to have 100% coverage: https://gitlab.com/distilled/distilled/blob/stable/src/tests...
- For Serverboy, I only do integration tests based on test ROMs and comparing screenshots. Those screenshots get written out to my README and show the current emulator compatibility status. With Serverboy, I only care about the final accuracy of the emulator: https://gitlab.com/piglet-plays/serverboy.js/blob/master/tes...
- For projects like Loop Thesis (https://loop-thesis.com), I do a combination of unit tests and integration tests. I don't aim for 100% code coverage, and I think of my tests as a form of technical debt. For Loop Thesis I'm also adding performance tests, which let me know when the game is getting more or less efficient.
- And with exercises or experiments, I add tests haphazardly on the fly alongside my implementation code, putting very little thought into organization: https://gitlab.com/danShumway/javascript-exercises/blob/mast...
So every project has slightly different requirements and goals, and those goals drive my testing practices, not the other way around.