Now, anything less than such functionality is second or third-rate to me. I can give older languages a pass, but anything not having this moving forward is disappointing now.
testcontainers so I can write proper integration tests instead of mocking dependencies away.
AssertJ for fancy assertions with great failure messages.
Mockito for the rare mocking/spy usage.
Awaitility for the rare async/timing based test.
When there is a Frontend (templated or SPA, doesn't matter), I like writing the tests with Selenide assertions and using its Selenium integration to run against a large array of browsers + variants (Chrome, FF, Mobile etc) launched via testcontainers.
Overview: https://eng.amperity.com/posts/2019/04/greenlight GitHub: https://github.com/amperity/greenlight
In terms of frameworks, I am a big fan of testify. [0] Unfortunately it doesn't seem like the testify maintainers want to incorporate generics. [1] I'm going to be releasing a library soon to address that.
I'm also going to be releasing a golang+python+typescript library for doing super cheap/fast database-backed tests. In my last job I found it incredibly useful, it essentially made it ~0 cost to write tests that exercised database-related codepaths and logic, which for most business apps is everything important.
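That library isn't released yet, but the general technique is easy to sketch with Python's stdlib sqlite3: a fresh in-memory database per test makes database-backed code paths nearly free to exercise (the table and function names here are made up for illustration):

```python
import sqlite3

def make_db():
    # ':memory:' gives a fresh, fully isolated database per test, so setup
    # costs microseconds instead of managing a shared test server.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
    return db

def total_of_orders(db):
    # The "real" code path under test: an actual SQL query, not a mock.
    (total,) = db.execute(
        "SELECT COALESCE(SUM(total), 0) FROM orders").fetchone()
    return total

def test_total_of_inserted_rows():
    db = make_db()
    db.executemany("INSERT INTO orders (total) VALUES (?)",
                   [(10.0,), (15.5,)])
    assert total_of_orders(db) == 25.5

test_total_of_inserted_rows()
```

The same pattern scales to Postgres via an embedded/containerized instance when SQLite's dialect isn't close enough to production.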
I worked with Jasmine and Jest as well, but Vitest was created recently and to me it seems to have made good improvements over both.
My go-to choices per language are:
- Python: Hypothesis https://hypothesis.readthedocs.io/en/latest (also compatible with PyTest)
- Scala: ScalaCheck https://scalacheck.org (also compatible with ScalaTest)
- Javascript/Typescript: JSVerify https://jsverify.github.io
- Haskell: LazySmallCheck2012 https://github.com/UoYCS-plasma/LazySmallCheck2012/blob/mast...
- When I wrote PHP (over a decade ago) there was no decent property-based test framework, so I cobbled one together https://github.com/Warbo/php-easycheck
All of the above use the same basic setup: tests can make universally-quantified statements (e.g. "for all (x: Int), foo(x) == foo(foo(x))"), then the framework checks that statement for a bunch of different inputs.
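That basic setup fits in a few lines of Python (a toy checker for the sake of the example, not any of the frameworks above; `for_all_ints` is a made-up name):

```python
import random

def for_all_ints(prop, runs=200, lo=-10**6, hi=10**6):
    # Check a universally-quantified statement against many random inputs;
    # return the first counterexample found, or None if every run passed.
    for _ in range(runs):
        x = random.randint(lo, hi)
        if not prop(x):
            return x
    return None

# "for all (x: Int), foo(x) == foo(foo(x))" holds when foo is abs:
assert for_all_ints(lambda x: abs(x) == abs(abs(x))) is None

# A false universal statement is quickly refuted with a counterexample:
assert for_all_ints(lambda x: -x == x) is not None
```

Real frameworks add the important parts on top of this: sophisticated generators for structured data, and shrinking of counterexamples to minimal failing inputs.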
Most property-checking frameworks generate data randomly (with more or less sophistication). The Haskell ecosystem is more interesting:
- QuickCheck was one of the first property-testing frameworks, using random generators.
- SmallCheck came later, which enumerates data instead (e.g. testing a Float might use 0, 1, -1, 2, -2, 0.5, -0.5, etc.). That's cute, but QuickCheck tends to exercise more code paths with each input.
- LazySmallCheck builds up test data on-demand, using Haskell's pervasive laziness. Tests are run with an error as input: if they pass, we're done; if they fail, we're done; if they trigger the error, they're run again with slightly more-defined inputs. For example, if the input is supposed to be a list, we try again with the two forms of list: empty and "cons" (the arguments to cons are both errors, to begin with). This exercises even more code paths for each input.
- LazySmallCheck2012 is a more versatile "update" to LazySmallCheck; in particular, it's able to generate functions.
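A very loose Python sketch of the LazySmallCheck idea (Python is strict, so Haskell's laziness is emulated here with a `Hole` placeholder that raises when forced; all names are hypothetical and the real thing is far more general):

```python
class Undefined(Exception):
    """Raised when the property forces a hole that has no value yet."""
    def __init__(self, index):
        super().__init__(index)
        self.index = index

class Hole:
    """Placeholder for a not-yet-chosen boolean; forcing it aborts the run."""
    def __init__(self, index):
        self.index = index
    def __bool__(self):
        raise Undefined(self.index)

def lazy_check(prop, width=4):
    # Start from a tuple of holes; only when the property actually forces a
    # position do we split it into its two concrete values and retry.
    pending = [tuple(Hole(i) for i in range(width))]
    while pending:
        xs = pending.pop()
        try:
            if not prop(xs):
                return xs  # counterexample, possibly still partly undefined
        except Undefined as u:
            for value in (False, True):
                refined = list(xs)
                refined[u.index] = value
                pending.append(tuple(refined))
    return None  # property held on every input tried

# A property that never forces anything passes immediately:
assert lazy_check(lambda xs: True) is None
# A falsifiable property is refuted after refining only the forced position:
assert lazy_check(lambda xs: not xs[1]) is not None
```

The point the sketch shows: a counterexample can be reported while most of the input is still undefined, which is exactly why each test input covers more code paths.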
I think Test Anything Protocol [0] [1] is neat. Anything that takes inspiration from TAP is probably gonna be mostly structurally similar anyway.
It's easier to point out things I dislike in a testing framework. In general, anything that gets too fancy or magical, or assertion frameworks with lots of object chaining to read like natural language.
There's another testing library in Ruby called minitest. The syntax is different; for example, you would write "assert_equal num1, num2". My problem with this is that you don't know which is the expected value and which is the actual one. Is it the first or the second argument? On the other hand, my framework of choice, RSpec, would write this as "expect(num1).to eq(num2)". The syntactic sugar alone makes this easier to understand.
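The same ambiguity-free fluent style is easy to sketch in any language; here's a toy Python version (a made-up `expect`, not RSpec itself):

```python
class Expectation:
    # Minimal sketch of an RSpec-style fluent assertion: the wrapped value
    # is unambiguously the actual one; the matcher holds the expected one.
    def __init__(self, actual):
        self.actual = actual

    def to_eq(self, expected):
        assert self.actual == expected, (
            f"expected {self.actual!r} to equal {expected!r}")
        return True

def expect(actual):
    return Expectation(actual)

# assert_equal(a, b): which of a, b is expected? You have to remember.
# expect(a).to_eq(b) reads one way only: a is actual, b is expected.
expect(2 + 2).to_eq(4)
```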
So I wrote a small test runner for React (`react-test`) and added a few custom Jest matchers, so you can do:
    const button = $();
    expect(button).toHaveClass('btn');
    expect(button).toHaveText('Hello World');
The Smalltalk language and its SUnit library had a really great signal-to-noise ratio, without a lot of the boilerplate that comes with many other xUnit frameworks.
Shameless plug — I was the author of said tool.
* The setup pattern with "just in time" variables is amazing.
* It offers some _extremely_ terse tests
* It offers a huge library of plugins.
I've tried making something similar in Go (https://pkg.go.dev/github.com/houseabsolute/detest/pkg/detes...) but the language makes having as nice an API impossible, and I really don't love what I've come up with there. The output from detest is pretty nice though (until it wraps lines in which case it's a mess).
It lets me combine HSpec for unit tests and Hedgehog for property tests in the same test files with test auto-discover. For failed properties, it prints the seed.
And it has really pretty, readable output.
I’m using Rust’s tests currently, and the output is just a lot less obvious. E.g. green text when 0 tests are run.
Asking everyone for their favorites is not likely to meet that goal.
> when they all seemingly do the same thing
You haven't actually looked into this at all before asking, have you?
(For background: I used to work at OVHcloud, and Venom was developed by the core/platform team. I'm usually not a big fan of in-house tooling when I can avoid it, but I found Venom's paradigm so good that I still use it to this day and can't imagine not using it to test my APIs now.)
It's an integration testing tool in which test suites are written declaratively in YAML. It's completely language agnostic, and you can be 100% sure you're actually testing behaviors and contracts. This was once very useful to us as we migrated an old Python API to Go, with the same interface contract. We just kept the same test suites, with pretty much no changes.
A very basic HTTP API test would look like this:
    - type: http
      method: GET
      url: http://localhost:8080/ping
      assertions:
        - result.statuscode ShouldEqual 200
        - result.bodyjson.status ShouldEqual ok
But where it shines, in my opinion, is that you can not only make HTTP calls but also database calls. So when you implement and test a DELETE endpoint, you can also make a query to check you didn't delete ALL the database:

    - type: sql
      driver: postgres
      dsn: xxx
      commands:
        - SELECT * FROM table
      assertions:
        - result.queries.queries0.rows ShouldHaveLength 8
You can also load fixtures into the database directly, and work with Kafka or AMQP queues, both as a producer (e.g. write an event to a Kafka queue, wait a few seconds, and see that it was consumed by the service you test and that some side effects can be observed) and as a consumer (e.g. make sure that after an HTTP call, an event was correctly pushed to a queue). You can even read a mailbox over IMAP to check that your service correctly sends an email.

It's a bit rough around the edges sometimes, but I'd never go back to writing integration tests directly in my programming language. Declarative is the way to go.
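To make the declarative style concrete, here is a hypothetical Python sketch of how assertion lines like the ones above can be interpreted against a result object (this is not Venom's actual implementation; only the operator names and dotted paths mirror its syntax):

```python
# Hypothetical interpreter for "<path> <operator> <expected>" assertion lines.
OPERATORS = {
    "ShouldEqual": lambda got, want: str(got) == want,
    "ShouldHaveLength": lambda got, want: len(got) == int(want),
}

def lookup(result, dotted_path):
    # Walk "result.bodyjson.status" through nested dicts, skipping the
    # leading "result" segment.
    node = result
    for key in dotted_path.split(".")[1:]:
        node = node[key]
    return node

def check(result, assertion):
    path, op, want = assertion.split(" ", 2)
    return OPERATORS[op](lookup(result, path), want)

result = {"statuscode": 200, "bodyjson": {"status": "ok"}}
assert check(result, "result.statuscode ShouldEqual 200")
assert check(result, "result.bodyjson.status ShouldEqual ok")
```

Because the suite only ever sees the result object, the implementation behind the API can be swapped wholesale, which is what made the Python-to-Go migration painless.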
This is all inspired by the SQLite logic test framework.
I'm looking for something for Node.js. I've used jest, but it's very heavy and does all sorts of magic and is a whole build system. Whereas I just want a test runner with utilities and reporting.
It is easy to have an idea for a testing framework that you can quickly write yourself. That is why there are so many. It is tedious to add all the features people want so many are not quite as good as their hype.
I'd love to find a JavaScript testing framework that's as pleasant to work with.
I wrote a bit about those two features here: https://simonwillison.net/2018/Jul/28/documentation-unit-tes... - about half way down.
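Assuming the linked post's notion of tests that fail when documentation is missing, one minimal way to implement the idea in Python is to introspect a module for undocumented public functions (all names below are made up for the example):

```python
import inspect
import types

def undocumented_functions(module):
    # A "documentation unit test" helper: report public functions in a
    # module that lack a docstring, so CI can fail when docs drift.
    missing = []
    for name, obj in vars(module).items():
        if name.startswith("_") or not inspect.isfunction(obj):
            continue
        if not (obj.__doc__ or "").strip():
            missing.append(name)
    return missing

# A throwaway module with one documented and one undocumented function:
demo = types.ModuleType("demo")
def _documented():
    "Does something useful."
def _undocumented():
    pass
demo.well_documented = _documented
demo.forgotten = _undocumented

assert undocumented_functions(demo) == ["forgotten"]
```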
Well, why are we using a cut-down Chrome with a bunch of JS scripts and pretending it's a native app or a web server?
The only real “test” is whether it works in production. Everything else is a poor substitute and gives you false confidence. It’s absurd the amount of time and effort we spend on writing tests and we still have as many, if not more bugs in our software than we did before writing tests became a religion.
I think every language having its own testing framework is good, even for things like functional tests which can often be externalised. Tests are an essential part of every project and should be well integrated with the rest of the codebase and the team creating it. Often, the tests are the only good place to go and see what an app actually _does_ and so they form an essential part of the documentation.
In my experience it's very rare that you can effectively create and maintain something like Cucumber tests owned by anyone but the team implementing the code so there's little benefit to translating from a text DSL like that. But the language used is definitely useful, so what I like to see is code in the implementation language that matches the Given/When/Then structure of those tests, but instead of reusable text steps you just have reusable functions which take parameters. This means you can easily refactor, and use the full functionality of your IDE to suggest and go to definitions etc. No matter what, you should treat your test code the same way you do everything else - abstractions matter, so functional tests at the top level should rarely just be about clicking on things and asserting other things, they should be in the language of the domain.
Functional tests are worth much more than unit tests. Not only do they test the only things of actual business value, they are also more robust in the face of implementation refactorings and so require less rework (unless you're being overly specific with CSS selectors etc). Unit tests are often highly coupled to specific implementations and can be a poor investment, especially early in a project. I believe a good balance is functional and integration tests that explore the various paths through your app and prove everything's hooked up, coupled with property-based unit tests for gnarly or repetitive logic that isn't worth endlessly iterating via the UI. All other unit tests are optional and at the discretion of the implementer.
You should be able to mock out every major articulation point in your code, but it's generally preferable if you can mock _real_ dependencies. That is, instead of mocking out a 'repository' abstraction that looks stuff up and returns canned data, have a real test database against which you look up real data (created by steps in your functional tests). This reduces risk and cognitive overhead (you're not having to encode too many assumptions in your test suite) and doesn't have to be as slow as people like to make out - Embedded Postgres is quite fast, for example:
https://github.com/zonkyio/embedded-postgres
Same with network services - it's not slow to chat to localhost and you'll find more issues testing proper round-trips. I have not found "assert that you called X" style testing with mocks useful - you care about outcomes, not implementation details.
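A round-trip against a real local HTTP server really is cheap; here's a sketch using nothing but the Python stdlib (the handler and endpoint are illustrative):

```python
import http.server
import threading
import urllib.request

class PingHandler(http.server.BaseHTTPRequestHandler):
    # A tiny real HTTP dependency instead of an "assert you called X" mock.
    def do_GET(self):
        body = b'{"status": "ok"}'
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep test output quiet

def round_trip():
    # Bind to port 0 so the OS picks a free port, then talk to the server
    # over a real socket and return what the client actually observed.
    server = http.server.HTTPServer(("127.0.0.1", 0), PingHandler)
    thread = threading.Thread(target=server.serve_forever, daemon=True)
    thread.start()
    try:
        url = f"http://127.0.0.1:{server.server_address[1]}/ping"
        with urllib.request.urlopen(url) as resp:
            return resp.status, resp.read()
    finally:
        server.shutdown()

status, body = round_trip()
assert status == 200 and b"ok" in body
```

This tests the outcome (status and payload over a real connection), not the implementation detail of which client method was invoked.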
Beyond all that, as long as you can make assertions that generate clear error messages, you're fine.