This is great for small features, but it gives me pause for mid or large features. Are we the only ones doing this?
There should be an adversarial, yet respectful, relationship between dev and QA: dev tries to get their software released, and QA finds reasons why it shouldn't be released.
If there are only developers testing their own code, they're not incentivized to find the ways their code breaks. They'll downplay the faults, or even be blind to them, because they're too close to the implementation and aren't "thinking like a user".
It seems odd to me when those in charge ignore the problems of disbanding QA. But I assume management may not be incentivized to promote quality software; the incentives are to crank out features instead.
That said, maybe the market for the software doesn't punish the software makers for having buggy software. In such a case, ignoring QA work is a rational decision.
I can't count the number of times I've tested something 5 times, convinced myself that the code handled tons of error cases, shipped it, and within 10 minutes a customer types "five" into a field expecting an integer, causing the next step to use "0" for processing.
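That failure mode is easy to reproduce. Here's a minimal sketch (hypothetical function names, not the actual system) of how a lenient parse silently coerces bad input to 0, versus a strict parse that surfaces the error:

```python
def parse_quantity(raw: str) -> int:
    """Lenient parsing: silently falls back to 0 on bad input,
    so "five" sails through and corrupts the next step."""
    try:
        return int(raw)
    except ValueError:
        return 0  # the bug: swallows the user's mistake

def parse_quantity_strict(raw: str) -> int:
    """Defensive parsing: reject non-numeric input up front."""
    raw = raw.strip()
    if not raw.lstrip("-").isdigit():
        raise ValueError(f"expected an integer, got {raw!r}")
    return int(raw)
```

The strict version turns the "five" case into a visible error the UI can report back, instead of a silent zero flowing downstream.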
1. QA teams come up with new ways to break software in the way humans use it. Automated tests etc confirm the software works as designed.
2. Because the org making this choice isn't stupid, they measure bug frequency and impact and find that "only X users have Y impact, where Y is ranked as low" for each bug. No single bug is ever worth fixing. But those users aren't seeing just that one bug; they get hit by tens or hundreds of these on a regular basis. As a result, their perceived experience with the product is poor and it feels buggy.
This idea that you can measure bugs in isolation and deliver perceived product quality is faulty. A user experiences a product as a whole, and if you can't evaluate that as well, you will be seen as delivering a poor product.
Before I joined my team, it didn't have a dedicated QA member. The quality of the software was fine, but there were other compromises. The team didn't have a good test strategy - every developer made ad hoc decisions on how their code was tested. Our E2E tests ran slowly and had tonnes of duplicates, since nobody had gone through the entire list and cleaned it up. A dedicated QA member has the time and the responsibility to solve quality problems that most devs treat as secondary to developing features. And the improvements are substantial - our E2E tests now run 18x (!) faster.
For the most part, I see my role as being an enabler - I help devs build quality into their work and hold them accountable to it. I don't see it as an adversarial division of responsibility, but more as a collaborative effort, with the dev and QA coming from different focal points.
The problem with a strict division of labour and adversarial QA is that it leads to:
1) Organizational / team silos
2) Local optimization (software development and QA is intricately interwoven, so it doesn't lend itself well to this strict division)
3) Information overhead and loss
A good QA engineer, IMO, requires better coordination and communication skills than a developer because they liaise with multiple developers and inspire them to make quality a priority in their work. Further, QA is involved from the definition of a feature (in sprint refinements) right up to when code is deployed and runs well in production. Therefore, at certain points, the boundaries between QA and the Dev team or QA and the PO blur and disappear entirely.
I found this to be a great read that summarizes my thoughts on the topic: https://www.thoughtworks.com/en-de/insights/blog/qa-dead
I will say however that I think you do generally still need a team (or individual at smaller shops) dedicated to tooling for automated testing to make the process of writing tests efficient for all those who are writing tests.
One of the biggest challenges with it for me is that a dev who is just a dev has a very different personality from a dev who manages the entire SDLC.
The former placed in the latter environment is going to be lost and annoyed at all the moving parts. They signed up to write code and to trust that the process would deliver good requirements.
The latter in the former environment is going to be frustrated by all the process. They signed up to really own the product and realize that they really just own a block of a process.
I suspect either can work if you have people suited for each. Switching people between them sounds like a recipe for attrition though.
In my team, they are fully integrated into the decision-making process and roadmaps, they are CC'd on almost all the team's internal emails, and a final version isn't released without their authorization.
Now, there are some problems with being in QA: personal stuff like low self-esteem ("you're not smart enough to be a dev"), and enmity between devs, PMs and QA due to feelings that QA holds up the teams. As a former QA myself I'm fully aware of them. In our particular team these issues are negligible (well, at least I hope they are). I personally fully respect the QA team and stop whatever I'm doing to help them with whatever they need or ask. I see them as my first line of defense against bugs, especially the kind of UX bugs that are difficult to test for.
By putting all those responsibilities on the devs, they have essentially turned a parallel development system into a serial one, which is inefficient and, frankly, dumb.
On the other hand, with devs writing tests there is a learning curve: most devs haven't written tests, or think of it as a chore. They also don't have end-user perspectives and are biased towards assuming things work.
Ideally in my opinion testing should be shared responsibility, where devs test feature they implemented and the QA team add checks from an end-user perspective.
P.S.- I'm co-founder of startup https://crusher.dev making QA easy by using a low-code approach.
This doesn't work for all companies; if you're building a system for children to learn how to type, you can't really hire a bunch of children to learn how to type... but I think for most companies, this can and does work.
The good thing about not having a QA team is that QA becomes a much more active part of the process where something must be tested by the developers before it's completed. You also don't have a single bottleneck if the QA team is overwhelmed. The drawbacks are that developers aren't always very thorough and that their testing time takes away from their development time.
If they weren’t there - engineers would spend at least 1/4th of their time performing QA duties with likely worse results and at significantly higher cost to the company, since their compensation is higher.
So it makes sense to do that only as a cost-saving measure in hard times - you are choosing whose loss will be less detrimental in the long term, even if you expect that those who remain will be less productive.
In one situation I ran into a team that was very frustrated with their automation because it was so slow. After a bit of investigation, I found out they were setting up all of their test data through database migrations, which took 10+ minutes. I asked why they weren't using the API to set up test data, to which they replied that none of them had experience with the API. After hacking away at it in my spare time for a few weeks, I had a perfectly functional API client, and over time the tests were converted away from the migrations.
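For illustration, the pattern looked roughly like this (all names hypothetical; the real client wrapped HTTP calls to the product's API, stubbed here with an in-memory class so the sketch is self-contained):

```python
class ApiClient:
    """Stand-in for a thin wrapper over the application's HTTP API.
    A real implementation would POST/GET against the running app."""

    def __init__(self):
        self._users = {}
        self._next_id = 1

    def create_user(self, name: str, role: str) -> int:
        uid = self._next_id
        self._next_id += 1
        self._users[uid] = {"name": name, "role": role}
        return uid

    def get_user(self, uid: int) -> dict:
        return self._users[uid]

def seed_test_data(client: ApiClient) -> dict:
    """Create only the records one test needs, in seconds,
    instead of replaying the whole migration history."""
    admin = client.create_user("alice", "admin")
    viewer = client.create_user("bob", "viewer")
    return {"admin": admin, "viewer": viewer}
```

The win is that each test seeds exactly the handful of records it needs through the same interface users exercise, rather than rebuilding the whole schema.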
I saw this the other day, and while it might sound cheesy, someone said that QA to them was "quality acceleration", which actually fits fairly well. I have never worked at a company that had enough testers to go around. Keeping a smaller group of centralized testers to handle the really tough technical challenges, and a larger group of embedded testers who evangelize testing practices, was the most successful approach.
In this case, I wouldn't say you absolutely need a full quality team, but having some folks whose primary focus is testing and test automation to help skill up your dev teams sounds reasonable.
Engineers here used to be the ones testing the code, verifying their changes on pre-production environments, and keeping an eye on the deployed code in production to make sure it's working well.
Now I get a sense that engineers do the work up to the point where QA steps in, then throw it over to them and move on. The QA team is new, and so not very well versed in the product; they rely on the engineers a great deal for how to test things, which I get the feeling also ends with the QA team not going out of their way to test things the engineers didn't explicitly say to test. I think it may be playing out in a way where our quality assurance gets worse, and our engineers end up less well-versed in the product and feeling less responsibility for delivering quality, while also requiring another team to manage and spreading work over more people. As a disclaimer, I think we may just have a not-great setup, more so than saying all QA is bad.
"smartphone doesn't boot reliably when the ambient temperature is below 40 degrees fahrenheit" (temperature lowers the battery voltage + high power usage during startup = voltage drops below FCC-mandated minimums for the antennas -> automated SoC shutdown)
"bluetooth controllers won't pair reliably in building 17" (too much interference to pair if there are more than ~100 other bluetooth devices nearby)
The combinatorial explosion of external factors in this kind of bug is insane - you just won't have a good time without dedicated folks whose focus is chasing these down (and I'll take a good QA over a good engineer for this sort of investigation, any day).
The most irritating aspect I notice is the tendency to have to do the same project multiple times because requirements and user/dev-team communication aren't what they should be.
QAs are increasingly facing an uphill battle against information overload that management and typical textbook PM processes are very ill-equipped to detect; there's also the fact that your "contract" QA department does not have incentive alignment. The most horrible devs can be endured by a QA group that cycles out QAs on a 2-3 month basis. The in-house QA group that logs in to the same steaming pile of low-quality crap will soon find themselves motivated to either trim waste or otherwise improve the efficiency and processes where they are.
I can only ever get FTE QAs to deeply learn the business; I generally can't do that with contractors. So if places are doing away with QA... I wouldn't be surprised. And I don't think the upswing in software-originated suffering, from my perspective, is just a coincidence either.
It's now possible to test pretty much anything via automation, although front-end testing is still harder than most and there are some intangible UX issues that can only be experienced by human users.
But for testing APIs? It is normally possible to cover 100% of what is needed via a combination of unit tests and integration tests. This is especially true now that tooling exists that can bring up ephemeral test environments with dependencies such as databases etc.
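A minimal sketch of that ephemeral-environment idea, using an in-memory SQLite database standing in for the real dependency (tools like Testcontainers do the same thing with real Postgres or MySQL in Docker; the table and functions here are illustrative, not from any particular codebase):

```python
import sqlite3
from contextlib import contextmanager

@contextmanager
def ephemeral_db():
    """Each test gets a fresh, throwaway database that
    disappears when the test finishes."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
    try:
        yield conn
    finally:
        conn.close()  # the whole environment vanishes with the test

def create_order(conn, total: float) -> int:
    """The code under test: insert an order, return its id."""
    cur = conn.execute("INSERT INTO orders (total) VALUES (?)", (total,))
    conn.commit()
    return cur.lastrowid

# integration-style test against the ephemeral dependency
with ephemeral_db() as conn:
    oid = create_order(conn, 19.99)
    row = conn.execute(
        "SELECT total FROM orders WHERE id = ?", (oid,)
    ).fetchone()
    assert row[0] == 19.99
```

Because setup and teardown are fully automatic, tests never share state and can run in parallel without a long-lived shared test environment.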
It is a lot of work to maintain all of the tests that we have. However, we have an incredibly good track record of releasing on time and critical bugs are rarely introduced.
Developers write all sorts of tests(TDD + pair programming helps), and when we need an extra pair of eyes, we ask the product team to have a look at it.
But that's about it. QAs are involved more on the creative side of things post-release, or have an independent path to see where things could go wrong, more like doing monkey testing.
Are you a SaaS practicing continuous delivery? Can customers report bugs rapidly? Can you fix them rapidly? If you can reduce the turn around time of fixing a bug to less than 1 hour, you can use your customers to QA.
If so, you should not have a QA department. They only slow things down.
Of course, this depends on the type of business you are in.
Good QA is hard to find, and in general are underpaid. In my opinion, good QA is like good security researchers. They know how to break the system. These people are invaluable.
As a developer, you should want to write tests and encourage people on your team to love it. Good tests let you sleep at night and enjoy your weekends. If your perception is that "writing these tests is a waste of my time", I think that's actually not in your best interest.
I can deploy a prod EKS/CI/CD setup with Terraform scripts, and be deploying code to the public in a morning now.
Like other service jobs, IT is going to contract sooner than later. NoCode infrastructure is where k8s was in 2014.
IT workers thinking in IT patterns as usual is a huge blocker to progress. Elder workers' domain knowledge is stored in DBs and source control everywhere. It's trivial to parse and transform into another syntax.
Congrats; you trained your Copilot replacements without even knowing it.
The hardware product is a network security device that essentially works as a black box, with limited inputs and outputs. It requires so much domain expertise that the small group of us that work on it take an incremental approach both to feature development and to bug fixes, with a fair amount of design review, code review, discussion, prototyping and PoC, etc., before moving to actual testing.
Most recently, I've tended to do the bulk of the development and one of my colleagues the bulk of the testing. When we find errors, there is a lot of screen sharing to review code and logs, determine the best way forward, etc.
Our build process generates ISOs with several layers of security, from completely locked down release instances to a permissive mode with a development account for the privileged access necessary to run onboard diagnostics.
The caveat is that one of our main customers is a security agency with complementary expertise, and they tend to beat the bejeesus out of it.
Overall, we take a similar approach to our software product (design and code reviews, much screen sharing, discussion between development, marketing, and senior management on what it should do and what it should look like, etc.), but at some point it gets turned over to QA. I wrote above that our QA resource was full time from time to time, because when they are not doing QA on the product, they are doing market research and analysis, developing marketing strategy, etc.
From a QA PoV, one major difference between the two products is audience: the hardware product is used by domain experts to do very specific network segmentation and data transfer; the software product is intended for general use by non-IT corporate users - it's a web application for integrated risk management and compliance reviews.
We consider ourselves competent to fully test the hardware product we've updated, because its functionality is essentially binary: either it works or it doesn't, there aren't a lot of variations in use case, and we are best positioned to determine how it should work and whether it is working.
We don't consider ourselves competent to fully test the software product, because we are a) too close to it, b) domain experts, c) not typical users, and because d) we have a lot of experience with a lot of different software.
Our users are often people who mostly use browsers, and occasionally office products. They may be domain experts, but their domains are far from IT (especially the physical security people).
Our QA person is excellent at getting into their shoes, so to speak, imagining how they might approach things, and finding both bugs and ways that our designs, GUIs, and UX fall short and don't help users get where they need to go.
Having someone with that external perspective is invaluable. Having them be full-time when in QA mode helps us move quickly when we need to - they are never the bottleneck, and anything released has been thoroughly beaten up.