Specifically, I would like to track what the requirements/specifications are and how we'll test to make sure they're met. I'm not sure whether that testing would be a mix of unit and integration/regression tests. Honestly, if this is even the wrong track to take, I'd appreciate feedback on what we could be doing instead.
I used IBM Rational DOORS at a previous job and thought it really helped for this, but with a small team I don't think it's likely they'll spring for it. Are there open source options out there, or something else that's easy? I thought we could maybe keep track in a spreadsheet (roughly mimicking DOORS?) or some other file, but I'm sure there would be issues with that as we added to it. Thanks for any feedback!
I like issue tracking that is central to code browsing/change request flows (e.g. GitHub Issues). These issues can then become code change requests against the requirements testing code, then against the implementation code, and then get accepted and become part of the project. As products mature, product ownership folks must periodically review and prune existing requirements they no longer care about, and devs can then refactor as desired.
I don't like overwrought methodologies built around external issue trackers. I don't like tests that are overly concerned with implementation detail or don't have any clear connection to a requirement that product ownership actually cares about. "Can we remove this?" "Who knows, here's a test from 2012 that needs that, but no idea who uses it." "How's the sprint board looking?" "Everything is slipping like usual."
- we track work (doesn't matter where), each story has a list of "acceptance criteria", for example: 'if a user logs in, there's a big red button in the middle of the screen, and if the user clicks on it, then it turns green'
- there's one pull request per story
- each pull request contains end-to-end (or other, but mostly e2e) tests that prove that all ACs are addressed; for example, the test logs in as a user, finds the button on the screen, clicks it, then checks whether it turned green (see the sketch after this list)
- even side effects like outgoing emails are verified
- if the reviewers can't find tests that prove that the ACs are met, then the PR is not merged
- practically no manual testing as anything that a manual tester would do is likely covered with automated tests
- no QA team
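For a concrete picture, here is a minimal sketch of what one of those AC-driven e2e tests can look like, written in Python with Playwright; the URL, selectors, and credentials are invented for illustration and are not the actual system:

```python
# Minimal sketch of an AC-driven end-to-end test (hypothetical app/selectors).
# AC: "if a user logs in, there's a big red button in the middle of the
# screen, and if the user clicks on it, then it turns green"
from playwright.sync_api import sync_playwright

def test_big_red_button_turns_green_when_clicked():
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()

        # Given: a logged-in user
        page.goto("https://app.example.test/login")
        page.fill("#username", "test-user")
        page.fill("#password", "test-pass")
        page.click("button[type=submit]")

        # Then: there is a big red button on the screen
        button = page.locator("#big-button")
        assert "red" in (button.get_attribute("class") or "")

        # When the user clicks it, it turns green
        button.click()
        assert "green" in (button.get_attribute("class") or "")

        browser.close()
```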
And we have a system that provides us a full report of all the tests and links between tests and tickets.
We run all the tests for all the pull requests; that's currently something like 5,000 end-to-end tests (that exercise the whole system) and many more tests of other types. One test run for one PR requires around 50 hours of CPU time to finish, so we use pretty big servers.
All this might sound a bit tedious, but it enables practically full CI/CD for a medical system. The test suite is the most complete and valid specification for the system.
(we're hiring :) )
The PO has to make the hard decisions about what to work on and when, which means understanding the product deeply. The PO should also be able to test the system in order to accept the changes.
Furthermore, you don't really need endless lists of requirements. The most important thing to know is what the next thing you have to work on is.
* https://github.com/doorstop-dev/doorstop
* https://github.com/strictdoc-project/strictdoc
Of course requirements can be linked to test cases and test execution reports, based on a defined and described process.
How to build test cases is another story.
I think the common answer is you don't use a requirements management tool, unless it's a massive system, with System Engineers whose whole job is to manage requirements.
Some combination of tech specs and tests are the closest you'll get. Going back to review the original tech spec (design doc, etc) of a feature is a good way to understand some of the requirements, but depending on the culture it may be out of date.
Good tests are a bit closer to living requirements. They can serve to document the expected behavior and check the system for that behavior.
My opinion would be to not use all the fancy features that automatically tie issues to merge requests, releases, epics, pipelines, etc. It's way too much for a small team that is not doing any type of management.
Just use some basic labels, like "bug" or "feature", and then use labels to denote where they are in the cycle, such as "sprinted", "needs testing", etc. You can use the Boards feature if you want something nice to look at. You can even assign weights and estimates.
You can tie all the issues of a current sprint to a milestone, call the milestone a version or whatever, and set a date. Now you have a history of features/bugs worked on for a version.
In terms of testing, obviously automated tests are best and should just be built into every requirement. Sometimes though tests must be done manually, and in that case attach a Word doc or use the comments feature on an issue for the "test plan".
Here are some options that I've seen in practice.
A: put everything in your repository in a structured way:
pros:
- consistent
- actually used in practice by the engineers
cons:
- hard to work with for non-developers
- too much detail for audits
- hard to combine with documents / e-signatures
B: keep separate Word documents
pros:
- high-level, readable documentation overview
- works with auditor workflows
- PMs can work with these documents as well
cons:
- grows to be inconsistent with your actual detailed requirements
- hard to put in a CI/CD pipeline
A whole different story is the level of detail that you want to put in the requirements. Too much detail and developers feel powerless; too little detail and the QA people feel powerless.
For simple microservice-type projects I've found a .md file, or even mentioning the requirements in the main README.md to be sufficient.
I think it's important to track requirements over the lifetime of the project. Otherwise you'll find devs flip-flopping between different solutions. E.g. in a recent project we were using an open-source messaging system, but it wasn't working for us so we moved to a cloud solution. I noted in the requirements that we wanted a reliable system, and that cost and cloud-independence weren't important requirements. Otherwise, in two years, if I'm gone and a new dev comes on board, they might ask "why are we using proprietary tools for this, why don't we use open source" and spend time refactoring it. Then two years later when they're gone a new dev comes along: "this isn't working well, why aren't we using cloud native tools here"....
Also important to add things that aren't requirements, so that you can understand the tradeoffs made in the software. (In the above case, for example, cost wasn't a big factor, which will help future devs understand "why didn't they go for a cheaper solution?")
Also, if there's a bug, is it even a bug? How do you know if you don't know what the system is supposed to do in the first place?
Jira tickets describe individual changes to the system. That's fine for a brand new system. But after the system is 10 years old, you don't want to have to go through all the tickets to work out what the current desired state is.
Your job is to do what is being asked of you and not screw it up too much.
If they wanted to track requirements, they'd already track them.
People have very fragile egos - if you come in as a junior dev and start suggesting shit - they will not like that.
If you come in as a senior dev and start suggesting shit, they'll not like it, unless your suggestion is 'how about I do your work for you on top of my work, while you get most or all of the credit'.
That is the only suggestion most other people are interested in.
Source: been working for a while.
Another useful tool to use in conjunction with the above is running code coverage on each branch to ensure you don't have new code coming in that is not covered by unit tests.
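If you're in Python land, one low-effort way to wire that up is a coverage gate in CI; a sketch, assuming pytest plus pytest-cov, with the package name and the 80% floor as placeholders:

```python
# Hypothetical CI gate: fail the branch build if test coverage falls
# below a threshold. Assumes pytest and pytest-cov are installed; the
# package name and threshold are placeholders, not a real project.
import subprocess
import sys

THRESHOLD = 80  # minimum acceptable coverage percentage

result = subprocess.run([
    "pytest",
    "--cov=myproject",                # hypothetical package under test
    f"--cov-fail-under={THRESHOLD}",  # pytest-cov fails the run below the floor
])
sys.exit(result.returncode)
```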
For tracking requirements to tests, I'm using TestLink (testlink.org), where you can enter your requirements from existing documents and link them to test cases. The documentation is not perfect; better to start here:
https://www.guru99.com/testlink-tutorial-complete-guide.html
You can go to Bitnami to get a Docker image.
I wish this wasn't the case but it's been the reality in my experience and I've been developing software for 20+ years. I'm the rare developer that will ask questions and write things down. And if it seems necessary I will even model it formally and write proofs.
In some industries it is required to some degree. I've worked in regulated industries where it was required to maintain Standard Operating Procedures documents in order to remain compliant with regulators. These documents will often outline how requirements are gathered, how they are documented, and include forms for signing off that the software version released implements them, etc. There are generally pretty stiff penalties for failing to follow procedure (though for some industries I don't think those penalties are high enough to deter businesses from trying to cut corners).
In those companies that had to track requirements, we used a git repository to manage the documentation and a documentation system built with pandoc to do things like generate issue-tracker IDs into the documentation consistently, etc.
A few enterprising teams at Microsoft and Amazon are stepping up and building tooling that automates the process of checking a software implementation against a formal specification. For them, mistakes that lead to security vulnerabilities or missed service level objectives can spell millions of dollars in losses. As far as I'm aware it's still novel and not a lot of folks are talking about it yet.
I consider myself an advocate for formal methods but I wouldn't say that it's a common practice. The opinions of the wider industry about formal methods are not great (and that might have something to do with the legacy of advocates past over-promising and under-delivering). If anything at least ask questions and write things down. The name of the game is to never be fooled. The challenge is that you're the easiest person to fool. Writing things down, specifications and what not, is one way to be objective with yourself and overcome this challenge.
For requirements, use any kind of issue tracker and connect your commits with issues. People here hate Jira for various reasons, but it gets the job done. Otherwise GitHub Issues would work too (there are problems with GitHub Issues, e.g. no cross-repo issue tracking in a single place, but that's another story).
For QA, you want QA to be part of the progress tracking and have it reflected in Jira/GitHub commits.
One thing I think is of equal importance, if not more, is how the code you delivered is used in the wild. Some sort of analytics.
Zoom out a bit: requirements are what you THINK the user wants. QA is about whether your code CAN do what you think the user wants, plus some safeguards. Analytics is how the user actually behaves in the real world.
A bit off topic here, but QA and analytics are really two sides of the same coin. Yet people treat them as two different domains, with two sets of tools. On one hand, requirements are verified manually through hand-crafted test cases. On the other hand, production behavioural insight is not transformed into future dev/test cases effectively; it is still done manually, if at all.
Think about how many times a user wanders into an untested, undefined interaction that escalates into a support ticket. I'm building a single tool to bridge the gap between product (the requirements and production phases) and quality (testing).
Anyway, there's no point in tracking low-quality requirements that end up being redefined as you build the airplane in flight.
See the post at https://jiby.tech/post/gherkin-features-user-requirements/
I made this into a series of posts about Gherkin, where I introduce people to Cucumber tooling and BDD ideals, and show a low-tech alternative to Cucumber: Gherkin in test comments.
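The low-tech version can be as simple as keeping the Given/When/Then text inside the test itself, so the test file doubles as the requirement record; a rough sketch, with an invented scenario and hypothetical helper functions:

```python
# Rough sketch of "Gherkin in test comments": the requirement text lives in
# the test docstring. The scenario and helpers below are invented.
def test_password_reset_sends_email():
    """
    Feature: Password reset
    Scenario: User requests a reset link
      Given a registered user "alice@example.test"
      When she requests a password reset
      Then a reset email is sent to her address
    """
    user = register_user("alice@example.test")  # hypothetical helper
    request_password_reset(user)                # hypothetical helper
    assert last_sent_email().to == "alice@example.test"
```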
As for actually doing the tracking of feature->test, aside from pure Cucumber tooling, I recommend people have a look at sphinxcontrib-needs:
https://sphinxcontrib-needs.readthedocs.io/en/latest/index.h...
Define in docs a "requirement" block with freeform text (though I put Gherkin in it), then define more "specifications", "tests", etc. with links to each other, and the tool does the graph!
Combined with the very alpha sphinx-collections, it allows jinja templates from arbitrary data:
Write Gherkin in a features/ folder, and make the template generate, for each file under that folder, a sphinxcontrib-needs entry quoting the Gherkin source!
The first piece is the specification documents, which are simple word docs with a predictable format. These cover how the software SHOULD be implemented. From these documents, we automatically generate the mission critical code, which ensures it matches what we say it does in the document. The generator is very picky about the format, so you know right away if you've made a mistake in the spec document. These documents are checked into a repo, so we can tag version releases and get (mostly) reproducible builds.
The second piece is the verification test spreadsheet. We start this by stating all assumptions we make about how the code should work, and invariants that must hold. These then are translated into high level requirements. Requirements are checked using functional tests, which consist of one or many verification tests.
Each functional test defines a sequence of verification tests. Each verification test is a single row in a spreadsheet which contains all the inputs for the test, and the expected outputs. The spreadsheet is then parsed and used to generate what essentially amounts to serialized objects, which the actual test code will use to perform and check the test. Functional test code is handwritten, but is expected to handle many tests of different parameters from the spreadsheet. In this way, we write N test harnesses, but get ~N*M total tests, M being average number of verification tests per functional test.
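A rough sketch of what that spreadsheet-to-tests plumbing can look like; the file name, column names, and harness entry point are assumptions for illustration, not the actual setup described above:

```python
# Sketch of spreadsheet-driven verification tests: each row is one
# verification test, and a handwritten functional-test harness is
# parametrized over the rows (N harnesses, ~N*M total tests).
# The CSV layout and run_operation() are invented.
import csv
import pytest

def load_rows(path="verification_tests.csv"):
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

@pytest.mark.parametrize("row", load_rows())
def test_functional_harness(row):
    result = run_operation(                  # hypothetical harness entry point
        mode=row["mode"],
        input_value=float(row["input"]),
    )
    assert result == pytest.approx(float(row["expected_output"]))
```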
All test outputs are logged, including result, inputs, expected outputs, actual outputs, etc. These form just a part of future submission packages, along with traceability reports we can also generate from the spreadsheet.
All of this is handled with just one Google Doc spreadsheet and a few hundred lines of Python, and saves us oodles while catching tons of bugs. We've gotten to the point where any changes in the spec documents immediately triggers test failures, so we know that what we ship is what we actually designed. Additionally, all the reports generated by the tests are great V&V documentation for regulatory submissions.
In the future, the plan is to move from word docs + spreadsheets to a more complete QMS (JAMA + Jira come to mind), but at the stage we are at, this setup works very well for not that much cost.
As an example, you could have a test that calls some public API and checks that you get the expected response. Assuming your requirement cares about the public API, or the functionality it provides.
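Something along these lines, with the endpoint and expected payload invented:

```python
# Minimal example of checking a requirement at the public-API level.
# The endpoint and expected payload are invented for illustration.
import requests

BASE_URL = "https://api.example.test"  # hypothetical service

def test_health_endpoint_reports_ok():
    resp = requests.get(f"{BASE_URL}/v1/health", timeout=5)
    assert resp.status_code == 200
    assert resp.json()["status"] == "ok"
```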
I've tried to be as detailed as I can without knowing much about your application: assumptions were made, apply salt as needed.
Personally, I like having a test-suite be the documentation for what requirements exist. Removing or significantly modifying a test should always be a business decision. Your local Jira guru will probably disagree
Top-level requirements are system requirements and each of them should be tested through system tests. This usually then drips through the implementation layers from system tests to integration tests, to unit tests.
Regression testing really is just running your test suite every time something changes in order to check that everything still works fine.
Why? Why would you like that? Why you?
If it's not happening, the business doesn't care. Your company is clearly not in a tightly regulated industry. What does the business care about? Better to focus on that instead of struggling to become a QA engineer when the company didn't hire you for that.
Generally, if the team wants to start caring about that, agree to:
1. noting whatever needs to be tested in your tracker
2. writing tests for those things alongside the code changes
3. having code reviews include checking that the right tests were added, too
4. bonus points for making sure code coverage never drops (so no new untested code was introduced)
2. Architecture and design docs explain the “how” to engineering.
3. The work gets broken down to stories and sub-tasks and added to a Scrum/Kanban board. I like Jira, but have also used Asana and Trello.
Testing is just another sub-task, and part of the general definition of done for a story. For larger projects, a project-specific test suite may be useful. Write failing tests. Once they all pass, you have an indication that the project is nearly done.
You can skip to #3 if everyone is aligned on the goals and how you’ll achieve them.
Then only bump up to more complexity or formality as the benefits exceed the cost/pain. For example, a spreadsheet, or perhaps a Google Doc.
If you're lucky enough to have any reqs/specs which have a natural machine-friendly form, like an assertion that X shall be <= 100ms, then go ahead and express that in a structured way and write test code which confirms it, as part of a suite of assertions for all reqs which can be test-automated like this.
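For instance, that "<= 100 ms" requirement can be expressed directly as a test; the function under test and the requirement ID below are stand-ins for whatever your spec actually says:

```python
# Sketch of a machine-checkable requirement: "X shall complete in <= 100 ms".
# The function under test and the requirement ID are stand-ins.
import time

LATENCY_BUDGET_S = 0.100  # REQ-PERF-001 (hypothetical ID): respond within 100 ms

def test_lookup_meets_latency_requirement():
    start = time.perf_counter()
    do_lookup("some-key")  # hypothetical function under test
    elapsed = time.perf_counter() - start
    assert elapsed <= LATENCY_BUDGET_S, f"took {elapsed * 1000:.1f} ms"
```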
What sort of tests you want depends a lot on your system. If you're working on some data processing system where you can easily generate many examples of test input then you'll probably get lots of ROI from setting up lots of regression tests that cover loads of behaviour. If it's a complex system involving hardware or lots of clicking in the UI then it can be very good to invest in that setup but it can be expensive in time and cost. In that case, focus on edge or corner cases.
Then in terms of how you use it, you have a few options depending on the types of test:
- you can run through the tests manually every time you do a release (i.e. manual QA) - just make a copy of the spreadsheet and record the results as you go and BAM you have a test report
- if you have some automated tests like pytest going on, then you could use the mark decorator and tag your tests with the functional test ID(s) that they correspond to, and even generate an HTML report at the end with a pass/fail/skip for your requirements (see the sketch below)
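A sketch of that marker idea; the marker name and IDs are made up, you'd register the marker in pytest.ini or conftest.py to avoid warnings, and a plugin such as pytest-html can turn the run into a shareable report:

```python
# Sketch of tagging tests with the functional-test IDs they cover.
# Marker name and IDs are invented; register the marker in pytest.ini
# (markers = requirement: ...) so pytest doesn't warn about it.
import pytest

@pytest.mark.requirement("FT-012", "FT-013")
def test_user_can_export_report():
    ...

@pytest.mark.requirement("FT-020")
def test_export_requires_login():
    ...
```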
I have seen requirements captured in markdown files, spreadsheets, ticket management systems like Redmine, Pivotal, Jira, GitLab, Azure Devops, GitHub Issues, and home grown systems.
If I had to start a new medical device from scratch today, I would use Notion + https://github.com/innolitics/rdm to capture user needs, requirements, risks, and test cases. Let me know if there is interest and I can make some Notion templates public. I think the ability to easily edit relations without having to use IDs is nice. And the API makes it possible to dump it all to yaml, version control and generate documentation for e-signature when you need it. Add on top of that an easy place to author documentation, non-software engineer interoperability, discoverable SOPs, granular permissions, and I think you have a winning combination.
yshrestha@innolitics.com
I have never seen a requirements tracking software that worked well for large systems with lots of parts. Tracing tests to requirements and monitoring requirements coverage is hard. For projects of the size I work on I think more and more that writing a few scripts that work on some JSON files may be less effort and more useful than customizing commercial systems.
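In that spirit, a minimal traceability check can be a handful of lines; a sketch, assuming requirements live in a JSON file and tests mention requirement IDs somewhere in their source:

```python
# Tiny traceability check: flag any requirement ID from requirements.json
# that no test file mentions. The file layout and ID scheme are assumptions.
import json
import pathlib
import sys

requirements = json.loads(pathlib.Path("requirements.json").read_text())
test_source = "\n".join(
    p.read_text() for p in pathlib.Path("tests").rglob("test_*.py")
)

uncovered = [r["id"] for r in requirements if r["id"] not in test_source]
if uncovered:
    print("Requirements with no referencing test:", ", ".join(uncovered))
    sys.exit(1)
print(f"All {len(requirements)} requirements are referenced by at least one test.")
```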
Of course, don't make the mistake I am guilty of sometimes making, and think you now know better than everyone else just because you've read some things others have not. Gain knowledge, but stay focused on loving the people around you. ("Loving" meaning in the Christian sense of respect, not being selfish, etc; sorry if that is obvious)
Regarding requirements - they are always a live discussion, not just a list of things to do. Do not be surprised when they change, instead plan to manage how they change.
Regarding testing - think of testing as headlights on a car; they show potential problems ahead. Effectively all automated testing is regression testing. Unit tests are great for future developers working on that codebase, but no amount of unit tests will show that a SYSTEM works. You also need integration and exploratory testing. This isn't a matter of doing it right or wrong, it's a matter of team and technical maturity.
A bug is anything that is unexpected to a user. I'm sure this will be controversial, and I'm fine with that.
I really like end-to-end tests for this, because they test the system from a user perspective, which is how many requirements actually come in, not how they are implemented internally. I also like to write tests even for things that can't really break indirectly: it means that someone who changes e.g. some function and thus breaks the test realizes that this is an explicit prior specification they are about to invalidate, and might want to double-check with someone.
The product owner writes User Stories in a specific human-and-machine readable format (Given/when/then). The engineers build the features specified. Then the test author converts the “gherkin” spec into runnable test cases. Usually you have these “three amigos” meet before the product spec is finalized to agree that the spec is both implementable and testable.
You can have a dedicated "test automation" role or just have an engineer build the Acceptance Tests (I like to make it someone other than the person building the feature so you get two takes on interpreting the spec). You keep the tester "black-box", without knowing the implementation details. At the end you deliver both tests and code, and if the tests pass you can feel pretty confident that the happy path works as intended.
The advantage with this system is product owners can view the library of Gherkin specs to see how the product works as the system evolves. Rather than having to read old spec documents, which could be out of date since they don’t actually get validated against the real system.
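If your stack is Python, pytest-bdd is one way to bind the PO-authored Gherkin to runnable step code; a minimal sketch with an invented feature file and hypothetical helpers:

```python
# Minimal pytest-bdd sketch: the Gherkin scenario in features/orders.feature
# is bound to step implementations here. Scenario text and helpers are invented.
from pytest_bdd import scenario, given, when, then

@scenario("features/orders.feature", "Customer cancels an unshipped order")
def test_cancel_unshipped_order():
    pass

@given("an order that has not shipped yet", target_fixture="order")
def unshipped_order():
    return create_order(status="pending")  # hypothetical helper

@when("the customer cancels the order")
def cancel(order):
    cancel_order(order)  # hypothetical helper

@then('the order status is "cancelled"')
def order_is_cancelled(order):
    assert order.status == "cancelled"
```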
A good book for this is “Growing Object-Oriented Software, Guided by Tests” [1], which is one of my top recommendations for junior engineers as it also gives a really good example of OOP philosophy.
The main failure mode I have seen here is not getting buy-in from Product, so the specs get written by Engineering and never viewed by anyone else. It takes more effort to get the same quality of testing with Gherkin, and this is only worthwhile if you are reaping the benefit of non-technical legibility.
All that said, if you do manual release testing, a spreadsheet with all the features and how they are supposed to work, plus a link to where they are automatically tested, could be a good first step if you have high quality requirements. It will be expensive to maintain though.
1: https://smile.amazon.com/Growing-Object-Oriented-Software-Ad...
For a small team you can probably build a workable process in Microsoft Access. I use access to track my own requirements during the drafting stage.
https://www.ansys.com/products/safety-analysis/ansys-medini-...
1. Requirements first written in Excel, later imported to Jama and later imported to HP QC/ALM for manual tests
Pros: Test reports in HP QC helped protect against an IT solution which was not on par with what was needed and requested
Cons: Tests were not helping the delivery (only used as a "defence"), requirements got stale, and overall it was cumbersome to keep two IT systems (Jama, HP QC) up to date
---
2. Jira for implementation stories, with some manual regression tests in TestRail and automated regression tests with no links besides the Jira issue ID in the commit. Polarion was used by hardware and firmware teams but not software teams.
Pros: Having a structured test suite in TestRail aided my work on doing release testing, more lightweight than #1
Cons: Lots of old tests never got removed/updated, no links to requirements in Jira/Polarion for all tests (thereby losing traceability)
---
3. Jira with Zephyr test management plugin for manual tests, automated tests with no links besides Jira issue ID in commit
Pros: Relatively lightweight process, since a Jira plugin was used
Cons: Test cases in Zephyr were not updated enough by previous team members
---
4. Enterprise Tester for requirements/test plans, Katalon for e2e tests by a separate QA team, with automated tests carrying the Jira issue ID in the commit (no links to Enterprise Tester) inside the team
Pros: Again, rather lightweight when it comes to automated regression tests inside team
Cons: Process not optimal; Enterprise Tester was used only for documentation, not for actual testing
---
Today, there are good practices which help build quality in - DevOps, GitOps, automated tests (on several levels), static code analysis, metrics from production... Try to leverage those to help guide what tests need to be written.
Many times requirements/user stories are incomplete, no longer valid or simply wrong. Or a PO may lack some written communication skills.
Overall, I want to focus on delivering value (mainly through working software) rather than documenting too much, so I prefer a lightweight process: issue ID on the commit along with the automated tests. Bonus points if you use e.g. markers/tags in a test framework like JUnit/pytest to group tests and link them to e.g. a Jira issue ID.
Requirements are codified into test cases. After signoff of the spec/design/test plan is complete, there's no going back and checking.
Based on our experience with some heavyweight requirements management tools, we tried to develop quite the opposite: a simple requirements management tool. It is not open source, but at least it has an open JSON format (good for git/svn), integration with Jira, ReqIF export/import, quick definition of requirements, attributes, links, and various views. See https://reqview.com
Bonus point: you can run code coverage with just the tests for a certain feature and see which code is responsible for supporting that feature.
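One way to do that with pytest-cov, assuming tests are already tagged per feature with a marker (names invented):

```python
# Sketch: run only the tests tagged for one feature under coverage, so the
# report shows which code that feature actually exercises. Marker and
# package names are invented.
import subprocess

subprocess.run([
    "pytest", "-m", "feature_export",  # only tests tagged for this feature
    "--cov=myproject",                 # hypothetical package under test
    "--cov-report=html",               # the HTML report highlights the lines hit
])
```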
Used that 15-20 years ago and loved it. Any present day insight on this?
I've been writing software professionally for over 20 years, in all kinds of different industries. I've had to handle thousands of lines of specs, with entire teams of manual testers trying to check them. I've worked at places where all requirements were executable, leading to automated test suites that were easily 10 times bigger than the production code. Other places just hoped that the existing code was added for a reason, and at best kept old working tickets. And in other places, we've had no tracking whatsoever, and no tests. I can't say that anyone was wrong.
Ultimately all practices are there to make sure that you produce code that fits the purpose. If your code is an API with hundreds of thousands of implementers, which run billions of dollars a month through it, and you have thousands of developers messing with said API, the controls that you'll need to make sure the code fits purpose is going to be completely different than what you are going to need if, say, you are working on an indie video game with 5 people.
Having long-term requirements tracking can be very valuable too! A big part of documentation, executable or not, is that it has to be kept up to date and stay valuable: it's a pretty bad feeling to have to deal with tens of thousands of lines of code supporting a feature nobody is actually using, or to read documentation that is so out of date that you end up with the completely wrong idea and lose more time than if you had spent the same time reading a newspaper. Every control, every process, has its costs along with its advantages, and the right tradeoff for you could have absolutely nothing to do with the right tradeoff somewhere else. I've seen plenty of problems over the years precisely because someone with responsibility changed organizations, moved to a place that is very different, and attempted to follow the procedures that made a lot of sense in the other organization but are just not a good fit for their new destination.
So really, if your new small team is using completely different practices than your previous place, which was Enterprise enough to use any IBM Rational product, I would spend quite a bit of time trying to figure out why your team is doing what they do, make sure that other people agree that the problems that you think you are having are the same other people in the team are seeing, and only then start trying to solve them. Because really, even in a small team, the procedures that might make sense for someone offering a public API, vs someone making a mobile application that is trying to gain traction in a market would be completely different.
TLDR adding structure isn’t always the answer. Your team/org needs to be open to that.