HACKER Q&A
📣 lovehatesoft

How do you keep track of software requirements and test them?


I'm a junior dev who recently joined a small team that doesn't seem to have much in place for tracking requirements or how they're tested, and I was wondering if anybody has recommendations.

Specifically, I would like to track what the requirements/specifications are and how we'll test to make sure they're met. I don't know whether that would be a mix of unit and integration/regression tests. Honestly, though, if this is even the wrong track to take, I'd appreciate feedback on what we could be doing instead.

I used IBM Rational DOORS at a previous job and thought it really helped with this, but with a small team I don't think it's likely they'll spring for it. Are there open source options out there, or something else that's easy? I thought we could maybe keep track in a spreadsheet (to approximate DOORS?) or some other file, but I'm sure there would be issues with that as we added to it. Thanks for any feedback!


  👤 flyingfences Accepted Answer ✓
In a safety-critical industry, requirements tracking is very important. At my current employer, all of our software has to be developed and verified in accordance with DO-178 [0]. We have a dedicated systems engineering team who develop the system requirements from which we, the software development team, develop the software requirements; we have a dedicated software verification team (separate from the development team) who develop and execute the test suite for each project. We use Siemens's Polarion to track the links between requirements, code, and tests, and it's all done under the supervision of an in-house FAA Designated Engineering Representative. Boy is it all tedious, but there's a clear point to it and it catches all the bugs.

[0] https://en.wikipedia.org/wiki/DO-178C


👤 lotyrin
When it's technically feasible, I like every repo having alongside it tests for the requirements from an external business user's point of view. If it's an API, then the requirements/tests should be specified in terms of the API, for instance. If it's a UI, then the requirements should be specified in terms of the UI. You can either have documentation blocks next to tests that describe things in human terms, or use one of the DSLs that make the terms and the code the same thing if you find that ergonomic for your team.
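
A minimal sketch of that idea using pytest: the requirement sits in the docstring in the business user's terms, and the test exercises it only through the public API (the endpoint, payload, and api_client fixture here are hypothetical, not something from the thread):

    # Hypothetical pytest example: the requirement lives next to the test,
    # phrased in business terms; the test only touches the public API.
    def test_order_total_includes_shipping(api_client):
        """Requirement: the order total returned by the API includes shipping,
        so invoices match what the customer saw at checkout."""
        response = api_client.post(
            "/orders", json={"items": ["sku-123"], "ship_to": "US"}
        )
        body = response.json()
        assert response.status_code == 201
        assert body["total"] == body["subtotal"] + body["shipping"]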

I like issue tracking that is central to code browsing/change request flows (e.g. GitHub Issues). These issues then become change requests against the requirements-testing code, then against the implementation code, and once accepted they become part of the project. As products mature, product ownership folks must periodically review and prune existing requirements they no longer care about, and devs can then refactor as desired.

I don't like overwrought methodologies built around external issue trackers. I don't like tests that are overly concerned with implementation detail or don't have any clear connection to a requirement that product ownership actually cares about. "Can we remove this?" "Who knows, here's a test from 2012 that needs that, but no idea who uses it." "How's the sprint board looking?" "Everything is slipping like usual."


👤 sz4kerto
What we do:

- we track work (doesn't matter where), and each story has a list of "acceptance criteria", for example: 'if a user logs in, there's a big red button in the middle of the screen, and if the user clicks on it, then it turns green'

- there's one pull request per story

- each pull request contains end-to-end (or other, but mostly e2e) tests that prove all the ACs are addressed; for example, the test logs in as a user, finds the button on the screen, clicks it, then checks whether it turned green (see the sketch after this list)

- even side effects like outgoing emails are verified

- if the reviewers can't find tests that prove that the ACs are met, then the PR is not merged

- practically no manual testing as anything that a manual tester would do is likely covered with automated tests

- no QA team
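
As a rough illustration (not the commenter's actual stack), one of those AC-proving e2e tests could look like this with Playwright for Python; the story ID, URL, credentials, and selectors are all hypothetical:

    # pip install playwright && playwright install chromium
    from playwright.sync_api import sync_playwright

    def test_story_1234_big_red_button_turns_green():
        """AC (STORY-1234): after login, a big red button is shown in the
        middle of the screen; clicking it turns the button green."""
        with sync_playwright() as p:
            browser = p.chromium.launch()
            page = browser.new_page()
            page.goto("https://app.example.test/login")
            page.fill("#username", "test-user")
            page.fill("#password", "test-password")
            page.click("button[type=submit]")

            button = page.locator("#big-button")
            assert "red" in (button.get_attribute("class") or "")
            button.click()
            assert "green" in (button.get_attribute("class") or "")
            browser.close()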

And we have a system that provides us with a full report of all the tests and the links between tests and tickets.

We run all the tests for every pull request; that's currently something like 5,000 end-to-end tests (that exercise the whole system) and many more tests of other types. One test run for one PR takes around 50 hours of CPU time to finish, so we use pretty big servers.

All this might sound a bit tedious, but it enables practically CI/CD for a medical system. The test suite is the most complete and valid specification of the system.

(we're hiring :) )


👤 corpMaverick
Let the product owner (PO) handle them.

The PO has to make the hard decisions about what to work on and when, so he/she must understand the product deeply. The PO should also be able to test the system in order to accept the changes.

Furthermore, you don't really need endless lists of requirements. The most important thing to know is the next thing you have to work on.


👤 5440
I review software for 3-5 companies per week as part of FDA submission packages. The FDA requires traceability between requirements and validation. While many small companies just use Excel spreadsheets for traceability, the majority of large companies seem to use JIRA tickets alongside Confluence. Those aren't the only methods, but they cover about 90% of the packages I review.

👤 stefanoco
Zooming in on "requirements management" (and out of "developing test cases"), there are a couple of open source projects that specifically address this important branch of software development. I like both approaches and I think they suit different situations. By the way, the creators of these two projects are having useful conversations about aspects of their solutions, so you might want to try both and see which one leads from your point of view.

* https://github.com/doorstop-dev/doorstop
* https://github.com/strictdoc-project/strictdoc

Of course requirements can be linked to test cases and test execution reports, based on a defined and described process.

How to build test cases is another story.


👤 zild3d
I was at Lockheed Martin for a few years, where Rational DOORS was used. Now I'm at a smaller startup (quite happy to never touch DOORS again).

I think the common answer is that you don't use a requirements management tool, unless it's a massive system with systems engineers whose whole job is to manage requirements.

Some combination of tech specs and tests are the closest you'll get. Going back to review the original tech spec (design doc, etc) of a feature is a good way to understand some of the requirements, but depending on the culture it may be out of date.

Good tests are a bit closer to living requirements. They can serve to document the expected behavior and check the system for that behavior.


👤 jcon321
GitLab. Just use Issues; you can do everything with the free tier. (It's called the "Issues workflow". GitLab goes a little overboard with it, but I'd look at screenshots of people's issue lists to get examples.)

My opinion would be to not use all the fancy features that automatically tie issues to merge requests, releases, epics, pipelines, etc. It's way too much for a small team that is not doing any type of management.

Just use some basic labels, like "bug" or "feature", and then use labels to denote where issues are in the cycle, such as "sprinted", "needs testing", etc. You can use the Boards feature if you want something nice to look at, and even assign weights and estimates.

You can tie all the issues of the current sprint to a milestone, call the milestone a version or whatever, and set a date. Now you have a history of the features/bugs worked on for each version.

In terms of testing, obviously automated tests are best and should just be built into every requirement. Sometimes, though, tests must be done manually; in that case, attach a Word doc or use the comments on the issue as the "test plan".


👤 jtwaleson
This is super interesting and incredibly difficult. In some regulated environments, like medical devices, you MUST keep track of requirements in your product's technical documentation. I work on a Software Medical Device product and have seen tons of workflows at similar companies. There are many different approaches to this and none that I have seen work really well. In my view this field is ripe for disruption and would benefit from standardization and better tooling.

Here are some options that I've seen in practice.

A: put everything in your repository in a structured way:

pros:
- consistent
- actually used in practice by the engineers

cons:
- hard to work with for non-developers
- too much detail for audits
- hard to combine with documents / e-signatures

B: keep separate word documents

pros:
- high-level, readable documentation overview
- works with auditor workflows
- PMs can work with these documents as well

cons:
- grows to be inconsistent with your actual detailed requirements
- hard to put in a CI/CD pipeline

A whole different story is the level of detail you want to put in the requirements. Too much detail and developers feel powerless; too little and the QA people feel powerless.


👤 adrianmsmith
I think it's important to keep requirements in Git along with the source code. That way when you implement a new feature you can update the requirements and commit it along with the code changes. When the PR is merged, code and requirements both get merged (no chance to forget to update e.g. a Confluence document). Each branch you check out is going to have the requirements that the code in that branch is supposed to implement.

For simple microservice-type projects I've found a .md file, or even mentioning the requirements in the main README.md to be sufficient.

I think it's important to track requirements over the lifetime of the project. Otherwise you'll find devs flip-flopping between different solutions. E.g. in a recent project we were using an open-source messaging system, but it wasn't working for us so we moved to a cloud solution. I noted in the requirements that we wanted a reliable system, and that cost and cloud-independence weren't important requirements. Otherwise, in two years, if I'm gone and a new dev comes on board, they might ask "why are we using proprietary tools for this, why don't we use open source" and spend time refactoring it. Then two years later when they're gone, a new dev comes along: "this isn't working well, why aren't we using cloud native tools here"....

Also important to add things that aren't requirements, so that you can understand the tradeoffs made in the software. (In the above case, for example, cost wasn't a big factor, which will help future devs understand "why didn't they go for a cheaper solution?")

Also, if there's a bug, is it even a bug? How do you know if you don't know what the system is supposed to do in the first place?

Jira tickets describe individual changes to the system. That's fine for a brand new system. But after the system is 10 years old, you don't want to have to go through all the tickets to work out what the current desired state is.


👤 alexashka
As a junior dev, this isn't your job.

Your job is to do what is being asked of you and not screw it up too much.

If they wanted to track requirements, they'd already track them.

People have very fragile egos - if you come in as a junior dev and start suggesting shit - they will not like that.

If you come in as a senior dev and start suggesting shit, they'll not like it, unless your suggestion is 'how about I do your work for you on top of my work, while you get most or all of the credit'.

That is the only suggestion most other people are interested in.

Source: been working for a while.


👤 scottyah
We use an issue tracking system like Jira, Trello, Asana, etc., and each "ticket" is a unique identifier followed by a brief description. You can add all sorts of other labels, descriptions, etc. to better map to the requirements you get. Next, all git branches are named exactly the same way as the corresponding ticket. Unit tests are created under the same branch. After getting PR'd in, the code and unit tests can always be matched up to the ticket and therefore the requirement. For us, this system is good enough to replace the usual plethora of documentation the military requires. It does require strict adherence that can take extra time sometimes, but all devs on my team prefer it to writing more robust documentation.

Another useful tool, in conjunction with the above, is running code coverage on each branch to ensure you don't have new code coming in that is not covered by unit tests.


👤 damorin
With a small team, I'm using an open source tool called reqflow (https://goeb.github.io/reqflow/) to trace requirements to source code via a Doxygen keyword (//! \ref RQ-xxxx). It generates a traceability matrix and is quite simple to use (perfect for a small team). In my case, I'm using grep on the source code to create the traceability matrix.

For tracing requirements to tests, I'm using TestLink (testlink.org), where you can enter your requirements from existing documents and link them to test cases. The documentation is not perfect; better to start here:

https://www.guru99.com/testlink-tutorial-complete-guide.html

You can go to Bitnami to get a Docker image.


👤 agentultra
Depends on the industry. In most web services, applications, and desktop software shops, you don't. You track them informally through various tests your team may or may not maintain (ugh), and you'll hardly ever encounter any documentation or specification, formal or informal, of any kind, ever.

I wish this wasn't the case but it's been the reality in my experience and I've been developing software for 20+ years. I'm the rare developer that will ask questions and write things down. And if it seems necessary I will even model it formally and write proofs.

In some industries it is required to some degree. I've worked in regulated industries where it was required to maintain Standard Operating Procedure documents in order to remain compliant with regulators. These documents will often outline how requirements are gathered, how they are documented, and include forms for signing off that the released software version implements them, etc. There are generally pretty stiff penalties for failing to follow procedure (though in some industries I don't think those penalties are high enough to deter businesses from trying to cut corners).

In the companies that had to track requirements, we used a git repository to manage the documentation, plus a documentation pipeline built with pandoc to do things like insert issue-tracker IDs into the documentation consistently, etc.

A few enterprising teams at Microsoft and Amazon are stepping up and building tooling that automates the process of checking a software implementation against a formal specification. For them, mistakes that lead to security vulnerabilities or missed service level objectives can spell millions of dollars in losses. As far as I'm aware it's still novel and not a lot of folks are talking about it yet.

I consider myself an advocate for formal methods but I wouldn't say that it's a common practice. The opinions of the wider industry about formal methods are not great (and that might have something to do with the legacy of advocates past over-promising and under-delivering). If anything at least ask questions and write things down. The name of the game is to never be fooled. The challenge is that you're the easiest person to fool. Writing things down, specifications and what not, is one way to be objective with yourself and overcome this challenge.


👤 oumua_don17
GitLab does have requirements management integrated, but it's not part of the free tier.

[1] https://docs.gitlab.com/ee/user/project/requirements/


👤 a_c
It depends on where you are in your career and what the industry offers at the time.

For requirements, use any kind of issue tracker and connect your commits with issues. People here hate Jira for various reasons, but it gets the job done. Otherwise GitHub Issues would work (there are problems with GitHub Issues, e.g. cross-repo issue tracking in a single place, but that's another story).

For QA, you want QA to be part of the progress tracking and to have it reflected in Jira/GitHub commits.

One thing I think is equally important, if not more so, is how the code you delivered is used in the wild. Some sort of analytics.

Zooming out a bit: requirements are what you THINK the user wants. QA is about whether your code CAN do what you think the user wants, plus some safeguards. Analytics is how the user actually behaves in the real world.

A bit off topic here, but QA and analytics are really two sides of the same coin. Yet people treat them as two different domains with two sets of tools. On one hand, requirements are verified manually through hand-crafted test cases. On the other, production behavioural insight is not transformed into future dev/test cases effectively; it is still done manually, if at all.

Think about how many times a user wanders into an untested, undefined interaction that escalates into a support ticket. I'm building a single tool to bridge the gap between product (the requirements and production phases) and quality (testing).


👤 CodeWriter23
My first suggestion: wait it out for an initial period and see how much the "requirements" align with the results. Based on my experience, about 3/4 of the time those stating the requirements have no idea what they actually want. I can usually increase the odds of the result matching the actual requirements by interviewing the users / requirement generators.

Anyway, there's no point in tracking low-quality requirements that end up being redefined as you build the airplane in flight.


👤 FrenchyJiby
Having had a similar discussion at work recently, I've written in favour of using Gherkin features to gather high-level requirements (and sometimes a bit of specification), mostly stored in Jira epics to clarify what's being asked.

See the post at https://jiby.tech/post/gherkin-features-user-requirements/

I made this into a series of posts about Gherkin, where I introduce people to Cucumber tooling and BDD ideals, and show a low-tech alternative to Cucumber: Gherkin in test comments.

As for actually doing the tracking of feature->test, aside from pure Cucumber tooling, I recommend people have a look at sphinxcontrib-needs:

https://sphinxcontrib-needs.readthedocs.io/en/latest/index.h...

Define in the docs a "requirement" block with freeform text (though I put Gherkin in it), then define further "specifications", "tests", etc. with links to each other, and the tool draws the graph!

Combined with the very alpha sphinx-collections, it allows Jinja templates driven by arbitrary data:

Write Gherkin in a features/ folder and have the template generate, for each file under that folder, a sphinxcontrib-needs entry with the Gherkin source quoted!

https://sphinx-collections.readthedocs.io/en/latest/


👤 idealmedtech
We're an FDA-regulated medical device startup with a pretty low budget for the moment. Our current setup is two-pronged, in-house, and automated.

The first piece is the specification documents, which are simple word docs with a predictable format. These cover how the software SHOULD be implemented. From these documents, we automatically generate the mission critical code, which ensures it matches what we say it does in the document. The generator is very picky about the format, so you know right away if you've made a mistake in the spec document. These documents are checked into a repo, so we can tag version releases and get (mostly) reproducible builds.

The second piece is the verification test spreadsheet. We start this by stating all the assumptions we make about how the code should work, and the invariants that must hold. These are then translated into high-level requirements. Requirements are checked using functional tests, each of which consists of one or many verification tests.

Each functional test defines a sequence of verification tests. Each verification test is a single row in a spreadsheet which contains all the inputs for the test and the expected outputs. The spreadsheet is then parsed and used to generate what essentially amounts to serialized objects, which the actual test code uses to perform and check the test. Functional test code is handwritten, but is expected to handle many tests with different parameters from the spreadsheet. In this way, we write N test harnesses but get ~N*M total tests, M being the average number of verification tests per functional test.
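
A rough sketch of that spreadsheet-to-parameterized-tests pattern using pytest, assuming the verification sheet is exported to CSV; the column names, file path, and harness registry below are hypothetical, not the setup described above:

    import csv
    import pytest

    def load_verification_tests(path="verification_tests.csv"):
        """Each row is one verification test: its ID, the functional test
        (harness) it belongs to, the inputs, and the expected output."""
        with open(path, newline="") as f:
            return list(csv.DictReader(f))

    # Handwritten functional-test harnesses, keyed by the name used in the
    # sheet; each takes the raw inputs cell and returns the observed output.
    HARNESSES = {
        "FT-001": lambda inputs: inputs.upper(),  # placeholder harness
    }

    @pytest.mark.parametrize(
        "row", load_verification_tests(), ids=lambda r: r["test_id"]
    )
    def test_verification(row):
        harness = HARNESSES[row["functional_test"]]
        assert harness(row["inputs"]) == row["expected_output"]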

All test outputs are logged, including result, inputs, expected outputs, actual outputs, etc. These form just a part of future submission packages, along with traceability reports we can also generate from the spreadsheet.

All of this is handled with just one Google spreadsheet and a few hundred lines of Python, and it saves us oodles while catching tons of bugs. We've gotten to the point where any change in the spec documents immediately triggers test failures, so we know that what we ship is what we actually designed. Additionally, the reports generated by the tests are great V&V documentation for regulatory submissions.

In the future, the plan is to move from word docs + spreadsheets to a more complete QMS (JAMA + Jira come to mind), but at the stage we are at, this setup works very well for not that much cost.


👤 amarant
I'd go for integration or end-to-end tests, depending on your application. Name each test after a requirement and make sure the test ensures the entirety of that requirement is fulfilled as intended (but avoid testing the implementation).

As an example, you could have a test that calls some public API and checks that you get the expected response. Assuming your requirement cares about the public API, or the functionality it provides.

I've tried to be as detailed as I can without knowing much about your application: assumptions were made, apply salt as needed.

Personally, I like having the test suite be the documentation for what requirements exist. Removing or significantly modifying a test should always be a business decision. Your local Jira guru will probably disagree.


👤 mytailorisrich
It's very useful to keep track of changes and to have room for text to describe and explain, so for me the simplest approach would not be a spreadsheet but a git repo with one file per requirement, grouped into categories through simple folders. You can still have a top-level spreadsheet to summarise, as long as you remember to keep it up to date.

Top-level requirements are system requirements, and each of them should be tested through system tests. This then usually trickles down through the implementation layers, from system tests to integration tests to unit tests.

Regression testing really is just running your test suite every time something changes in order to check that everything still works fine.


👤 drewcoo
> I would like to track what the requirements/specifications are, and how we'll test to make sure they're met

Why? Why would you like that? Why you?

If it's not happening, the business doesn't care. Your company is clearly not in a tightly regulated industry. What does the business care about? Better to focus on that instead of struggling to become a QA engineer when the company didn't hire you for that.

Generally, if the team wants to start caring about that, agree to:

1. noting whatever needs to be tested in your tracker

2. writing tests for those things alongside the code changes

3. having code reviews include checking that the right tests were added, too

4. bonus points for making sure code coverage never drops (so no new untested code was introduced)


👤 clintonb
1. Start with a product/project brief that explains the who, why, and what of the project at a high level to ensure the business is aligned.

2. Architecture and design docs explain the “how” to engineering.

3. The work gets broken down to stories and sub-tasks and added to a Scrum/Kanban board. I like Jira, but have also used Asana and Trello.

Testing is just another sub-task, and part of the general definition of done for a story. For larger projects, a project-specific test suite may be useful. Write failing tests. Once they all pass, you have an indication that the project is nearly done.

You can skip to #3 if everyone is aligned on the goals and how you’ll achieve them.


👤 jrowley
At my work we've needed a QMS and requirements traceability. We first implemented it in Google Docs via AODocs. Now we've moved to Jira + Zephyr for test management + Enzyme. I can't say I recommend it.

👤 PebblesHD
Given the large, monolithic legacy nature of our backend, we use JIRA for feature tracking, and each story gets a corresponding functional test implemented in CucumberJS, with the expectation that once a ticket is closed as complete, its test is already part of 'the test suite' we run during releases. Occasionally the tests flake (it's all just WebDriver under the hood), so they require maintenance, but covering the entire codebase with manual tests, even if well documented, would take days, so this is by far our preferred option.

👤 syngrog66
"vi reqs.txt" is ideal default baseline

Then only bump up to more complexity or sophistication as the benefits exceed the cost/pain: for example, a spreadsheet, or perhaps a Google Doc.

If you're lucky enough to have any reqs/specs with a natural machine-friendly form, like an assertion that X shall be <= 100 ms, then go ahead and express that in a structured way and write test code that confirms it, as part of a suite of assertions for all the reqs that can be automated like this.
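
A minimal sketch of that kind of machine-checked requirement with pytest (the requirement ID, the 100 ms threshold, and the call being timed are placeholders, not from the thread):

    import time

    # Machine-friendly requirements expressed as structured data.
    REQUIREMENTS = {
        "RQ-017": {"text": "lookup responds within 100 ms", "max_ms": 100},
    }

    def timed_call_ms(fn, *args):
        """Time a single call to the system under test, in milliseconds."""
        start = time.perf_counter()
        fn(*args)
        return (time.perf_counter() - start) * 1000

    def test_rq_017_lookup_latency():
        # Placeholder system under test; swap in the real call being constrained.
        elapsed = timed_call_ms(sorted, list(range(10_000)))
        assert elapsed <= REQUIREMENTS["RQ-017"]["max_ms"]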


👤 oddeyed
In a small team, I have found that a simple spreadsheet of tests can go a long way. Give it a fancy name like "Subcomponent X Functional Test Specification" and have one row per requirement. Give them IDs (e.g. FNTEST0001).

What sort of tests you want depends a lot on your system. If you're working on some data processing system where you can easily generate many examples of test input then you'll probably get lots of ROI from setting up lots of regression tests that cover loads of behaviour. If it's a complex system involving hardware or lots of clicking in the UI then it can be very good to invest in that setup but it can be expensive in time and cost. In that case, focus on edge or corner cases.

Then, in terms of how you use it, you have a few options depending on the types of test:

- you can run through the tests manually every time you do a release (i.e. manual QA) - just make a copy of the spreadsheet and record the results as you go and BAM you have a test report

- if you have some automated tests like pytest going on, then you could use the mark decorator and tag your tests with the functional test ID(s) they correspond to, and even generate an HTML report at the end with a pass/fail/skip for your requirements
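
A minimal sketch of that marker approach (the marker name and IDs are made up; register the marker under [pytest] markers in pytest.ini to avoid warnings):

    import pytest

    @pytest.mark.fntest("FNTEST0001")
    def test_login_shows_dashboard():
        ...  # drive the system under test and assert on the outcome

    @pytest.mark.fntest("FNTEST0002", "FNTEST0003")
    def test_export_respects_date_filter():
        ...  # one test can cover several functional-test IDs

    # conftest.py: record the IDs per test so a report (e.g. the built-in
    # JUnit XML output) can later be grouped by requirement.
    def pytest_collection_modifyitems(items):
        for item in items:
            ids = [arg for m in item.iter_markers("fntest") for arg in m.args]
            item.user_properties.append(("fntest_ids", ids))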


👤 yshrestha
Been working as a consultant and engineer on FDA regulated software for about 8 years now. I have seen strategies from startups to huge companies.

I have seen requirements captured in markdown files, spreadsheets, ticket management systems like Redmine, Pivotal, Jira, GitLab, Azure Devops, GitHub Issues, and home grown systems.

If I had to start a new medical device from scratch today, I would use Notion + https://github.com/innolitics/rdm to capture user needs, requirements, risks, and test cases. Let me know if there is interest and I can make some Notion templates public. I think the ability to easily edit relations without having to use IDs is nice. And the API makes it possible to dump it all to YAML, version-control it, and generate documentation for e-signature when you need it. Add on top of that an easy place to author documentation, non-software-engineer interoperability, discoverable SOPs, and granular permissions, and I think you have a winning combination.

yshrestha@innolitics.com


👤 spaetzleesser
We use a system called Cockpit. It's terrible, to say the least.

I have never seen requirements tracking software that worked well for large systems with lots of parts. Tracing tests to requirements and monitoring requirements coverage is hard. For projects of the size I work on, I increasingly think that writing a few scripts that work on some JSON files may be less effort and more useful than customizing commercial systems.
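
As a rough illustration of that scripts-over-JSON idea (the file layout, field names, and the REQ-xxxx tag convention are all invented here), a small traceability check could be as simple as:

    import json
    import pathlib
    import re

    def load_requirements(req_dir="requirements"):
        """One JSON file per requirement, each with at least an "id" field."""
        reqs = {}
        for path in pathlib.Path(req_dir).glob("*.json"):
            data = json.loads(path.read_text())
            reqs[data["id"]] = data
        return reqs

    def find_test_references(test_dir="tests"):
        """Map requirement IDs to the test files that mention them."""
        refs = {}
        for path in pathlib.Path(test_dir).rglob("*.py"):
            for req_id in re.findall(r"REQ-\d{4}", path.read_text()):
                refs.setdefault(req_id, []).append(str(path))
        return refs

    if __name__ == "__main__":
        references = find_test_references()
        for req_id in sorted(load_requirements()):
            tests = references.get(req_id, [])
            status = "ok     " if tests else "MISSING"
            print(f"{status} {req_id}: {len(tests)} test file(s)")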


👤 superjan
We have Word documents for requirements and (manual) test cases, plus a self-written audit tool that checks the links between them and converts them into hyperlinked, searchable HTML. It's part of the daily build. We are mostly happy with it. It is nice to know that we can switch to a better tool at any time (after all, our requirements have an "API"), but we still have not found one.

👤 uticus
Since you mention you're a junior dev, I wanted to suggest taking the long road: (1) listen to what others say (you're already doing that by asking here, but don't overlook coworkers much closer to you) and (2) start reading on the subject. Might I suggest Eric Evans' "Domain-Driven Design" as a starting point? And don't stop there. Reading is not a quick, easy path, but you will benefit from those who have gone before you.

Of course, don't make the mistake I am guilty of sometimes making, and think you now know better than everyone else just because you've read some things others have not. Gain knowledge, but stay focused on loving the people around you. ("Loving" meaning in the Christian sense of respect, not being selfish, etc; sorry if that is obvious)


👤 csours
You may be aware of this, but this is as much a social/cultural discussion as it is a technical discussion.

Regarding requirements - they are always a live discussion, not just a list of things to do. Do not be surprised when they change, instead plan to manage how they change.

Regarding testing - think of testing as headlights on a car; they show potential problems ahead. Effectively all automated testing is regression testing. Unit tests are great for future developers working on that codebase, but no amount of unit tests will show that a SYSTEM works. You also need integration and exploratory testing. This isn't a matter of doing it right or wrong, it's a matter of team and technical maturity.

A bug is anything that is unexpected to a user. I'm sure this will be controversial, and I'm fine with that.


👤 smoe
For smaller teams/projects I like to track as much of the requirements as possible as code, because of how hard it is to keep anything written in natural language up to date and to keep a useful history of it.

I really like end-to-end tests for this, because they test the system from a user perspective, which is how many requirements actually come in, rather than in terms of how they are implemented internally. I also like to write tests for things that can't actually break indirectly, because it means that someone who changes e.g. some function and thereby breaks the test realizes that this is an explicit prior specification they are about to invalidate, and might want to double-check with someone.


👤 theptip
One framework that is appealing but requires organizational discipline is Acceptance Testing with Gherkin.

The product owner writes User Stories in a specific human-and-machine-readable format (Given/When/Then). The engineers build the features specified. Then the test author converts the "Gherkin" spec into runnable test cases. Usually you have these "three amigos" meet before the product spec is finalized to agree that the spec is both implementable and testable.
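
As one concrete (and entirely hypothetical) way to wire that up in Python, pytest-bdd binds a Given/When/Then spec to step functions; the feature text, step wording, and fake app fixture below are invented for illustration:

    import pytest
    from pytest_bdd import scenarios, given, when, then, parsers

    # features/password_reset.feature (hypothetical):
    #   Feature: Password reset
    #     Scenario: User requests a reset link
    #       Given a registered user "ada@example.test"
    #       When she requests a password reset
    #       Then a reset email is sent to "ada@example.test"
    scenarios("features/password_reset.feature")

    @pytest.fixture
    def app():
        # Placeholder for the real system driver (API client, browser, ...).
        class FakeApp:
            def __init__(self):
                self.users, self.outbox = set(), []
            def register(self, email):
                self.users.add(email)
            def request_reset(self, email):
                if email in self.users:
                    self.outbox.append(email)
        return FakeApp()

    @given(parsers.parse('a registered user "{email}"'), target_fixture="email")
    def registered_user(app, email):
        app.register(email)
        return email

    @when("she requests a password reset")
    def request_reset(app, email):
        app.request_reset(email)

    @then(parsers.parse('a reset email is sent to "{email}"'))
    def reset_email_sent(app, email):
        assert email in app.outbox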

You can have a dedicated "test automation" role or just have an engineer build the Acceptance Tests (I like to make it someone other than the person building the feature so you get two takes on interpreting the spec). You keep the tester "black-box", without knowledge of the implementation details. At the end you deliver both tests and code, and if the tests pass you can feel pretty confident that the happy path works as intended.

The advantage with this system is product owners can view the library of Gherkin specs to see how the product works as the system evolves. Rather than having to read old spec documents, which could be out of date since they don’t actually get validated against the real system.

A good book for this is “Growing Object-Oriented Software, Guided by Tests” [1], which is one of my top recommendations for junior engineers as it also gives a really good example of OOP philosophy.

The main failure mode I have seen here is not getting buy-in from Product, so the specs get written by Engineering and never viewed by anyone else. It takes more effort to get the same quality of testing with Gherkin, and this is only worthwhile if you are reaping the benefit of non-technical legibility.

All that said, if you do manual release testing, a spreadsheet with all the features and how they are supposed to work, plus a link to where they are automatically tested, could be a good first step if you have high quality requirements. It will be expensive to maintain though.

1: https://smile.amazon.com/Growing-Object-Oriented-Software-Ad...


👤 Cerium
We used to use MKS and switched to Siemens Polarion a few years ago. I like Polarion. It has a very slick document editor with a decent process for working on links between risks, specifications, and tests. Bonus points for its ability to refresh your login and not lose data if you forget to save and leave a tab open for a long time.

For a small team you can probably build a workable process in Microsoft Access. I use Access to track my own requirements during the drafting stage.


👤 mtoddsmith
We use JIRA along with the Zephyr test plugin, which allows you to associate one or more test cases (i.e. lists of steps) with your JIRA ticket and tracks progress for each test case. Devs create the tickets and our QA creates the test cases. Docs and requirements come from all different departments in all kinds of different formats, so we just include those as links or attachments in the JIRA tickets.

👤 airbreather
Depending on how rigorous you want to be, for upwards of $20k a year you can use Medini, but it's pretty hardcore.

https://www.ansys.com/products/safety-analysis/ansys-medini-...


👤 boopboopbadoop
Write stories/tasks in such a way that each acceptance criterion is something testable, then have a matching acceptance test for each criterion. Using something like Cucumber helps match the test to the criterion, since you can describe steps in a readable format.

👤 tommymanstrom
My experience is mixed, from non-safety-critical (but in some cases regulated) industries:

1. Requirements first written in Excel, later imported to Jama and later imported to HP QC/ALM for manual tests

Pros: Test reports in HP QC helped protect against an IT solution which was not on par with what was needed and requested

Cons: Tests were not helping the delivery - only used as a "defence"; requirements got stale; overall cumbersome to keep two IT systems (Jama, HP QC) up to date

---

2. Jira for implementation stories, with some manual regression tests in TestRail and automated regression tests with no links besides the Jira issue ID in the commit. Polarion was used by hardware and firmware teams but not software teams.

Pros: Having a structured test suite in TestRail aided my work on doing release testing, more lightweight than #1

Cons: Lots of old tests never got removed/updated, no links to requirements in Jira/Polarion for all tests (thereby losing traceability)

---

3. Jira with Zephyr test management plugin for manual tests, automated tests with no links besides Jira issue ID in commit

Pros: Relative lightweight process, since plugin to Jira was used

Cons: Test cases in Zephyr were not updated enough by previous team members

---

4. Enterprise Tester for requirements/test plans, Katalon for e2e tests by a separate QA team, with automated tests carrying the Jira issue ID in the commit (no links to Enterprise Tester) inside the team

Pros: Again, rather lightweight when it comes to automated regression tests inside team

Cons: Process not optimal; Enterprise Tester used only for documentation, not actual testing

---

Today, there are good practices which help build quality in - DevOps, GitOps, automated tests (on several levels), static code analysis, metrics from production... Try to leverage those to help guide which tests need to be written.

Many times requirements/user stories are incomplete, no longer valid or simply wrong. Or a PO may lack some written communication skills.

Overall, I want to focus on delivering value (mainly through working software) rather than documenting too much, so I prefer a lightweight process - the issue ID on the commit along with the automated tests. Bonus points if you use e.g. markers/tags in a test framework like JUnit/pytest to group tests and link them to e.g. the Jira issue ID.


👤 wnolens
Interesting reading the replies here. Way more formality than I've ever engaged in.

Requirements are codified into test cases. After sign-off of the spec/design/test plan is complete, there's no going back and checking.


👤 vaseko
disclosure: I'm involved in the product mentioned - https://reqview.com

Based on our experience with some heavyweight requirements management tools, we tried to develop quite the opposite: a simple requirements management tool. It is not open source, but at least it has an open JSON format - good for git/svn - plus integration with Jira, ReqIF export/import, quick definition of requirements, attributes, links, and various views. See https://reqview.com


👤 throwawayForMe2
If you are interested in a formal approach, Sparx Enterprise Architect is relatively inexpensive and can model requirements and provide traceability to test cases, or anything else you want to trace.

👤 TeeMassive
Tests with the Jira issue id in them. Simple, easy, scriptable.

Bonus point: you can run code coverage with all the tests for a certain feature and see which code is responsible for supporting that feature.


👤 tmaly
I write Gherkin use cases. It works well as it is plain English. This makes it easy to keep in a wiki while also being part of a repo.

👤 softwaredoug
Even in the strictest settings, documentation has a shelf life. I don't trust anything that's not a test.

👤 stblack
How is Rational (IBM) Requisite Pro these days?

Used that 15-20 years ago and loved it. Any present day insight on this?


👤 hibikir
There's no good answer here to a question with so little context. What you should be doing, in a company we don't know anything about, could vary wildly.

I've been writing software professionally for over 20 years, in all kinds of different industries. I've had to handle thousands of lines of specs, with entire teams of manual testers trying to check them. I've worked at places where all requirements were executable, leading to automated test suites that were easily 10 times bigger than the production code. Other places just hope that the existing code was added for a reason, and at best keep old working tickets. And in other places, we've had no tracking whatsoever, and no tests. I can't say that anyone was wrong.

Ultimately all practices are there to make sure that you produce code that fits the purpose. If your code is an API with hundreds of thousands of implementers, which runs billions of dollars a month through it, and you have thousands of developers messing with said API, the controls you'll need to make sure the code fits its purpose are going to be completely different from what you'll need if, say, you are working on an indie video game with 5 people.

Having long-term requirements tracking can be very valuable too! A big part of documentation, executable or not, is that it has to be kept up to date and stay valuable: it's a pretty bad feeling to have to deal with tens of thousands of lines of code that support a feature nobody is actually using, or to read documentation so out of date that you end up with completely the wrong idea and lose more time than if you had spent that time reading a newspaper. Every control, every process, has its costs along with its advantages, and the right tradeoff for you could have absolutely nothing to do with the right tradeoff somewhere else. I've seen plenty of problems over the years precisely because someone with responsibility moved to an organization that is very different and attempted to follow the procedures that made a lot of sense in the old organization but are just not a good fit for their new destination.

So really, if your new small team is using completely different practices than your previous place, which was enterprise enough to use an IBM Rational product, I would spend quite a bit of time trying to figure out why your team is doing what they do, make sure that other people agree that the problems you think you are having are the same ones other people on the team are seeing, and only then start trying to solve them. Because really, even in a small team, the procedures that make sense for someone offering a public API versus someone making a mobile application trying to gain traction in a market would be completely different.


👤 pluc
JIRA? Confluence? *ducks*

👤 rubicon33
You should probably first assess whether or not your organization is open to that kind of structure. Smaller companies sometimes opt for looser development practices since it's easier to know who did what, and the flexibility of looser systems is nice.

TL;DR: adding structure isn't always the answer. Your team/org needs to be open to that.