I typically start by writing extremely messy code that I test strictly by hand until what I need to do is functionally 90% achieved. Think a single 2000+ LOC commit with tons of log statements and no tests whatsoever. If it's a personal project, it stays like that; if it's for a company I'm working for, I'll start splitting it into smaller components (averaging under 500 LOC per commit), defining reasonable interfaces that are obvious in hindsight, adding unit tests to each component, and submitting for review. By the time I submit the review I typically have 5-20 chained commits that can be independently reasoned about. It's a strictly top-down approach. I basically end up writing pretty much anything substantial twice, but it works for me.
Once I have that, I get detailed enough on the UI, either by writing some code, or understanding what information it needs to get from and send to the underlying parts. As soon as possible, I write an automated test that defines the behavior I want (which fails), then "move inward" to the next boundary, such as the Use Case/Service layer (I use Hexagonal Architecture), write a (failing) test at that layer, and then start implementing some behavior.
I'll refactor, which often leads to the creation of a new class with a bit of behavior (this is the "inside"). I'll TDD that new behavior until it's all Green. Then I'll go back out to the Use Case boundary and write more tests/code, TDDing at that level, with continuous refactoring to extract new classes.
Once I'm done at that boundary, I'll return to the Outside/UI test and if it passes, I'm done (and maybe I have some UI cleanup to do). If it fails, that tells me I need another spiral down the boundaries (toward the inside) and then back out.
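The outside-in spiral above might look like this in miniature. This is only a sketch in Python: the names (ExportReport, ReportFormatter, FakeRepository) and the CSV-ish domain are invented for illustration, not anything from the comment itself.

```python
import unittest

# Hypothetical inner class, extracted during refactoring (the "inside").
class ReportFormatter:
    def format(self, rows):
        return "\n".join(",".join(str(v) for v in row) for row in rows)

# Hypothetical use-case/service layer (a hexagonal boundary): it depends
# only on a repository "port", so the test below can pass in a fake.
class ExportReport:
    def __init__(self, repository, formatter):
        self.repository = repository
        self.formatter = formatter

    def run(self):
        return self.formatter.format(self.repository.fetch_rows())

# Test double standing in for the real persistence adapter.
class FakeRepository:
    def fetch_rows(self):
        return [("alice", 3), ("bob", 5)]

# Outer boundary test, written first; it fails until the layers exist.
class ExportReportTest(unittest.TestCase):
    def test_exports_rows_as_csv_lines(self):
        use_case = ExportReport(FakeRepository(), ReportFormatter())
        self.assertEqual(use_case.run(), "alice,3\nbob,5")

# Inner test added while TDDing the extracted class, before spiraling
# back out to the boundary test.
class ReportFormatterTest(unittest.TestCase):
    def test_formats_single_row(self):
        self.assertEqual(ReportFormatter().format([(1, 2)]), "1,2")

if __name__ == "__main__":
    unittest.main()
```

The point of the fake repository is exactly the "boundary" idea: the use-case test exercises real behavior without dragging in a database, so the spiral inward and back out stays fast.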
At each boundary, I may pause and do some CRC Card (class-responsibility-collaborator) design work, if I feel I don't have a good sense of what I'll need ("do I need a Group class here, or will the existing Roles for a User be enough?").
I never do proofs of concept, throwaways, or exploratory test builds, because you can't test a whole architecture except by building it.
And I really don't care if each individual function is beautiful and perfect, it just has to be as good as I can get it without taking time away from unit tests, edge case handling, architecture, etc.
If the code is bad, I'll refactor it; if it's mediocre, I'll refactor it later. But I'm not going to rewrite and try to port over all the dozens of edge-case handlers I've probably added, unless the whole thing is truly horrendous.
Trying to retrofit a UI onto something built around a completely different model under the hood is very messy and ugly, so I start with the UI and workflow.
If there are a lot of unknowns, then "build one to throw away", "discover, don't invent", and going in circles until you understand the simplest way to resolve the circular dependencies are how you do it.
It makes me think of writing, where I sometimes go through 10 or 15 iterations to figure out how to explain something in a way that gets across easily.
One of my bigger long-term projects right now is a word processor document templating engine. I've had to massively redesign a lot of the character set handling code, because the Real World of how, e.g., Microsoft Word determines the character set in current use is absolutely bonkers (there are situations where Word will interpret Windows-1252, which is pretty much ISO 8859-1 with extra characters in the 0x8_ and 0x9_ range, as Arabic for no reason other than an RTL marker).
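To make the Windows-1252 vs. ISO 8859-1 difference concrete, here's a small Python sketch (this illustrates the encodings themselves, not Word's actual detection logic): bytes in the 0x80-0x9F range are C1 control characters in Latin-1 but printable characters like curly quotes and dashes in Windows-1252, and an RTL mark is an invisible formatting character rather than anything script-specific.

```python
# Bytes 0x80-0x9F: C1 control characters in ISO 8859-1 / Latin-1,
# but printable characters (curly quotes, en dash, ...) in Windows-1252.
data = b"\x93quoted\x94 \x96 dash"

print(data.decode("cp1252"))   # curly quotes and an en dash appear
print(data.decode("latin-1"))  # same bytes become invisible control chars

# A right-to-left mark (U+200F) is an invisible formatting character;
# its mere presence says nothing about the byte encoding of the text.
rtl_mark = "\u200f"
print(len(rtl_mark), rtl_mark.isprintable())  # 1 False
```

This is why sniffing a character set from raw bytes is so fragile: the exact same byte sequence is perfectly valid in both encodings, just with wildly different meanings.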
That's all a long way of saying it's helpful to have a mental model of how you think the program should work, but you should absolutely count on having to revise that model (and the resulting design) as you learn more.
"Plans are worthless, but planning is everything." —President Eisenhower
For situations with a lot of breadth in possible approaches, or a lot of edge cases, I like to have a reviewed design ahead of time. The same goes if I'm going to get a lot of people working on it at the same time: without pretty clear interfaces, everyone gets bottlenecked trying to make clashing updates to those interfaces.
Give importance to the shapes and interconnectedness of the data and to the processing/flow. Avoid focusing on code structure except in broad terms, and don't make abstractions up-front. Prefer flatter call hierarchies (or loosely coupled objects), and avoid early/superficial "refactoring" unless you're certain it's a one-time, one-way, meaningful factoring.
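A tiny sketch of what "data shapes first, flat processing, no up-front abstraction" can look like in practice. The Order domain here is invented purely for illustration: plain data records plus free functions, rather than a speculative class hierarchy.

```python
from dataclasses import dataclass

# Model the shape of the data directly and immutably.
@dataclass(frozen=True)
class Order:
    sku: str
    quantity: int
    unit_price_cents: int

# Keep processing as flat, composable functions; an abstraction layer
# (strategies, visitors, base classes) can be factored out later if a
# real need appears.
def order_total_cents(order: Order) -> int:
    return order.quantity * order.unit_price_cents

def invoice_total_cents(orders: list[Order]) -> int:
    return sum(order_total_cents(o) for o in orders)

orders = [Order("A-1", 2, 500), Order("B-2", 1, 1250)]
print(invoice_total_cents(orders))  # 2250
```

The call hierarchy stays flat (one function calls one other), and the data shape carries most of the design.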
If I am a cofounder at a startup where the market isn't even defined yet, my priority is to help discover the market, then build an MVP off the immediate needs of the beachhead market.
If I am an employee at an established company with a well-defined, effective market and customer team, I work with them to get use-case definitions, then I write BDD/TDD tests and have the rest of my engineering team help implement.
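A minimal sketch of turning a use-case definition into a BDD-style test. The shopping-cart domain and names are invented for illustration; in practice this might be a Gherkin scenario driven by a tool like behave or pytest-bdd, but the Given/When/Then shape is the same.

```python
# Invented domain object standing in for whatever the use case describes.
class ShoppingCart:
    def __init__(self):
        self._items = []

    def add(self, name, price_cents):
        self._items.append((name, price_cents))

    def total_cents(self):
        return sum(price for _, price in self._items)

def test_cart_totals_two_items():
    # Given an empty cart
    cart = ShoppingCart()
    # When the customer adds two items
    cart.add("widget", 300)
    cart.add("gadget", 700)
    # Then the total reflects both items
    assert cart.total_cents() == 1000

test_cart_totals_two_items()
```

The value of writing this before implementation is that the customer team can read the Given/When/Then comments and confirm the behavior before any engineering time is spent.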
Are you looking for specific help? In that case, do mention the context and environment you operate in.
Experience helps to avoid some of those obsolete branches.
The UX/UI is often the hardest part, in terms of making your SW the easiest or shortest path to its functionality.
When a SW design hits a roadblock because the UI's data entry is mis-ordered, or a user input is missing (or even extraneous), then a reconciliation effort is made between SW and UX.