My question first is
a) has Hacker News/YC ever seen a startup fail because the codebase is so bad?
b) what is the best calculation to make when trading off code quality vs features?
c) do most YC startups write tests and try to write cleanish code in V1 or does none of this matter?
Should we just be chucking shit at the wall and seeing what sticks? Do most startups bin v1 and jump straight to v2 once they have traction?
1. Writing quality code is not substantially slower in the first place, especially when you factor in debugging time. You just have to have the right habits from the get-go. People avoid writing quality code because they don't have and don't want to build these habits, not because it's inherently harder.
2. After the initial push, code quality makes it much easier to make broad changes, try new things and add quick features. This is exactly what you need when iterating on a product! Without it, you'll be wasting time dealing with production issues and bugs.
The only reason people say startups don't fail because of code quality is that code quality is never the proximate cause—you run out of funding because you couldn't find product-market fit. But would you have found product-market fit if you had been able to iterate faster, try more ideas out and didn't spend 50% of your time fighting fires? Almost definitely.
Pulling all-nighters dealing with production issues, spending weeks quashing bugs in a new feature and duct-taping hacks with more hacks is not heroic, it's self-sabotaging. Writing good code makes your own life easier, even on startup timeframes. (Hell, it makes your life easier even on hackathon timeframes!)
They don't care if the code is good or bad, as long as the app does what they need it to do and does it well.
So to answer your question: The code should be bad enough that it allows you to ship as fast as possible, but not so bad that the app doesn't work properly.
This can be a shock if you've been raised on a steady diet of HN posts and comments, Medium articles from opinionated and often highly critical programmers, and open-source projects that only accept the best quality code. No one likes to brag about writing proof-of-concept grade code, so you won't be hearing about it online or in public.
a) Yes, startups have failed because their product doesn't work properly or the product is full of bugs. However, startups don't fail because the codebase is ugly, or convoluted, or not following best practices. You might be surprised at how hacky many early startup codebases are.
b) Regarding the calculation of code quality vs. feature velocity: When in doubt, consult with the senior devs and your manager. Knowing when, where, and how to strike this trade off is one of the defining features of being a senior developer, in my opinion. In most cases, it comes down to estimating the negative impacts on future development. A core component that touches every part of the app should be more carefully designed than a single-use feature only 1% of your customers might ever use.
c) Regarding tests and clean code for V1: In short, the only thing that matters is getting traction in the early stages. Every day you spend writing tests or refactoring code to feel cleaner reduces your chances at getting that next funding round. In the early days, it's all about a proof of concept and getting customers so you can grow the company. You can't grow the company if you don't have investors and/or customers, so that perfect code may be doing more harm than good in the early days.
Few people know that when Uber started and the first $1M was raised, the apps were built by contractors. The app was bad, the code terrible - but even with a bad app, customers used it over taxis, which didn't have an app at all. The business took off, the next round of funding came, as did the first few full-time engineers.
The first thing the full-timers did was throw away the mess of code and rewrite the app. However, moving fast was still more important than quality. Launching in a new city needed to be done in a few weeks - if the ops team could mobilize a whole city in that time, engineering was expected to move just as fast. So while generally forward-looking decisions were made, many, many shortcuts were still taken, most notably "The Ping": the backend sending all state data to the client in a massive JSON object, every 10 seconds. This was to speed up development by not having to make backward-compatible state changes all the time. It's something I'd cringe at today, but it did help us move fast, at the expense of loose contracts and a lot of bandwidth usage that could have been avoided.
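To make the trade-off concrete, here is a minimal sketch of what a full-state "ping" design looks like. All field names and values are invented for illustration; the real payload was far larger and its schema is not public.

```python
import json

def build_ping(rider, trip, nearby_cars):
    """Serialize the entire client-visible state into one JSON blob.

    Hypothetical fields only. The upside of pushing everything: each ping
    fully replaces the previous client state, so the backend never needs a
    backward-compatible migration path for partial updates. The downside:
    bandwidth scales with total state size, and the "contract" is whatever
    happens to be in the blob this week.
    """
    return json.dumps({
        "rider": rider,            # profile, payment status, settings...
        "trip": trip,              # trip state machine, ETA, fare...
        "nearbyCars": nearby_cars, # positions of every candidate vehicle
    })

rider = {"id": 42, "name": "Alice"}
trip = {"status": "en_route", "eta_minutes": 7}
cars = [{"id": 1, "lat": 37.77, "lng": -122.42}]

payload = build_ping(rider, trip, cars)
print(len(payload))
```

The alternative - sending deltas - saves bandwidth but forces every state change to stay compatible with clients holding older state, which is exactly the coordination cost the full-state push was avoiding.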
As the business proved to be successful, in year 3 or 4, reliability and quality started to become more of a focus: things like tests, linting, architecture, rollout best practices, and so on. A big push happened when, in year 4 or 5 (I can't remember exactly), a sloppy change almost took down all of Uber's core systems at rush hour. But for the first few years, quality took a relative back seat. Was it worth it? I'd definitely say so. As another commenter noted, the customers of a startup do not buy code quality: they buy something that meets their needs and is good enough.
When a startup becomes wildly successful, you'll have the funds to pay off tech debt. Until then just make sure it doesn't suffocate you - otherwise pile it on, and move fast.
1) Bugs/outages that affect your customers
2) Hard to grok code that slows down onboarding of new staff
3) Features taking longer time to develop
How you weigh these risks is different from business to business. For a fintech startup a bug in the code could end up bankrupting the company. For a VC backed social network, being able to quickly onboard new hires is really important. For an app that supports say BLM protestors, time-to-market is everything.
In the great scheme of things, having a crappy codebase that makes money is a good problem to have.
> do most YC startups write tests and try to write cleanish code in V1 or does none of this matter?
It only matters when bad code hurts your overall business velocity - what that means, only you can answer.
Nobody's writing tests for their purist aesthetics; they're there to let you go faster - but there's an up-front cost you have to pay for them. Sometimes that's worth paying, sometimes the land grab is more important.
There's no single answer to this question.
Yet their users loved them, the numbers went up and up, investors lined up to take the founders to dinner and vie for the chance to pound millions of dollars up their asses. It was crazy. They built a half-pipe in the office, you know, for skateboards. They became a household name and IPO'd a few years ago.
The point is this all happened despite their garbage architecture and crappy code. Yet it would have all been much easier and cheaper to do it right the first time.
(Word to the wise, the founder/CEO wound up crying at his desk as the investors wrested the company from him. "Be careful what you wish for: you might get it.")
If the answer is "no", then the tolerance for bad code goes way up.
Either way, in the early stages of a startup, a great deal of the code will end up being throwaway, and the trick is sometimes knowing which things are important to get right upfront, and which things can be punted on.
Well-defined service boundaries help a lot. This doesn't mean going to microservices, but it does mean keeping things well-isolated and independent even in the same codebase. In effect, you can have "well-architected bad code" which will help you stay flexible even as you move quickly.
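A tiny sketch of that "well-architected bad code" idea, with invented names: the internals of a module can be ugly, as long as other code only touches a small, stable interface.

```python
def _messy_tax_hack(amount):
    # Ugly, hard-coded logic lives here, hidden behind the boundary...
    if amount < 100:
        return amount * 0.05
    return amount * 0.05 + 1.0  # ...mystery surcharge nobody remembers adding

def quote_tax(amount: float) -> float:
    """The only function other modules are allowed to call."""
    return round(_messy_tax_hack(amount), 2)

# Because the boundary is narrow, rewriting _messy_tax_hack later
# touches exactly one call site instead of the whole codebase.
assert quote_tax(50) == 2.5
assert quote_tax(200) == 11.0
```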
a) The codebase will always be "bad". There will always be things that need improving, testing, fixing, enhancing, revisiting.
b) The optimization function to determine code quality vs. features has "user adoption" as one of the main inputs. If you are going to spend time and money working on code quality, but have no users, then what's the point? Are users interested but leaving your product because of bad quality? Then patch the code to bring it just above the threshold of usability and maintainability - but no more. That may seem controversial, but you need to optimize your resources (time + money) in a startup.
c) I don't know anything about YC startups. But I can tell you writing tests and bad code are not mutually exclusive. Having said that, you should think about testing - always.
Having been at this startup for 3+ years now - I can tell you we've gone through 3-4 iterations of the same thing already - each time designing for more scale, more developers and more users. So I think that may be just par for the course.
We went through a very tough journey ourselves. When I started the company, I wanted us to just use out of the box Rails. But some senior devs disagreed - we had huge disagreements about it. We ended up spending months building a complex SOA, only to find 3 years later that it wasn't a great implementation and rewriting it (now it's even more complex). Meanwhile, Shopify and others seem to be happily still using mostly stock Rails. And we're in a tough spot where finding developers who can work and be productive with our NIH-stack is quite challenging.
I agree with what the others are saying here. Customers aren't buying our code, they are buying our product/service. Code should not be "bad" (i.e. there should be tests, etc.) but as a startup, I think velocity is more important and we just have to weigh that. We can hack stuff temporarily to ship or do experiments, but we'd have to deal with the debt if we keep that around.
If I had the opportunity to start all over again, I would:

- Stick to well-known frameworks. Use "boring" tech.

- Outsource as much as possible at first and don't reinvent the wheel, e.g. don't write your own subscriptions/billing, just use Stripe/Braintree/Recurly/Chargebee; use Algolia (don't write your own Elastic) for search; etc. Move fast until you've figured out product/market fit, then optimize for costs.

- Stand your ground on rejecting NIH. Devs will complain because they want space to learn, try new tech, do NIH things (I want to hack stuff too!). IMO it's those NIH things that are often called "bad code" - they're not "bad", they were just written in a short amount of time to solve immediate problems, and they often don't account for all the strange edge cases.
Tech debt is like any other kind of debt: a way to increase leverage.
Some tech debt is like a mortgage. You get significant value, immediately, and can keep the payments manageable.
Some tech debt is like a payday loan. You get ahead by days, but behind by weeks.
Some tech debt is like margin trading. You make an educated bet about the future and if you're right, you've multiplied your success, but if you're wrong you've multiplied your failure.
There's a time and a place for each kind of debt, but taking on debt in a haphazard fashion can get you into a situation where you need to choose between putting an inordinate amount of effort into paying off the "interest", declaring bankruptcy, or risking the "repo agent" coming calling when you least expect it.
(And note that even "tech bankruptcy" isn't necessarily a bad thing, if you can do so in a way that limits the blast radius.)
Most likely you are a startup building a software application or service that augments, automates, or orchestrates some real-world interaction (like e-commerce shopping or supply chain systems), in which case you care most about getting your product-market fit figured out.
What this means is testing your understanding of the potential customer's needs, selling your product value to those customers (switching them from their existing way of life to your way of life), figuring out the business model (what costs are you optimizing, how much it costs you to run it your way, who will pay for it, can you cross-subsidize something, how does your business scale, at what scale your business becomes viable, at what point do you make profits etc).
This usually requires a lot of experimentation and product iteration. For this, you need to have very high feature developer productivity with very low costs for getting experiments wrong. For the past half decade, this is achieved by not building any IaaS/PaaS stuff in house and using stuff from some public cloud platforms.
Today, a new movement is happening - the #lesscode or #nocode movement - where you use frameworks and rapid application development tools that let you write very little or no code to create your applications and iterate quickly with very little software engineering skill. This allows a startup to go very far on very little burn while hunting for product-market fit.
Once you know you have a good product that is on the cusp of scaling, you can revisit your choices and figure out how to optimise costs through in-house software development. The bar for what makes sense to build in-house rises every year.
I've worked in very small and very large companies, though never owned a startup myself.
A few things I have seen and experienced, personally or from close friends:
* It's OK not to be scalable from day 1, as long as you're not certain who your customer is, because you are likely to have to shift a lot left, right, and center, and scalability work might slow you down. But do keep in mind that it will become an objective at some point.
* Your code should be reliable and high quality enough that you can refactor it fast and without headaches. I have lived through situations where a change in one part of the application was creating bugs somewhere completely different. I've also been in places where tests were forbidden (bugs never strike twice in the same spot, RIGHT?!). Not having tests will f* you hard, both because you soon won't be able to move without breaking stuff and because you won't be able to easily expand your team.
* Tangential to 1 and 2, do try to keep abstraction layers in place. That will make your life easier.
* You shouldn't be afraid to let new employees into the code, or to deploy. Otherwise that's a liability.
* Security is a tough one. It'll never be good enough, and it's usually a cost rather than a source of revenue... Make sure all your customers' data is safe, though; that should be the hard limit. Because if you're successful and get hacked, you might never recover from it.
I have seen a brilliant company that had a nice business model go down not because the code was not high quality, but because lack of tests and lack of design abstractions made every step of the way 100 times harder a few years down the line.
You seem to have a pretty good idea where you're going already :).
All in all you wanna move as fast as possible, while making sure that you're not creating the shit of tomorrow. So if you write crap because reason, make sure it's contained :).
* Bootstrapped a startup, left ourselves tons of tech debt
* Glommed as many features onto the core product as possible to meet enterprise needs
* Got a ton of MRR and are the leader in our corner of the industry
* Never pivoted to being a mature company, never paid off the debt. Now the bugs are pretty unmanageable and the software is too complex. It’s hard enough keeping the service afloat, let alone adding new features.
* About 50% of our customers try the software and churn out within six months. Our client industry is only so big, and we’re actively pissing off a huge chunk of it.
* Now we have a PR problem. Industry people leave us bad Google reviews, which our company owners can usually get deleted. They also warn people in industry Facebook groups not to try our product.
If you don’t pay off tech debt eventually, it will catch up with you in lost growth.
41 year old developer here who worked on various projects going from solo to around 50 person teams.
If you want to move fast, you have to hack stuff together. That is exactly what your CEO did.
In the end it all depends on your project. If you make a game, let your users find the bugs. If you make life-critical software, you'd better have some rigorous tests in place. A 1-person project can be really messy, but a 5-person project can't.
Don't put effort into code that might be thrown away.
Most things are an investment, so always question how fast you get the ROI. It's always a balancing act.
But in the end, it always comes down to the same question: is it good enough? If yes, continue. If no, do the investment and move to the next level.
1.) What market are you targeting and what are the overall user expectations for features, quality, reliability, etc.?
2.) What is the minimum viable feature set (i.e., product) to get into that market?
3.) Is it more important to be fast to market or the best to market?
Products are built iteratively. Even if it's OK to deliver on the fast and crappy model you still need a path to fix things incrementally. This applies to just about every product I've ever seen.
b) This really depends on having a combo of a product manager who appreciates technology and an engineering manager or CTO who appreciates business. You have to weigh the benefit of shipping feature X now vs. later, in favor of tech debt T. Both sides need to be honest about the consequences of delaying X or T.
c) Not a YC startup, but always try to write tests and good code. Never abandon it. But in the early days when you're trying to gain traction, don't feel bad about having to compromise on them during crunch times (which is most of the time).
The variable names were random Bollywood movie names, there were no classes or functions, everything was hand-coded in core PHP, and it was too complex to add new code.
Strictly following MISRA-like guidelines while developing web SaaS? Spending days orchestrating mocks of this and that service so you can test some trivial class and tick the 100% code coverage box?
I think the code quality of individual methods matters less than the quality of the overall architecture. The latter requires some design planning and regular refactoring, which I imagine can hold you back from delivering something in a tight time frame. It does pay off in the longer term, I am not debating that, but before it pays off it may be too late.
I've worked at a few startups and I'll give you some examples where it matters and where it doesn't.
I was hired as employee #4 at a stealth startup that would turn into a zombie and I was the last non-founder to leave when our runway ran out. At no point in my time there did code quality or test coverage matter even a little bit - our biggest problem was convincing people to pay us, which was particularly difficult because our value to the people we wanted to pay us was intangible (at least to them). This was why the startup failed, we tried to sell to the wrong people for too long (ie, misaligned our values with what our target market actually valued).
Code quality didn't matter because we essentially strung up demo after demo in different contexts, the core technology was basically finished within a few months of founding, and the rest of us worked to put it into different contexts to show people what they could do with it. Those demos would never reach production, and most of them had a single developer. Who cares if there were no tests or it was all spaghetti? We were just trying to show off.
I'm currently an early employee at another startup and spent a lot of time over the last six months developing ci/cd infrastructure and we're going to make a major push for testing/benchmarking coverage in the next month or so. The reason is that we have a tangible and immediate impact to our business because it directly affects our value proposition.
So to answer your question, the answer is it depends. It all matters when it affects the bottom line, because code quality/testing doesn't make you money; it just costs you less money in the future. There is a very definite stage in the life of a startup where that matters, and as a developer in the org you have to budget your time to commit to it when it matters.
a) A really bad codebase like the one you're describing hasn't been the root cause of any failure that I've seen, and I have seen several companies recover from it and become very successful. Unfortunately what you're describing may be a symptom of a different root cause (poor judgment around what matters most to the business), and that can def kill you.
b) These things don't trade off against one another directly. Code quality helps feature velocity. In the early days the only thing that matters is getting product/market fit as that's an event horizon beyond which the future is unknowable; the way to get there is to iterate fast, which does require things like CI/CD and a coherent/non-spaghetti structure (even if the code itself is ugly).
c) I've seen both modes. My main view: what matters most pre-product/market fit is rapid iteration (see above). Once p/m fit has been achieved you need to be able to add features rapidly, which requires a different level of code quality (comprehensive tests, etc). There's no hard and fast rule here, but most products ultimately throw away most of their pre-product/market fit code within 1-2 years of scaling.
I actually recently wrote a blog post that touches on a lot of this here: https://staysaasy.com/engineering/2020/05/25/engineering-at-...
Good luck with whatever you're up to, whether at this company or elsewhere!
Code can be decently sloppy, but there's likely a strong correlation between good code and good startups. Not because the code made them a good startup, but because the good startups are good at most things.
Many startups will do fine with a rough codebase, and obviously you should value the code accordingly (if it's an API as a service, highly, if it's a physical product with no software component, not so much). But be wary of any startup that's close to so bad you worry it might fail. Good founders will rarely let it tip so far to that side of the scale.
Obviously there are loads of exceptions to this rule. But I think if you want to be a founder of a software driven startup or you want to find a great place to work as a software engineer, aim your expectations higher than feels reasonable and you'll probably land at a decent medium.
Engineers look for projects that don't give them headaches while working on them, and those that help them learn the right things. At the same time, they do look for maintainable and extensible codebases that they can enjoy working on.
The problem with bad code that is a clustermess is that things leak everywhere, and fixing one bug will lead to another. You won't have a product that is stable for your users either. At one point your engineers will make a point of rewriting things from scratch, but management may stop them. This will force them to quit ultimately so you'll have to deal with loss of resources.
On the other hand, using your users for QA is terrible. They do not report bugs at all, they get frustrated and spread the bad word. If they're paying users they will start looking for alternatives at one point.
This is all a part of your business.
As for some of these questions:
a) Yes, I've seen a few companies go under because their code wasn't able to generate profits. A couple of times it's been so bad that customers didn't get what they needed immediately as a result. But usually the company failure modes from bad code are less dramatic. Sometimes it's like bad debt, in that it looks good initially but comes at an existential cost later. Other times it's been more boring, like lower velocity making the company uncompetitive or too expensive to run.
b) If you are thinking of trading features vs code quality you've already lost because this isn't something that can be traded.
c) Writing some tests tends to be a pareto-optimal choice, in the sense that lower defect counts tend to allow you to create more economic value from the limited software development staff you have in a given time frame. Frequently you'll find that having some tests lets you deliver features more efficiently than you would without them, whereas high defect counts tend to result in missed requirements or unnecessary rework. There's a sweet spot here for tests and test coverage: there are definitely diminishing returns, and getting to 100% coverage is very expensive, because the last few percent are disproportionately hard to get while often not being worth the cost.
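One way to picture that sweet spot: a single cheap assertion on the code path that carries the business value, rather than exhaustive coverage of every helper. The pricing function below is entirely hypothetical, just an illustration.

```python
def total_price(unit_price, quantity, discount_rate=0.0):
    """Hypothetical business-critical path: compute an order total."""
    if not 0.0 <= discount_rate < 1.0:
        raise ValueError("discount_rate must be in [0, 1)")
    return round(unit_price * quantity * (1.0 - discount_rate), 2)

# Two tiny tests guarding the invariant that actually makes money -
# far cheaper than mocking every collaborator for 100% coverage.
assert total_price(10.0, 3) == 30.0
assert total_price(10.0, 3, 0.1) == 27.0
```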
Sometimes you didn't understand how the feature should be built until it was done. Sometimes you need to live with a suboptimal architecture until it clicks in your mind. Sometimes I read my own code and realize "this is bullshit". It might need some time to rest.
But refactoring is easiest when you just worked on a feature and everything is still completely in your mind, "striking when the iron is hot" as I call it. What you can refactor in minutes after you checked off all of the requirements of a feature can cost hours if you don't have a complete mental model anymore if you revisit months later.
I have some maybe non-obvious thoughts though - some useful questions to ask
1. "how difficult is this bad code going to be to cleanup later?".
For the vast majority of issues, it's usually not very difficult to clean up later. Only a very few things - e.g. an API that many customers use, or the way core data is modeled/accessed - are difficult to change later.
2. "how well encapsulated is the badness in this code?"
A shitty function, or a janky microservice with a well thought out API is much better than a sprawling mess. The more you can split your architecture into independent pieces, the less bad code in any one piece matters, and the easier it is to reason about. Horrible code has no clear separation into layers and everything feels like one giant tangle - that genuinely slows down dev speed and makes building stuff feel risky.
Good engineers often write code that's bad but encapsulated well enough to change easily.
3. "what are the business consequences if this code fails?"
Code quality on a feature not used by many people matters far less than a core feature. Database code should be more stable than web tier code. Code touching the core of a web server should be reviewed more carefully because it may cause downtime. A bug on a peripheral feature can often be fixed later without much impact to customers.
4. "how quickly and confidently can the people responsible for this code change it?"
Super spaghetti code is hard to change for everyone. In contrast, some code has some historical design baggage or intricate business logic which may be simple enough for experienced devs to change, even if it is hard for newcomers to understand.
You could have 100s of tests but the server could still easily fall over. So it's always a trade-off.
One thing I can say is that if you sow the seeds early, it's easier to add a test with each new feature than to add 100 tests to a 2-year-old feature that no one understands and that keeps falling over.
Some companies take this to an extreme on both ends: either no tests at all, or everything needs 100% coverage, delaying the time it takes to get things into the hands of customers.
Most pragmatic places I have worked at invest in test infra once they have good product market fit. Make it easy to write tests and fast to run and debug them. If it’s easy to do the right thing, why not do it ?
Yes, but it's more that the programmers are bad instead of the code. Bad code can be patched fast by a good programmer, but becomes rapidly unmaintainable by a bad programmer. A lot of techniques and style guides out there are designed to manage bad programmers.
> b) what is the best calculation to make when trading off code quality vs features?
I have two modes: prototype and production. Prototypes are disposable, and value speed/results above all else. They should be thrown away after. Treat them as a demo to get budget for a feature or a hack to solve a problem right now. Design it to be completely destroyed and replaced, instead of replaced gradually, although you can probably reuse interfaces/contracts in between these modules.
Production code is kept clean and as maintainable as possible, but keep the engineering to a minimum. If you have to ask whether something is overengineering, it probably is.
> c) do most YC startups write tests and try to write cleanish code in V1 or does none of this matter?
I'm not sure about YC but I don't write automated tests. I have a text file with all the manual tests I need to run. Features are usually scrapped hard in a startup. IMO it's better to release a broken thing to 1000 people who complain it's broken than to release a well built thing to 100 people who think it's nice but won't pay for it.
> Should we just be chucking shit at the wall and seeing what sticks? Do most startups bin v1 and jump straight to v2 once they have traction?
Rule of thumb is you need dozens, if not hundreds, of prototypes, so optimize for speed and experimentation quality. You're like a prospector looking for ore. You don't want to build an entire mine where there is none, and you don't want to commit too hard until you know there's enough ore there.
But things are different for "ramen profitable" startups, and you should start looking into how to maintain better and add features faster.
I bypass this rule when I think it's obvious I'm going to want automated testing. For example, I needed a customer facing DSL for importing data with a lexer/parser/interpreter; manual testing was bypassed from the start.
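For a lexer, the payoff is obvious because each test is one line and the input space is huge. A toy illustration, with an invented grammar (not the commenter's actual DSL):

```python
import re

# Token definitions for a made-up import DSL. Order matters: earlier
# patterns win when alternatives could match at the same position.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("ARROW",  r"->"),
    ("SKIP",   r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(src):
    """Return (kind, text) pairs, dropping whitespace."""
    return [(m.lastgroup, m.group())
            for m in MASTER.finditer(src)
            if m.lastgroup != "SKIP"]

# Dozens of one-line cases like this re-run in milliseconds - exactly the
# situation where manual testing loses immediately:
assert tokenize("col1 -> 42") == [("IDENT", "col1"), ("ARROW", "->"), ("NUMBER", "42")]
```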
It’s more about how much abstraction is built into the system. A mature codebase has a clear purpose and therefore can contain durable, high level, even beautiful abstractions. On the other hand, a founder doesn’t always (nor should they) know what their code will need to do in 6 months time, so they typically avoid writing abstractions.
You can still write good code as a founder—it’s just that good founder code looks different than good BigCo code.
And this isn't just startups or side issues. I don't know anyone who has looked seriously at OpenSSL without being completely horrified.
For starters, the project _must_ be done in a completely serverless manner (AWS was the chosen provider) and _nobody_ in the team had experience making a complete product just using this kind of architecture.
Since performance is the main concern, at the beginning we did very shallow research on our language options and the other factors relevant to lambda performance. One of those was cold-start time, which bundle size influences. This led us to split our custom dependencies as much as we could, making development and testing more painful.
With both previous points presented, I can say our code quality is not good. As for velocity and delivering on time, we have had some issues because of planning mistakes and unforeseen inconveniences while using AWS SAM and AWS CF. Nonetheless, we're "on time".
We have identified some pains that we would like to fix post-launch, but that moment seems like it's never going to come. I've got a feeling we won't have time to do maintenance on the product and we'll just be bombarded with either bugs or new features.
As others have said before, customers will only look at the app's functionality and UX. And in our case the application looks amazing. The backend, not so much.
> a) has Hacker News/YC ever seen a startup fail because the codebase is so bad.
No, but I have seen the massive velocity hits from short-term decisions living on over the years. Tech debt is real and can eat into 20-60% of a team's output because of bugs, issues, and lack of documentation and context. These places are miserable to work at.
> b) what is the best calculation to make when trading off code quality vs features?
Unfortunately this may not be a popular opinion, but here is what has worked best for me. You need a sound ARCHITECTURAL base from inception. To do this, the person who makes the decisions or is in charge needs to use tools/languages/etc. that they are experienced with to develop a clean base to work from. It's not hard to set up CI/CD, unit testing, proper devops, and code decisions like inversion of control and proper service segregation from the outset IF you use technologies you are strong in. This lets you move quickly if need be, but the "bad" code stays limited to individual services/systems. It's easy to fix a single poorly coded, rushed class/function/file. It's a nightmare if the entire basis you build off of is crap.
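A minimal sketch of the inversion-of-control part of that "clean base", with invented names: the service depends on an interface rather than a concrete implementation, so a rushed or bad implementation stays swappable behind the boundary.

```python
from typing import Protocol

class EmailSender(Protocol):
    def send(self, to: str, body: str) -> None: ...

class SignupService:
    # Constructor injection: the dependency comes in from outside,
    # so tests (and rewrites) can substitute their own implementation.
    def __init__(self, mailer: EmailSender):
        self._mailer = mailer

    def register(self, email: str) -> str:
        self._mailer.send(email, "Welcome!")
        return "registered"

class FakeMailer:
    """Test double: records sends instead of talking to a real mail API."""
    def __init__(self):
        self.sent = []
    def send(self, to: str, body: str) -> None:
        self.sent.append((to, body))

mailer = FakeMailer()
svc = SignupService(mailer)
assert svc.register("a@example.com") == "registered"
assert mailer.sent == [("a@example.com", "Welcome!")]
```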
Startups tend to be limited on time... and sadly, startups often hire inexperienced people who can't do the above, or experienced people who focus more on shiny new technologies than on things that work and can be executed quickly.
c) do most YC startups write tests and try to write cleanish code in V1 or does none of this matter?
Never been part of a YC startup, but my general experience is that while you're still figuring out product/market fit, things like scale/code quality/architecture shouldn't matter... however, two things need to be kept in mind. The first is having an "escape hatch": this code is crap, we all know it, but it's the code we need right now; is there a way we could pivot/transition to a new system/architecture in a few weeks when we finally get funded or start to "grow"/"scale"? The second is identifying that pivot point and investing the time to create the first-generation foundation (if you go full unicorn/scale, you may need to deal with this yet again).
In conclusion, you need to do what gives you the most velocity for your effort. When you are super small and still figuring out the basics, a costly foundation isn't worth much. Then, if you survive and shift into growth mode, you need to spend some effort/resources on a good base to keep that velocity alive.
When I started coding I was disciplined and organized, writing tests, etc. As time has passed, I've had to sacrifice those guiding principles. At some point, changes to UX and logic to provide a better user experience took higher priority than well-tested code. I've changed and modified things so frequently that the tests I wrote would break. There are tests I wrote for code that is no longer in my codebase. It felt like a complete waste.
If you have a clear vision of your MVP, or you have a designer giving you requirements and wireframes, or you know exactly what you want when you start out early on (waterfall?), maybe you can stay true to all these well-established and proven software development principles.
But if you are flying by the seat of your pants and figuring it out as you write code, I'm not so sure doing all the right things should be your first priority. I feel that building your MDP - minimum DELIGHTFUL product - may be more important than building the MVP. And that might produce substandard code.
It could also be that I am a terrible developer and product manager and designer and entrepreneur.
If you are at all curious what the hell I'm doing, you can see my landing page - https://www.keenforms.com - it's a form builder with rules
b) The best metric is what is most boring and what is most comfortable. Boring tech is good. Boring code is good. With languages, things are defined more by their failures than their successes. You want to be defined by what doesn't happen in your code because you made cogent decisions.
c) Do most YC startups write good code? Yeah, but that's not what defines their success. Clean code is presentable. Clean code sets a tone. Tests are sometimes snake oil, sometimes valuable. It's hard to assess how valuable a metric is once you become invested in increasing it. No, writing tests won't save you. But decent DevOps will hopefully reduce the cognitive load of managing features. Writing unit tests is, in my opinion, a nice reprieve between coding sessions. I look at it as paid downtime.
d) As someone pointed out, there is survivorship bias to consider. It's pretty common for v1 to be a complete disaster where nobody knows what they are doing. Most fail and never attempt v2. Eventually the to-dos and somedays just pile up and you lose to a competitor.
e) Another perspective, almost everyone's code will be some kind of dumpster fire. You'll realize perfect pipelines will always be desirable, as in nobody has one. The only code that is 'bad' is the code you fail to take accountability for.
The biggest issue is not knowing your problems. If you are aware of your technical debt, it means you probably have a plan, or at least an idea of where to look when shit hits the fan. Otherwise people run like crazy, deny the problems, miss deadlines and customer expectations, and ultimately fail.
The decision between them is simply “once this is shipped, would you accept having to completely re-write it from scratch to add even the smallest feature?”
If the answer is no, decent tests will make you go faster. Your commit volume by SLOC will be roughly:
1. Refactoring (~50%)
2. Tests (~35%)
3. Actual impl code for features (~15%)
That is, you’ll transact more than three times as many lines of code with your VCS just re-writing impl code to be smaller, cleaner and better organised than you will writing the code that actually builds the functionality.
You’ll spend more than double the adds/deletes/changes to lines of test code than adding features to the product.
You’ll implement new features at roughly the same speed today as tomorrow as next year. You can drip feed more devs into the team every 4 months or so to build out velocity further.
If you’re willing to throw it away after the first release, you’d be silly not to ditch the tests, forget the architecture and just crank out something that works - best done by a solo dev, deploy each dev in a solo fiefdom from the beginning if you want to throw more devs at the problem.
In practice, almost all code is written as a mix between these two views and is slower and more expensive than either approach above because of it.
And that's exactly what good startups do: they run business tests before they write any code. Don't write code at all unless it's providing value to the customers or helping you learn something you need to know to provide value to the customers.
Now that this is taken care of, we come to the problem of the code itself. Each bit of structure you add, whether it's a line of code or a database field on a table, is a bit of infrastructure you may have to maintain, possibly forever.
Some folks want to take their eye off the business tests and move directly to system tests, testing and then coding to make sure everybody can easily understand and maintain any code that's written.
Most startups fail because they never got the business tests working right. They either never got around to creating them and making them pass, or they came up with something that worked but were unable to flywheel it or lost the plot somewhere. Some startups have almost-perfect code that nobody wants; that's actually one of the most common ways of failing.
So the natural state of affairs is to always be experiencing some kind of stress between value discovery and code quality. Personally I believe you solve a lot of this by changing the way you code and the way you look at coding, but there's too much to go into here. The key thing for most programmers to remember is that if you're dying of thirst in a desert, you're not going to care very much if the guy selling glasses of water has glasses that leak or water that's muddy. The value proposition always comes before anything else.
In my experience you should not treat all parts alike: the more foundational a part is, the more time you should dedicate to it.
It's important to think through the db schema properly; anything else will cripple your development, and the longer it runs the harder it will be to fix. You don't want to be sanitizing wrong data two years into the business.
If there's a library, it's better to spend time thinking through the proper API; the code can be improved later.
It's ok to have garbage as long as it can be isolated and you can keep on going. For example, we had configuration files that had to be synced with the db. That could have been automated, but it was ok to hardcode them in config files. It was not ok to hardcode them across the whole code. First one could be turned clean in the future, second one would've been a mess.
Invest in tests, especially in setting up the process. At the beginning they can be just smoke tests (this API returns success); as the startup grows you'll have more room to add proper tests.
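A smoke test at that level can be tiny. Here is a minimal sketch in Python, where `health_handler` is a hypothetical stand-in for a real endpoint handler:

```python
# Minimal smoke test sketch. health_handler is invented for illustration;
# in a real setup you'd hit your actual health/status endpoint.

def health_handler():
    """Stand-in for a real endpoint handler (hypothetical)."""
    return {"status": "ok"}, 200

def test_health_smoke():
    body, code = health_handler()
    # Smoke-level check: it responds and reports success, nothing more.
    assert code == 200
    assert body["status"] == "ok"
```

The point is not coverage; it's a cheap tripwire that tells you the service still comes up before customers do.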
Yes, I've seen this happen on some projects that I joined. The most hilarious story I have is when a funded startup spent 12 months with a team of 5 and the app would crash with a single user and little usage. I managed to rewrite a functional prototype/v3 in 3 months that worked much better. Others were so costly to refactor that they got shut down.
More often than not, it is the specification and the company culture that create this chaotic outcome.
b) what is the best calculation to make when trading off code quality vs features?
This one, personally, I like to put the responsibility on the dev team. Not having an exact spec is far from ideal, but the dev team should work with the business team to create a good-enough first version. If the code is garbage, you have to question the development team, period. If you make nano refactors (around 20 minutes) every day before you push your code and follow the community guidelines for the stack you're using, technical debt won't become a problem in the first stage.
When you're asking this question, you need to ask: why has the dev team written code that led to this situation? Do we have PR reviews? Coding conventions?
c) do most YC startups write tests and try to write cleanish code in V1 or does none of this matter?
I don't know about YC startups, but I can tell you that I'm yet to see a company with even 50% code coverage. Any time I mentioned writing tests, the other side looked at it as an unnecessary expense. Personally, I believe it is up to the dev team to identify key code components and write the tests. If you have a function that keeps breaking all the time, that is a great candidate for unit testing.
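For example, a frequently-breaking function can be pinned down like this. `parse_price` and its edge cases are invented for illustration; the idea is that each past breakage becomes an assertion:

```python
# Hypothetical fragile function: a price parser that kept breaking on
# edge cases ($ signs, thousands separators, missing cents).

def parse_price(raw: str) -> int:
    """Parse a user-entered price like '$1,299.50' into cents."""
    cleaned = raw.strip().lstrip("$").replace(",", "")
    dollars, _, cents = cleaned.partition(".")
    return int(dollars or 0) * 100 + int((cents or "0").ljust(2, "0")[:2])

def test_parse_price():
    # Each case below is one real breakage, frozen as a test.
    assert parse_price("$1,299.50") == 129950
    assert parse_price("10") == 1000
    assert parse_price("$0.05") == 5
```

Once the test exists, the function can be refactored freely without the same bug resurfacing a third time.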
It is possible to write clean code in V1; this is what I do today. I've faced so many situations where I didn't, and it always ended up costing me more time and too many working hours. I would rather delay the release of v1 and have something stable than try to please the business team at all costs.
Believe it or not, the business team doesn't give a f* about your codebase. Many times I reported security vulnerabilities and they thought I was creating problems lol. I've seen devs not report bugs because of the company culture.
As a developer, do your best and always keep learning and growing. If you do this, you'll produce better codebases naturally.
Negotiating with the business team is also key to a successful release.
There’s gonna be a big survivor bias here. You won’t hear about most of the startups that collapsed because the product just didn’t work.
b) Keep the bugs non fatal, make sure the features are worth it.
c) I’m not in YC, but yes. There’s a really good reason why tons of startups duct tape shit together with node and ruby, only to rewrite it later in something else.
Quality+Speed+Efficiency: cowboy coding |0-----1------2-------3------4----5| perfect iPhone
I would never expect a startup to be operating above 4 or 4.5, it might mean you are spending too much time future proofing.
The best teams operate around 3 or above, but they can do so because they are experienced, disciplined, trust eachother, have a set of tools they know very well, and can move at a quick pace because they automated a lot, have code patterns they follow and are not "re-inventing the wheel" or trying new frameworks for fun.
A LOT of startups are started by inexperienced developers who jump onto some new language or framework and end up doing a lot of non-core work, both from inexperience and from choosing a nascent framework. This immediately puts them below 3, probably between 1 and 2.
If you are at a 2, I would say you are doing OKAY; any less than that, and you are probably suffering from inexperience, a bad choice of frameworks, no tests, etc.
Yes. Velocity slows, features don't get out, new versions don't get released, investors don't see product progress, funding runs out.
> b) what is the best calculation to make when trading off code quality vs features?
Wrong question. Avoid code. Avoid implementing things at all, use other people's APIs, fake features with manual scripts that you eventually automate.
> c) do most YC startups write tests and try to write cleanish code in V1 or does none of this matter?
Yes. expect(result).toEqual("hello world"). You don't have to do TDD if you don't want to, but it's not fucking hard to record the output once, save it, make a test and then know what you broke later. Don't be lazy.
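The "record the output once, save it, make a test" workflow can be sketched as a golden-file test. Here is a minimal version in Python; the function under test and the recording file name are hypothetical:

```python
import json
from pathlib import Path

GOLDEN = Path("golden_greeting.json")  # hypothetical recording file

def build_greeting(name: str) -> dict:
    """Stand-in for whatever code you want to pin down."""
    return {"message": f"hello {name}"}

def test_greeting_matches_recording():
    result = build_greeting("world")
    if not GOLDEN.exists():
        # First run: record the output once and eyeball it.
        GOLDEN.write_text(json.dumps(result, sort_keys=True))
    # Every later run: any change to the output fails loudly.
    assert json.loads(GOLDEN.read_text()) == result
```

This is the cheapest possible safety net: you never specify behaviour up front, you just freeze what the code does today and get told when it changes.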
b) The system's sunset date is the tiebreaker: it's hard to justify shit code for a space probe, and it's hard to justify perfect code for an email collector.
c) Automated tests are a development tool. They're not there to make sure your code works; they're there to ensure your code is sufficiently decoupled, modular, maintainable, and easily scalable in the future. They're also frequently used to spike problems that are otherwise hard to solve.
The level of importance you place on tests is super dependent on your devs. Some types of project I wouldn't write tests for; others I do. It depends on scope and experience.
d) yes, maybe
I've also seen a good lead come in and rescue the direction of the code. That requires expertise in the language, a good understanding of how to rescue legacy code and political power within the organisation.
If you have to work with bad code make sure you find ways to enjoy work. Also don't allow yourself to think you're a bad Dev because you can't work fast. It's the code not you. If no one will allow you to get tests in place and fix it, it's not your fault.
I guess in the end I would still go just as fast and make just as many mistakes, but I tried to encourage them to keep clear boundaries around the components so that the really bad stuff can be rewritten. I guess they'll probably be a huge success, and that's the only thing that matters really!
I would say writing the code "with care" simply depends on the initial team. There are plenty of startups that take the extra 20% or so time to build it right and with care that are successful.
I worked at one company with a wonderful code base for seven years. They're about to hit 100m ARR. We wrote tests, mostly used Java, and cared about building reusable components and a platform. I would say hitting 100m ARR in that timespan is good.
But as soon as the startup is earning money and winning customers, a rewrite with better code-quality standards must be planned. Unfortunately, maintaining high quality standards also means investing tons of time in setting up the tools and the development environment, and sometimes it is pretty hard (especially when dealing with IDEs like IntelliJ and you want to use your own Checkstyle).
In my opinion it's:
code quality = "How long are we going to need this feature" * "How much money are we getting from people who use this" * "Cognitive load added by the feature to the whole project"
V1 projects don't bring big profits, can be shut down any time and their codebase is relatively simple. I'd keep the code quality low until some of these factors begin to change.
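Treated as a back-of-the-envelope heuristic, the formula above can be sketched in code. Everything here is invented for illustration: the 1-5 weights, the feature names, and the idea of ranking by the raw product.

```python
# Hedged sketch of the heuristic above. The weights are made up;
# the product of the three factors just ranks where code-quality
# effort should go.

def quality_priority(lifetime_months: int, revenue_weight: int,
                     cognitive_load: int) -> int:
    """Higher score suggests investing more in code quality."""
    return lifetime_months * revenue_weight * cognitive_load

# A throwaway V1 experiment vs. the billing core (hypothetical numbers):
experiment = quality_priority(lifetime_months=2, revenue_weight=1, cognitive_load=1)
billing = quality_priority(lifetime_months=36, revenue_weight=5, cognitive_load=4)
# billing scores far higher, so it earns the polish; the experiment stays scrappy.
```

The exact numbers don't matter; what matters is that a short-lived, low-revenue, simple feature scores near zero, which is the comment's argument for keeping V1 quality low until the factors change.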
Great code with no users is code that is never going to run.
b) Just take these into account: will the feature introduce major bugs, like database corruption, or only minor ones, like UI glitches? Also consider whether the feature is really necessary for the MVP and will have a considerable financial return. In my startup I definitely don't deliberately write bad code, but time and funds are limited, so it is OK for some of the code to be hacky (though never horrendous); I just put a TODO there to remind myself to fix it after the product release.
c) I favour an agile approach: implement the feature first without unit tests and see how it works with the overall architecture. I only unit test code that can cause major bugs, or code that involves heavy math.
b. It's all about extracting the max value out of your dev time. Will refactoring / improving code quality mean that future features get delivered quicker?
c. Most POC don't have tests from my experience. They are usually added later.
- the most expressive languages might not be the most readable. A language that can match YOUR way of thinking and MY way of thinking might not lead to you being able to read MY code.
one example is Perl - where you can say:
if ($foo) { bar(); }
bar() if $foo;
bar() unless !$foo;
etc...
the takeaway here is: the most efficient way to get an idea out of my head and into code might be person-specific and hard to maintain.
- Working code can lead to survival. Only survival can lead to the time to do it "right"
The more specific an outcome you're looking for the more factors you'll have to consider. You can loosely think of the relationship between code and companies like the code is "the matrix" and the tangible business world is "the real world". It might help to think of the code as a child being raised in the matrix.
The first product market fit stage is the hardest. Here it befits the code to be maximally extensible such that you can most effectively steer it around the market landscape and most effectively capitalize on any discoveries made. But you also need it to work decently enough to have traction. This stage is like parenting a baby that needs to decide its life mission and begin it during the first few years of its life. Its main purpose is self-discovery, but also it needs to be set up to become whatever it discovers it wants to be. Here, luck is the main name of the game.
The next stage is growth (farming the land you've staked, becoming the thing you've decided your codebase baby's life is about). Here you need less extensibility and more fidelity. You're clear on what your code needs to do, and you just need to make sure you do it well enough to last long term. But also things get more complicated at the org level. Now a team has to be built out. The codebase must now mature, and that means that it must gain a firmer grasp on its purpose (high fidelity architecture and infrastructure) and learn to interface with the world (be geared towards long-term maintainability).
After you exit that stage, you exit the startup stage entirely. Generally, if you're a businessperson and it's available to you, having good engineers (human communication skills above technical skills, understand the holistic function of engineering within the context of the rest of the company) is the best solve to this problem. They will have the vision to assess the field and the communication skills to inform you about it.
You will feel the urge to carve the unpredictability of the outcome down with measurements, metrics, and calculations but this is mostly a fool's errand. If you're doing something brand new there is no defined path and it is about pathfinding, not measuring your performance along a path. There are a ton of resources that all give opinion on the best way through this beginning patch of woods, but the true reality is that at the end of the day, getting through woods that no one has ever gotten through is something that can only be mapped in hindsight.
I've seen acquisitions fail over code quality.
in keeping with the analogy, every business has a different appetite for debt. the debt to equity ratio of your current position should keep you able to take on debt when you need to. the debt should never get so great that it cannot be paid down. being without debt is holding a position that doesn’t leverage your ability to take debt.
Post product/market fit, care about it deeply and enforce strictly.
My understanding is that a major reason they failed was poor code and an inability to maintain performance.
They're not at odds.
Be good enough to not fail an ethics test re customer data.
Everything else is sales.
2. Fix up your code.
that's a false choice. in reality you can have all three
'get things done pretty fast' is the only red flag in your story -- if you want your life to be truly worthwhile you must make this codebase unproductive as well
1) Code doesn't really matter as long as it solves the issue you are trying to solve. Don't expect your code to be beautiful from day 1. Be responsible and train your devs to be responsible as well, because in a startup you code, fix and deploy your own stuff. What does matter, though, is code complexity. Manage your complexity, don't overcomplicate things if you don't need to. No need to design a Ferrari when all you need is a horse and carriage.
2) Process matters. From day 1, make code reviews/pull requests the default. If you are the most senior dev, or a technical founder/CTO in a small startup be prepared to spend about 50% of your time reviewing code and helping others. You won't get to code as much, but you'll sleep better at night knowing at least you've tried to catch some bugs before they reach production. In an early stage startup, you will not have the time nor the resources to test everything, but this will give you peace of mind.
3) Tests matter. That being said, in the beginning only test mission critical stuff. If you find a critical bug, fix it and then write a test for it. If a new feature breaks something that already works it is a big no-no and might lose you customers. Testing will change for you as you progress with your startup. Start by making the process easy for the devs to run the tests locally. Then, progress in having CI. Then, maybe have CD as well.
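The "find a critical bug, fix it, then write a test for it" loop above might look like this. The empty-cart bug and `cart_total` are invented for illustration:

```python
# Regression-test pattern: once a critical bug is fixed, a test pins it
# down so it can never silently come back.

def cart_total(items):
    """Sum (price_cents, qty) pairs.
    Fix: an empty cart used to crash checkout; it now totals to 0."""
    if not items:
        return 0
    return sum(price * qty for price, qty in items)

def test_empty_cart_regression():
    # This exact input was the production bug; keep it covered forever.
    assert cart_total([]) == 0

def test_cart_total_basic():
    assert cart_total([(500, 2), (199, 1)]) == 1199
```

This matches the comment's ordering: you don't test everything up front, but every fire you put out leaves a tripwire behind it.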
4) Worst case scenario: full rewrite. If a 6- to 12-month-old startup decides on a full rewrite, I'll give them the benefit of the doubt; maybe their whole use case has changed, maybe they DO need a rewrite. That's fine. But if you are a SaaS older than that, your dev team is around 10 devs, and they are all busy solving critical bugs and putting out fires, a rewrite might mean your death.
5) Architecture matters. This matters more than code, in my opinion. Say you have a horrible piece of mission-critical code; it is SLOW and begins affecting your business. That piece of code will need a rewrite, for sure. But which would you rather do: spend 30 days fixing it while losing customers, or just spin up another machine/add CPU/add RAM? That is what good architecture buys you: time to think things through, code that runs well and, perhaps most importantly, developers who can actually code.
Bad architecture is the leading cause of rewrites. Is that beautiful microservice architecture giving your small team headaches? Did you overcomplicate things, perhaps? You see, bad architecture is very hard to fix. People seem to underestimate how far a simple API + DB can scale, and try to mitigate risk by copying whatever FAANG does. Start small, scale later once you have the resources to do so.
TL/DR: Code quality doesn't matter if you solve your problem. What matters more is mitigating the risks that come with writing code in general. See above for some ideas from my own personal experience.