HACKER Q&A
📣 manx

Are you working on a big software project? Happy with the architecture?


Please share your experience and insights about architectural decisions in bigger software projects.


  👤 austin-cheney Accepted Answer ✓
I started a JS-based file-sharing application a few years back. It started as a thought experiment of just exposing the file system to the browser in a familiar OS kind of user interface. As new features have been added over time, it has become more like a high-level OS.

https://github.com/prettydiff/share-file-systems

Some architectural decisions I made:

* Micro-service based

* I am now using WebSockets for all services and communication. In this application that has proven to be 7x faster than HTTP.

* I have a universal format wrapping all service messaging, kind of like sending a letter in an envelope. This allows me to use a single service endpoint for all services and a single means of service monitoring.
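The envelope idea can be sketched roughly like this. This is a minimal illustration, not the project's actual wire format; the field names (`service`, `agent`) and the `wrap`/`route` helpers are assumptions for the example:

```typescript
// Hypothetical envelope: every message carries its service name and
// sender identity, so one endpoint can route all traffic.
interface Envelope<T> {
    service: string; // which service should handle the payload
    agent: string;   // sender identity (user/device)
    data: T;         // the service-specific payload
}

function wrap<T>(service: string, agent: string, data: T): string {
    const envelope: Envelope<T> = { service, agent, data };
    return JSON.stringify(envelope);
}

// A single endpoint routes every incoming message by its service name,
// which also gives one choke point for logging/monitoring all traffic.
type Handler = (data: unknown, agent: string) => void;
const handlers: Record<string, Handler> = {};

function route(message: string): void {
    const envelope = JSON.parse(message) as Envelope<unknown>;
    const handler = handlers[envelope.service];
    if (handler === undefined) {
        throw new Error(`Unknown service: ${envelope.service}`);
    }
    handler(envelope.data, envelope.agent);
}
```

The nice property is that new services only add a handler; the transport, endpoint, and monitoring code never change.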

* I did not like the existing test automation solutions based upon CDP, because they are too slow and fragile. Also, they do not provide support for a peer-to-peer experience. So I wrote my own test automation solution for testing in the browser, and it's much faster and more predictable.

* I am using an identity based authentication mechanism to restrict access to known users/devices.

* I just write to the file system instead of using a database for data storage. This allows for much faster application start up times and lowers complexity. The performance difference is insignificant once you account for the fact that, in most cases, opening a file is more costly than arbitrarily writing to the file system.
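File-system-backed storage can look like the sketch below. The directory layout and the `save`/`load` names are illustrative assumptions, not the project's actual code:

```typescript
// Minimal sketch: each record is just a JSON file on disk, so there is
// no database process to install, start, or wait on at launch.
import { mkdirSync, readFileSync, writeFileSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

const storageDir = join(tmpdir(), "app-settings");
mkdirSync(storageDir, { recursive: true });

function save(name: string, record: object): void {
    writeFileSync(join(storageDir, `${name}.json`), JSON.stringify(record));
}

function load<T>(name: string): T {
    const text = readFileSync(join(storageDir, `${name}.json`), "utf8");
    return JSON.parse(text) as T;
}
```

Backups, inspection, and debugging also get easier: the "database" is readable with any text editor.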

* I figured out how to install certificates using automation in both Windows and Linux which allows me to run the application using encrypted transmission protocols (https/wss) on localhost.


👤 magicalhippo
Not sure what you classify as big, but we're a relatively small team (grown to 7 devs now) working on a CRUD-ish B2B desktop application, mostly installed on-prem. We have a few hundred customers, well over half have some custom modules or integrations. Our niche involves sending and receiving data to official systems.

Between our customers and the gov't, we constantly have to deal with a lot of external systems that change. With our customers we're often notified when stuff breaks, as very few of our customers have a complete overview of their dependencies.

So, being a small team with a relatively large codebase in constant flux and with customers who are quite price sensitive (so they can't afford a huge QA team), one of the better architectural decisions we made was to accept that we will have bugs and instead focus on reducing impact.

Application-wise this means ensuring DB schema upgrades are done such that older versions of our application can use an upgraded DB without issue, and having a way for users to easily launch older versions of our application. Application updates are distributed frequently, and DB upgrades happen automatically overnight.
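One common way to get this property is to make schema migrations strictly additive, so older application versions that simply ignore unknown columns keep working against the upgraded DB. A sketch of that idea (the migration contents and helper names are hypothetical, not their actual upgrade mechanism):

```typescript
// Hypothetical sketch: versioned, additive-only migrations. Because
// upgrades only ever add tables/columns (never rename or drop), an
// older application version still reads and writes the upgraded DB.
interface Migration {
    version: number;
    statements: string[]; // additive DDL only
}

const migrations: Migration[] = [
    { version: 1, statements: ["CREATE TABLE orders (id INTEGER PRIMARY KEY)"] },
    { version: 2, statements: ["ALTER TABLE orders ADD COLUMN note TEXT"] },
];

// Return the DDL needed to bring a database at `current` up to date,
// in version order; a fully upgraded DB yields an empty list.
function pendingStatements(current: number): string[] {
    return migrations
        .filter((m) => m.version > current)
        .sort((a, b) => a.version - b.version)
        .flatMap((m) => m.statements);
}
```

Since nothing is ever removed, the nightly automatic upgrade can run safely even while some users are still on last month's build.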

Few customers have a separate test system, so this also allows them to test new major versions in production before rolling out to all users.

Another choice that's worked out well is to not adhere to DRY when it comes to the integrations. Often customer B needs an integration that's the same as what customer A has, but with a few tweaks. For example, they might both use the same ERP system, but with slightly different configuration or workflow.

In those cases we've found it's almost always much better to copy A's code and modify it, rather than refactoring into a common base. Yes, we might have to fix a bug multiple times, but it also means that when changing A's integration due to a change request, we don't have to worry about messing things up for customer B.