There was no testing and no documentation of the feature sets involved. Rewrote the code in under 10k LOC, got huge cost and performance improvements, and documented the feature set. Other engineers could finally look at the code and maintain it. Got the critical sections of the expert system to 100% branch coverage. Had to fight other engineers who were saying "100% coverage is bad! I read that on Medium!" I just did it anyway, out of my own ethical concerns about writing bad software in this area, and it only took me ~4 more hours to get from 80% to 100% coverage on the critical paths.
Some of the most stressful software I worked on. I added shitloads of logging to make sure that if anything did go wrong, we'd find out what happened.
At some point someone wasn't notified about something the expert system was supposed to catch. I felt like I was going to faint, because this meant someone was seriously hurt. Unfortunate twist: my code worked correctly. The system on the other end that was supposed to send the alert (maintained by the "100% coverage is bad" guy) died. Very unfortunate. The other engineer didn't feel bad about it at all.
I think about this a lot when I'm speaking to engineers. I use it as a litmus test for who I really enjoy working with: when they read a Medium article, can they apply critical thinking to figure out whether it relates to their circumstances or not? And will they hear out a coworker, or refuse to?
Probably the cruftiest code was a ship autopilot's UI code: 20k lines of C in one file with no automated testing. There was a very manual process for loading code onto a test display. The debugging experience was terrible. There was a way to manually inspect memory addresses, but that was about it. Oh, and there were threads, managed by the homegrown OS (basically just a scheduler that ran on a 10 ms tick or on an interrupt). These threads had priorities, so they could sometimes get into deadlocks waiting for each other.
As insane as it was, it was the best learning experience I could have had. I’m glad I stuck with it. I learned a lot about low level systems, prioritizing the highest value refactoring, and introducing automated testing to a legacy code base.
When we expanded into South America we bought a Brazilian company. I guess it didn't work out because within 2 years the company decided to shut down the Brazilian office and (unfortunately) all of the Brazilian-based developers were let go.
The website was handed over to us, and we were tasked with updating it to reflect that the office was shutting down. It was written in a PHP framework I'd never heard of, the dev and prod websites were running on the same server, and it was obvious development was done via SSH on the box itself. No git repo or pipeline or testing framework or anything like that.
We tried saving it but after 2-3 days we gave up and converted over to a static site hosted via S3.
Ran out as soon as I could. In general it's worth remembering that if your overall team isn't performing well you still can be fired regardless of what your immediate manager thinks.
For better or worse I had a childhood which taught me not everyone's going to like me, but you just need to find people who do. In the case of employers I've left several places where I just didn't like the culture. It's worked extremely well for me.
For example, a function named:
extract_metadata
or run_procedure
would be evaluated: the system would go through the Python APIs, get the name of the function, split it on `_`, and eval each part against a set of meta-functions. The system was big: tens of modules, 100k's of LoC, many, many functions. Nobody understood it or how to make a change to it. Simply renaming a function changed the execution path of the program (since function names were part of the API at runtime!).
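A minimal sketch of how that runtime name-dispatch might have worked (the meta-function names and registry here are hypothetical, not from the original system):

```python
# Hypothetical reconstruction: each "_"-separated part of a function's
# name maps to a meta-function that mutates a shared context in order.
META_FUNCTIONS = {
    "extract": lambda ctx: ctx.setdefault("steps", []).append("extract"),
    "metadata": lambda ctx: ctx.setdefault("steps", []).append("metadata"),
    "run": lambda ctx: ctx.setdefault("steps", []).append("run"),
    "procedure": lambda ctx: ctx.setdefault("steps", []).append("procedure"),
}

def evaluate(func_name: str) -> dict:
    """Split the name on "_" and run each part's meta-function in order."""
    ctx: dict = {}
    for part in func_name.split("_"):
        META_FUNCTIONS[part](ctx)   # rename the function -> different execution!
    return ctx
```

With this scheme, `evaluate("extract_metadata")` runs the `extract` step and then the `metadata` step, which is exactly why renaming a function silently changed program behavior.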
Since the output of this system was predictable (a JSON file) based on the input (~30 parameters), I ended up writing integration tests of expected JSON output vs. input, and then I rewrote the system while keeping the integration tests passing!
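That characterization-test approach can be sketched as follows (a minimal golden-file harness; the two system functions and their parameters are hypothetical stand-ins): record the JSON the legacy system emits for a given input, then assert the rewrite produces the same output.

```python
import json
import os
import tempfile

# Hypothetical stand-ins; the real system took ~30 parameters.
def legacy_system(params: dict) -> dict:
    return {"keys": sorted(params), "count": len(params)}

def new_system(params: dict) -> dict:
    return {"keys": sorted(params), "count": len(params)}

def record_golden(params: dict, path: str) -> None:
    """Capture the legacy system's JSON output as the 'golden' expectation."""
    with open(path, "w") as f:
        json.dump(legacy_system(params), f, sort_keys=True)

def check_against_golden(params: dict, path: str) -> bool:
    """The rewrite passes if its output matches the recorded legacy JSON."""
    with open(path) as f:
        expected = json.load(f)
    return new_system(params) == expected

# Demo: record a golden file, then verify the rewrite against it.
params = {"region": "br", "env": "prod"}
path = os.path.join(tempfile.mkdtemp(), "golden.json")
record_golden(params, path)
matches = check_against_golden(params, path)
```

The key property is that the tests pin down observable behavior only, so the internals are free to be rewritten wholesale underneath them.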
The final system I built is still in use at the company, which I left ~4 years ago, by thousands of people! Certainly proud of fixing that mess.
- some manager picks the cheapest contractor
- said contractor ships enough code to tick the check-boxes on the contract
- people who will actually use the project will not get their hands on it until it's too late
- project is fundamentally broken
- management: * surprised Pikachu meme *
- management: you will need to fix this ASAP and throwing blame around is childish, so don't bother
Then there's negligence like interns throwing code at a wall, not caring about quality, and managers not doing code reviews. Oh, and it's code shared across several product lines in hundreds of products. A+, guys, for letting me clean up your "science experiment" dishes. You got your giant wall of dead code, your commented-out code, debugging instrumentation code, unfinished code, code that duplicates code elsewhere, and code no one understands because they didn't document it.
Had to swap FTP out with rsync.
Of course, that entailed numerous hours of master-network-engineer sleuthing, tracking down data loss across several hops.
Weird how we trust the hardware buffer to keep the data intact prior to its packet hardware checksumming.
Meta programming everywhere, debugging what you expect to be a basic bug ends up taking hours.
They even managed to rewrite a bunch of the Active Record code so that the models talk to an API in our big app instead of the DB. There's no good reason why; all it does is confuse you every time you look at it. And even when you figure out what they've done, you waste even more time staring blankly and asking "why…"
Fucking awesome, loved every second of it. You won’t learn this stuff in school.
Helped out a friend with some coding problems; discovered a file containing a class called "superHoegen" with the member functions "SuperHoegen1" … "SuperHoegen12". Took a while to realize that this was the matrix class of the renderer and that something like ".SuperHoegen5()" would invert the matrix.
Also, there were no bugs in the superHoegen class, just funny naming :)
It was … I don’t know how many … lines of COBOL-66, and part of the job we had in late 1989 was to bring it into the modern future of COBOL-77. My TOP SECRET/SCI clearance wasn’t complete yet, so I got tasked with helping to rewrite this CONFIDENTIAL bugger.
Turned out that the only thing classified about it was the code that handled the great circle calculations for determining distance from one airfield to another. I recall that one of the first things we did was to move that one routine to a separate printout, so that would have to stay locked up in the safe unless we were doing something with it, but all the rest of the code was unclassified and could be kept on our desks.
After that, about halfway through the conversion, it dawned on me that the code in this system was split into two parts — one huge monster routine that literally did everything that the code could do, and another copy where you could pick and choose what things you wanted. It occurred to me that we could cut out half of the code of the system by using just one set of routines for both functions, and if someone wanted to run a “kitchen sink” report, then we could just fill in the fields behind the scenes as if they had actually manually selected every option that was possible. No one would ever know the difference, and we wouldn’t have to maintain two copies of every single routine.
That one change cut the code base down by about 40% in size. Thousands and thousands of lines of code, at least.
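The consolidation idea generalizes beyond COBOL; here's a tiny Python sketch (the option names are hypothetical) of collapsing the "kitchen sink" routine into the pick-and-choose one by pre-filling every option behind the scenes:

```python
# Hypothetical report options; the real system had far more.
ALL_OPTIONS = ["distance", "fuel", "runways", "weather"]

def report(selected_options):
    """The single pick-and-choose routine: the only copy to maintain."""
    return {opt: f"{opt} section" for opt in ALL_OPTIONS
            if opt in selected_options}

def kitchen_sink_report():
    """The old 'monster' routine becomes a one-liner: select everything."""
    return report(ALL_OPTIONS)
```

Because the full report is now just a call with every option selected, the two code paths can never drift apart, which is where the ~40% reduction came from.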
We did later discover that there had been some minor cosmetic tweaks put into the system, so that if you had the code set up to generate properly formatted output on the READINESS system, when you took that exact same code and ran it on the production NMCC system it would then result in an extra blank page being printed between every page with text on it. But if you set it up to properly format on the production NMCC system, it would be a line short on the READINESS system and output would slowly get out of kilter and wouldn’t match up with the natural page-breaks of the fan-fold paper.
The older version of the code ran perfectly on both sides. Something buried deep in the old code was responsible, and we were never able to figure out what was going on well enough to replicate it properly.
So, yeah — I don’t officially admit to having ever programmed in COBOL, but I did. And this is probably one of the best programming achievements I ever had in my life as a person who does not write code.
Suppose the line count of "terrible" code re-written was t and the line count of "great/clean/elegant" code written for bankrupt companies was g. What is the median value of t/g experienced by hn readers?