Our startup built a plugin for Microsoft Outlook. It was successful, and customers wanted the same thing but for Outlook Express. Unfortunately, OE had no plugin architecture. But Windows has Windows hooks and DLL injection. So we were able to build a macro-like system that clicked here and dragged there and did what we needed it to. The only problem was that you could see all the actions happening on the screen. It worked perfectly, but the flickering looked awful.
At lunch, someone joked that we just had to convince OE users not to look at the screen while our product did its thing. We all laughed, then paused. We looked around at each other and said "no, that can't work."
That afternoon someone coded up a routine to screenshot the entire desktop, display the screenshot full-screen, do our GUI manipulations, wait for the event loop to drain so that we knew OE had updated, and then kill the full-screen overlay. Since the overlay was a screenshot of the screen, it shouldn't have been noticeable.
It totally worked. The flickering was gone. We shipped the OE version with the overlay hiding the GUI updates. Users loved the product.
You can find its ancient source code from 1999 here: https://github.com/ssg/sozluk-cgi
The platform is currently at https://eksisozluk1999.com because its canonical domain (https://eksisozluk.com) got banned. Any visitors from outside Turkey should get redirected anyway.
Since it's still a legal business entity in Turkey, it keeps paying taxes to the Turkish government, and even honors content removal requests despite being banned. Its appeals against the bans have been sitting with the Constitutional Court for almost a year now.
A news piece from when it was banned for the first time this year: https://www.theguardian.com/world/2023/mar/01/eksi-sozluk-wh...
Its Wikipedia page: https://en.wikipedia.org/wiki/Ek%C5%9Fi_S%C3%B6zl%C3%BCk
So I experimented with screen hotkey tools. I knew about QuicKeys, but its logic and flow control at the time were somewhat limited. So I wrote and debugged a tool that:
1. Listened to its own email box: cole.exporter@hbo.com
2. You emailed it your password (security? what security?)
3. Seeing such an email, it logged out of its own email and logged in to yours.
4. Then it opened your address book and copied out entries one by one.
5. It couldn't tell by any other method that it had reached the end of your address book, so if it saw the same contact several times in a row it would stop.
6. Then it formatted your address book into a CSV for importing into Exchange, and emailed it back to you.
7. It logged out of your account, back into its own, and resumed waiting for an incoming email.
This had to work for several thousand employees over a few weeks. I had 4 headless pizza-box Macs in my office running this code. Several things could go wrong, since all the code simply assumed that the UI would be the same every time. So while in the "waiting" state I had the Macs "beep" once per minute, and each had a custom beep sound, which was just me saying "one", "two", "three" and "four". So my office had my voice going off an average of once every fifteen seconds for several weeks.
Screen readers were reading form controls, but no matter what I did they weren't activating any of their web-specific features in Chrome. I spent weeks carefully comparing every single API between Firefox and Chrome, making the tiniest changes until they produced identical results - but still no luck.
Finally, out of ideas, I thought to build Chrome but rename the executable to firefox.exe before running it. Surely, I thought, they hadn't hard-coded the executable names of browsers.
But of course they had. Suddenly all of these features started working.
Now that I knew what to ask for, I contacted the screen reader vendor and asked them to treat Chrome as a web browser. I learned another important lesson: I probably shouldn't have waited so long to reach out and ask for help. It ended up being a long journey to make Chrome work well with screen readers, but I'm happy with the end result.
I needed cache eviction logic as there was only 1 MB of RAM available to the indexer, and most of that was used by the library that parsed the input format. The initial version of that logic cleared the entire cache when it hit a certain number of entries, just as a placeholder. When I got around to adding some LRU eviction logic, it became faster on our desktop simulator, but far slower on the embedded device (slower than with no word cache at all). I tried several different "smart" eviction strategies. All of them were faster on the desktop and slower on the device. The disconnect came down to CPU cache (not word cache) size / strategy differences between the desktop and mobile CPUs - that was fun to diagnose!
We ended up shipping the "dumb" eviction logic because it was so much faster in practice. The eviction function was only two lines of code plus a large comment explaining all the above and saying something to the effect of "yes, this looks dumb, but test speed on the target device when making it smarter."
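For illustration only, the shape of that "dumb" eviction might look like the sketch below (in Python; the real version was a couple of lines on the embedded device, and the entry limit here is invented):

```
# Sketch of the "dumb" word-cache eviction described above (illustrative only;
# the real code ran on an embedded device, not in Python, and MAX_ENTRIES is a
# made-up limit).
MAX_ENTRIES = 512
word_cache = {}

def cache_word(key, value):
    # Yes, this looks dumb, but the smarter LRU-style variants were slower on
    # the target CPU. Test on the real device before making this cleverer.
    if len(word_cache) >= MAX_ENTRIES:
        word_cache.clear()
    word_cache[key] = value
```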
The ASCII 'screenshots' came out beautifully. From then on, when a call came in, we told them to use the view log menu item and scroll to the trade time, and they'd shut up quick. A picture is worth a thousand words indeed.
Everything would be ok if we could just delete the table, but that would involve opening a session and that wouldn’t be possible because of the immediate crashing.
On a whim I thought, “well what if I just have a really short amount of time to connect before it reboots?” So I opened up 5 terminal windows and executed “while true; do mysql -e ‘drop table xyz;’; done” in all of them
After about 10 minutes one of the hundreds of thousands of attempts to connect to this constantly rebooting database succeeded and I was able to go home at a reasonable time.
The CD drive in my first computer broke. We couldn't afford to get a new one, and after almost a year of using floppies I got a bit tired of having to carry them across the mountains every time I wanted to play a new game. (context: I lived in a small village in southern Poland at the time -- imagine Twin Peaks, but with fewer people and bigger hills). Sometimes getting a copy of Quake or Win 95 took several trips back and forth, as I didn't have enough floppies and the ones I had would get corrupted.
I turned 10 and finally decided to disassemble the drive and try to fix it. I found the culprit, but I realised that I needed a lubricant for one of the gears. In that exact moment my little brother was just passing by our "computer room", eating a bun with kielbasa (the smoky, greasy kind which is hard to find outside of PL). I put some of that stuff on a cotton swab, lubricated the gears in the drive, and magically fixed it. The drive outlived my old computer (may it rest in pieces). I miss a good Kielbasa Wiejska.
So I took double-sided tape, stuck a plastic gear on the wheel, and mounted a servo with another gear on the side, connected to a Raspberry Pi, which would turn the wheel when my phone entered a geofence around the flat.
Picture: https://ibb.co/nDvwndp
I even had a bash script to calibrate the servo which would turn the wheel and ask which temperature it set, so it could figure out the step size.
Adult me is both horrified and impressed at this creation.
That SSD doesn't even have a file system on it; instead it directly stores one monstrous struct array filled with data. There's also no recovery mechanism: if the SSD breaks, you need to restore all the data from a backup.
But it works, and it's mind-bogglingly fast and cheap.
Well one day a mistake was finally made. Some of the devices went into a sort of loop. They’d start running the important process, something would go wrong, and they’d just retry every few minutes.
We caught the issue almost instantly since we were watching the deploy, and were able to stop updates before any other devices picked it up. But those that already got it were down.
We could ask the devices to send us the output of a command remotely, but it was too limited to be able to send back an error log. We didn’t have time to send back like 255 characters at a time or whatever, we needed to get it fixed ASAP.
And that’s when the genius stupid hack was suggested. While we couldn’t send up a full log file, we could certainly send up something the length of a URL. So what if we sent down a little command to stick the log on PasteBin and send up the URL to us?
Worked like a charm. We could identify what was going wrong and fix it in just a few minutes. After some quick (but VERY thorough) testing everything was fixed.
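The trick itself is tiny. A rough sketch of the idea, assuming Python with the requests library and a completely hypothetical paste endpoint and log path (the real devices just ran whatever small command was pushed down to them):

```
# Illustrative sketch only: upload a log file to a paste service and print the
# short URL so it can be sent back over the length-limited channel.
# PASTE_URL and LOG_PATH are hypothetical placeholders, not the real endpoints.
import requests

PASTE_URL = "https://paste.example.com/api/create"
LOG_PATH = "/var/log/important-process.log"

with open(LOG_PATH) as f:
    resp = requests.post(PASTE_URL, data={"content": f.read()})

# The response body is assumed to be the URL of the new paste.
print(resp.text.strip())  # short enough to send back through the limited command channel
```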
So I downloaded the HTML for all pages required for this exact flow and removed the previous sick days. I then changed my /etc/hosts file, gave them my computer, and prayed that they wouldn’t try to visit any page other than the ones I had downloaded.
Worked like a charm. Later I called in sick myself.
Every time it happened, it made for a long heat up cycle to warm the water and rads and eventually the house.
So I built an Arduino-controlled NC relay that removed power for 1 minute out of every 120. That was often enough to eliminate the effect of the fault, but not so often that I worried about too much gas building up if the boiler ever failed to ignite. 12 failed ignitions per day wouldn’t produce a build-up to be worried about.
Those ~20 lines of code kept it working for several years until the boiler was replaced.
It was a project scheduled for 2-3 months, for a large corporation. The customer wanted a button that a user would click in the old system, requesting a record to be copied over to the new system (Dynamics CRM). Since the systems would be used in parallel for a time, it could be done repeatedly, with later clicks of the button sending updates to the new system.
I designed it to run on an integration server in a dedicated WS, nothing extraordinary. But 3 days before the scheduled end of the project, it became clear that the customer simply would not have the server to run the WS on. They were incapable of provisioning it and configuring the network.
So I came up with a silly solution: hey, the user will already be logged in to both systems, so let's do it in their browser. The user clicked the button in the old system, which invoked a javascript that prepared the data to migrate into a payload (data -> JSON -> Base64 -> URL escape) and GET-ed it as a URL parameter to a 'New Record' creation form in the new system. That entire record type was just my shim; when its form loaded, it woke another javascript up, which triggered a Save, which triggered a server-side plugin that decoded and parsed the data and then processed it, triggering like 30 other plugins that were already there - some of them sending data on into a different system.
I coded this over the weekend and handed it in, with the caveat that since it had to be a GET request, it simply would not work if the data payload exceeded the maximum URL length allowed by the server, ha ha. You will not be surprised to learn the payload contained large chunks of HTML from rich text editors, so it did happen a few times. But it ran successfully for over a year until the old system was eventually fully deprecated.
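The encoding chain (record -> JSON -> Base64 -> URL escape -> GET parameter) is simple enough to sketch. This is an illustration in Python rather than the original browser-side JavaScript, and the form URL and field names are made up:

```
# Sketch of the payload encoding described above (the original was browser-side
# JavaScript; the form URL and field names here are hypothetical).
import base64, json, urllib.parse

record = {"title": "Some record", "body": "<p>large rich-text HTML...</p>"}

payload = urllib.parse.quote(
    base64.b64encode(json.dumps(record).encode("utf-8")).decode("ascii")
)

url = f"https://crm.example.com/new_record_form?payload={payload}"

# Caveat from the story: if len(url) exceeds the server's maximum URL length,
# the GET request simply fails - which did happen with big rich-text bodies.
print(len(url), url[:80] + "...")
```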
(Shout out to my boss, who was grateful for the solution and automatically offered to pay for the overtime.)
The CMS was absolutely terrible to work in. Just one small example: It forced every paragraph into a new textarea, so if you were editing a longer news story with 30 or 40 paragraphs, you had to work with 30 or 40 separate textareas.
So I basically built a shadow CMS on top of the crappy CMS, via a browser extension. It was slick, it increased productivity, it decreased frustration among the editors, and it solved a real business problem.
If we had had a security team, I'm sure they would have shut it down quickly. But the company didn't want to pay for that, either!
Eventually the cause was narrowed down: randomly, when the machine was stressed, the second half (actually, the final 2052 bytes) of some physical page in memory would get zeroed out. This wasn't great for the indexservers, but they survived due to the defensive way that they accessed their data. But when we tried to use these new machines for Gmail, it was disastrous - random zeroing of general process code/data or even kernel data meant things were crashing hard.
We noticed from the kernel panic dumps (Google had a feature that sent kernel panics over the network to a central collector, which got a lot of use around this time) that a small number of pages were showing up in crash dump registers far more often than would statistically be expected. This suggested that the zeroing wasn't completely random. So we added a list of "bad pages" that would be forcefully removed from the kernel's allocator at boot time, so those pages would never be allocated for the kernel or any process. Any time we saw more than a few instances of some page address in a kernel panic dump, we added it to the list for the next kernel build. Like magic, this dropped the rate of crashes down into the noise level.
The root cause of the problem was never really determined (probably some kind of chipset bug) and those machines are long obsolete now. But it was somehow discovered that if you reset the machine via poking some register in the northbridge rather than via the normal reset mechanism, the problem went away entirely. So for years the Google bootup scripts included a check for this kind of CPU/chipset, followed by a check of how the last reset had been performed (via a marker file) and if it wasn't the special hard reset, adding the marker file and poking the northbridge to reset again. These machines took far far longer than any other machines in the fleet to reboot due to these extra checks and the double reboot, but it worked.
It wasn't pretty, but the hackers never got through again, and that clunky thing is still in service today. I coded it to quarantine all illicit file uploads, and as a consequence I have many thousands of script kiddies' PHP dashboards from over the years.
He runs his entire business off the contact list in the software. The customer has over 50,000 contacts, but the software supports a max of 8k. I got around this limit by using the "Profiles" feature in the software, which allows you to configure multiple users, each allowed a max of 8k contacts.
I created 10 profiles by dividing the English alphabet into 10 groups based on the frequency of last name first letter in his set of contacts:
Contact Group 1: A, K, T
Contact Group 2: B, L, U
Contact Group 3: C, M, V
Contact Group 4: D, N, W
Contact Group 5: E, O, X
Contact Group 6: F, P, Y
Contact Group 7: G, Q, Z
Contact Group 8: H, R
Contact Group 9: I, S
Contact Group 10: J
Of course, when he needs to search or pull data from his contacts, he has to constantly switch profiles. I can only imagine how tedious that must be, but whatever floats your boat.
I reasoned that the motor holding it shut had mostly failed in some way, but could still exert some force. So I popped the cover off the drive, took out the magnet that holds it shut, cooked it on a gas stove for a few seconds, and put it back in.
The Curie temperature for a neodymium magnet is a few hundred degrees, but practically speaking, they lose a lot of magnetism even at lower temperatures. I popped it back in and the drive worked for another year or so.
Years later I went back (like 10 years later?) and they showed me a sleek new tiny thin client box, and when it loaded there was my VB6 screen with the familiar two buttons. Apparently the users loved it and so they had ported it across to the newer devices ever since.
My first residential plumbing fix was a simple unclogging of a shower drain, which was accomplished by putting a couple inches of water in the tub and blasting repeatedly into the drain with a pneumatic cannon. The splatters hit the ceiling, but it was thoroughly unclogged after a few repetitions.
The second major one was the sewer line at my mom's place, which turned out to be a very bad clog that nasty industrial drain cleaner wouldn't even touch. I capped every drain and vent in the entire house, and dumped two 30-gallon compressor tanks of air into the two toilet hookups. There was a great and concerning watery rumbling from all through the house, but after 30 seconds or so the clog was blown into the septic tank, and it hasn't given her a single problem since.
For all its faults, it worked and it was generating revenue. Enough that the startup got to a sizable Series A. That experience completely changed how I think about writing code at startups.
There was a VERY limited scripting language for the server. It had some commands but lacked basics like variables. However, you could track the number of NPCs in a room.
So this led to some rather interesting new primitives:
- "Chickens in a room" became my preferred method of counting / implementing any kind of persistent global state.
- For timers I would spawn an enemy and a guard. When the guard killed the enemy the timer was up. (The number of NPCs dropped back to 1).
I combined these concepts to implement "dungeon instances": copied the map (as in literally copied the file) 10 times, used chickens-in-a-room to track which one players would enter, and whether you were entering the current instance or incrementing to the next was determined by the guard-kills-mob timer.
The process of authenticating with the CRM was complex, there was no way to fetch anything at print time, and the data was stored all over the place.
But I found the printed report knew almost everything I wanted, and you could add web images to the paperwork system. So I added a tiny image with variable names like "{order_number}.jpg?ref={XXX}&per={YYY}", and then one for each looped product like "{order_number}/{sku}.jpg?count={X}&text=...", etc. After a few stupid issues (like no support for HTTPS, and numbers sometimes being in European format) it was working, and it has remained solid ever since. Live time-stamped data, updates if people print twice, gives us everything we wanted, just by a very silly method.
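The receiving side of that trick can be very small: log whatever arrives in the image URL's query string and hand back a 1x1 image so the printed report looks normal. Below is a minimal sketch using Python's standard http.server; the paths and parameter names are illustrative, not the real system's:

```
# Sketch of the receiving end: log whatever data arrives in the image URL's
# query string, then return a tiny 1x1 GIF so the printed report looks normal.
# Paths/parameters are illustrative; the real system's names differ.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

TINY_GIF = (
    b"GIF89a\x01\x00\x01\x00\x80\x00\x00"        # header + logical screen descriptor
    b"\x00\x00\x00\xff\xff\xff"                  # 2-colour global colour table
    b"\x21\xf9\x04\x01\x00\x00\x00\x00"          # graphics control extension
    b"\x2c\x00\x00\x00\x00\x01\x00\x01\x00\x00"  # image descriptor
    b"\x02\x02\x44\x01\x00\x3b"                  # 1 pixel of image data + trailer
)

class PrintTracker(BaseHTTPRequestHandler):
    def do_GET(self):
        parsed = urlparse(self.path)
        params = parse_qs(parsed.query)
        # e.g. /12345.jpg?ref=XXX&per=YYY -> order number from path, data from query
        print("print event:", parsed.path, params)
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.send_header("Content-Length", str(len(TINY_GIF)))
        self.end_headers()
        self.wfile.write(TINY_GIF)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), PrintTracker).serve_forever()
```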
The users of the green-screen system would tell it to print a document by picking which template to use from a menu. The system would generate an XML file in a directory that was shared out via Samba. A VB6 program watched that directory for these XML files to appear; when one appeared, it would figure out what the relevant template was, use COM automation to tell MS Word to load the template, fill in the template fields, save the result to that client's folder on the file server, then print two copies (one for the paper file, one to mail) plus an envelope, on the user's selected printer in the green-screen system.
There were a bunch of weird word processing practices that made it slightly worse than it already sounds. Each letter that was sent out was actually made of a letterhead (one for each location we operated) and a body with the standard text for various letters we sent. The body would sometimes contain links to other documents (e.g., forms we were requesting someone to fill out), the program would follow these links and print those documents too, but only one copy as we didn't need a bunch of blank forms on the paper file.
There was also an Access database used by this VB6 program to maintain various bits of configuration data - mappings of document codes to filenames, mappings of green-screen printer names to Windows printer names, etc. Access gets a bad rap, but it made maintaining that configuration data a breeze.
It was horrific, but it saved everyone an incredible amount of time.
To troubleshoot it, we installed Microsoft Network Monitor 2.0 (this was well before Wireshark...) on a few workstations. NM2 installed a packet capture driver and a GUI front-end. And... the problem went away.
Our best guess was that the problem was some sort of race condition and installing the packet capture driver was enough to change the timing and make the problem go away. The customer didn't want to spend more time on it so they installed NM2 everywhere and closed the case.
I occasionally imagine somebody trying to figure out why they're still installing the NM2 driver everywhere.
This machine didn’t have that capability, nor any obvious extension points. I ended up writing a VB app that would poll the serial port the machine used to talk to the control system and, if the port had been busy and then became free, send an email. The email was sent by a very simple SMTP client I wrote.
That program ended up working for another, lower cost 3D printer that we acquired later as well.
I ended up extending it for a 3rd printer, tailing its log and looking for a message it emitted when prints finished.
We shared it with a few places and later got one of the printer companies to add email support.
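A rough sketch of the serial-port polling idea described above, assuming the pyserial and smtplib libraries; the port name, addresses, and SMTP host are placeholders, and the original was a VB app with its own tiny SMTP client:

```
# Illustrative sketch: the printer's control software holds the serial port
# open while a job runs, so "port busy -> port free" means the print finished.
# COM port, SMTP host, and addresses are placeholders.
import time
import smtplib
from email.message import EmailMessage

import serial  # pyserial
from serial import SerialException

PORT = "COM1"

def port_is_busy():
    try:
        serial.Serial(PORT).close()  # opens fine -> nobody else is using it
        return False
    except SerialException:
        return True

def notify(subject):
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = "printer@example.com"
    msg["To"] = "lab@example.com"
    with smtplib.SMTP("smtp.example.com") as s:
        s.send_message(msg)

was_busy = False
while True:
    busy = port_is_busy()
    if was_busy and not busy:
        notify("3D print finished")
    was_busy = busy
    time.sleep(30)
```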
So sprinkled throughout our very large app were thousands of dollar signs - some rendered on page load in the HTML, some constructed via JS templates, some built on graphs in d3.js, and some built in tooltips that occasionally popped up on the aforementioned graphs.
One day, a Sales guy pops in with "Hey, I just sold our app to a big player in Europe - but they need the currency to display in Pounds instead of Dollars" (might have been pounds, might have been some other European currency - memory is a bit hazy).
CEO steps in and supports Sales guy, says their demo starts in a few days - and the demo needs to take place with the client's data, on their instance, and show Pounds instead of Dollars.
Wat?
Small dev team, 5 members. We gather and brainstorm for a couple hours. Lots of solutions are proposed, and discarded. We get access to client's instance to start "setting things up" and poke around a bit.
We discover that all the field names are the same, and SF was just storing them as numbers. No currency conversions had to be done. We literally just needed to display the pound symbol instead of the dollar symbol.
One of the devs on my team says "Hey guys, I have a dumb idea..."
In short, he remembered an extension from back in the day called "Cloud2Butt". When you loaded a page, it would scan all html on the page and transparently and instantly replace all instances of the word "Cloud" with the word "Butt". Recollecting this, the dev wondered if we could look at their code, and write something similar to crawl the DOM tree and just replace all dollar symbols with pound symbols. The resulting "fix" would then just be a simple JS function we put on top of our stack, instead of refactoring thousands of files.
So... we tried it. With one small addition (making it do it on a setInterval every 100ms, which took care of the tooltips on the graphs) it worked flawlessly. We intended it as a stopgap measure to buy us time, but there were no complaints so we just let that run in production for several years, and the app expanded to several more currencies.
As a new DBA for Microsoft SQL Server on a team of very seasoned DBAs with decades of experience each, I wrote a small Python script to fix a replication bug that MS refused to fix.
The bug, IIRC, was related to initial snapshot settings: a setting defaulted to off, so replication would fail. They would normally go and edit this file manually when it happened. When it really blew up, that could mean editing the file in tens of locations! It was just something they had resigned themselves to doing, since it happened just infrequently enough not to be worth investing time in a fix, but when it did happen it could take an hour or so. My noob self thought that was far too tedious; I was not doing that.
The bug really should have been fixed by MS, with the flag defaulting to on. My script would just find the relevant UNC share and text file, do a find-and-replace on that line, and toggle the flag. Then replication would work again. I could point the script at a server and it would figure out the path and do the work. All I then had to do was enumerate the affected servers, and it was fixed in no time.
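Boiled down, such a script is little more than a find-and-replace with a UNC path in front of it. A sketch with a hypothetical share, file name, and flag (the real setting was specific to the replication setup):

```
# Illustrative only: locate the snapshot config file on the server's UNC share
# and flip the offending flag from off to on. Path and flag name are made up.
import sys
from pathlib import Path

def fix_server(server_name):
    cfg = Path(rf"\\{server_name}\repldata\snapshot.cfg")  # hypothetical share/file
    text = cfg.read_text()
    if "SomeSnapshotFlag=off" in text:
        cfg.write_text(text.replace("SomeSnapshotFlag=off", "SomeSnapshotFlag=on"))
        print(f"{server_name}: flag toggled, replication should resume")
    else:
        print(f"{server_name}: nothing to fix")

if __name__ == "__main__":
    for server in sys.argv[1:]:
        fix_server(server)
```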
This fix was so popular when I showed it internally that they asked me to turn it into a GUI application. It was awesome. I learned a bit of C# and from what I heard a few years back my little tool was still in USE! Huzzah
My siblings and I had built a 486DX/40. Intel's chips topped out at 33MHz until you got into the clock-doubled DX2/50 (which ran its bus at 25 MHz) or the DX2/66 (33). But AMD's DX/40 wasn't clock-doubled, the core and the bus both ran at 40MHz. In the days before accelerated graphics, all pixel-pushing went through the CPU, so this was a very big deal for games.
It also ran hot enough that a CPU fan, optional-but-a-good-idea on slower chips, was an absolute necessity here. But the fan (a little 40mm Crystal Cooler™) would never be described as silent.
So when I wasn't gaming, I'd remove the 3.5" floppy drive from the case (it was in a modular sled, and the cables had enough slack that I could just unplug it), then reach in through the resulting opening, and re-jumper the clock-generator IC on the motherboard. Live, and blind. I knew that moving the jumper-cap one position towards me would go from 40MHz to 8, and that was enough to seriously drop the heat. The first time I did this I was flabbergasted that the machine didn't crash, but evidently it was fine? The no-jumper state was something like 12MHz or whatever, so presumably it was just blipping through that speed as I moved the jumper.
Then I'd unplug the CPU fan and enjoy the silence. This was particularly nice during a long download, where servicing the UART ISR didn't exactly take much processor time.
Even better, the computer room was in the basement, and the FM radio signal from my favorite station was pretty weak. So weak that, if I tuned just a smidge off to the side of the station, my radio receiver would pick up little ticks of interference from the computer, while still being able to hear the music. This meant I could turn off the monitor too, and just listen to music while downloading whatever file.
When the ticks stopped, the UART buffer was no longer triggering an interrupt, meaning the download was done, so I could turn the monitor back on and resume my BBS session, clock the CPU back up and plug the fan back in to unzip the file in a reasonable amount of time, and otherwise get on with my day.
I realized the toilet tank is self-refilling because of the float valve and won't overflow. So I cleaned it out, and it made a good place to put my submersible pumps.
So I devised a stupidly simple way: add && echo "asdfasdfsadf" after each RUN command. I mashed the keyboard each time to come up with some nonsense token. That way, Docker would see the RUN lines as different each time it built, which would prevent it from using the cached layer and thus would show the commands' output.
I wrote the same thing (more completely) here: https://stackoverflow.com/a/73893889/5783745
(a comment on that answer provides an even better solution - use the timestamp to generate the nonsense token for you)
As stupid as this solution is, I've yet to find a better way.
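Automating the timestamp variant mentioned in that comment could look like the sketch below: rewrite the Dockerfile so every RUN line gets a fresh token before building. File names are whatever you use; this is an illustration, not the Stack Overflow answer's code:

```
# Illustrative sketch: append "&& echo <epoch>" to every RUN line so each
# build sees "different" instructions and skips the layer cache, which makes
# the commands' real output show up again.
# Naive: ignores multi-line RUN instructions that use backslash continuations.
import time
from pathlib import Path

token = str(int(time.time()))
lines = Path("Dockerfile").read_text().splitlines()

busted = [
    line + f" && echo cache-bust-{token}" if line.strip().startswith("RUN ") else line
    for line in lines
]

Path("Dockerfile.nocache").write_text("\n".join(busted) + "\n")
# Then: docker build -f Dockerfile.nocache .
```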
I created a flare gun weapon (similar to the stock rail gun missiles, so nothing too crazy here) but found that if a player died, the flares were still stuck on them when they respawned, damaging them even though their whole location had changed. This bug would exist with rail gun missiles as well, but since the death animation was long and the fuse so short, it would never present in the base game.
I experimented with detach commands that ran on player death, but the flares would just instantly reattach to the player model because of their proximity. I ended up creating an invisible explosive entity that fired on player death from the center of the player, dealing a damage type ignored by players but one that destroyed the flares.
The question also made me think of the last-minute change we needed to make to a database's structure to avoid about 6 million lost user updates when all the DB admins were out and no one had the password. That was a fun one too. Not sure I should admit how we managed it.
Eventually I read a random post on Reddit[0] about how a guy tried putting his SP3 in the freezer (in a sealed freezer bag) and it eventually came back to life.
I skeptically tried it. I thought freezing was more likely to destroy something else on it, but what did I have to lose? It was already dead.
I put it in a sealed freezer bag with as much air removed as possible, and then put it in the freezer for a couple of hours. I took it out, plugged power in, and I was able to turn it on!
The first time I did it, it only worked for a few days. I tried freezing it again, and that worked. It still works to this day, many years later. My theory is that it seemed like a power problem, and that freezing the battery put the chemistry through some kind of cycle that repaired it.
I did keep the Surface wrapped in layers of towel to warm it back up slowly. Mostly to help prevent any moisture from building up somewhere before it got to temperature.
[0] https://www.reddit.com/r/Surface/comments/5tficj/how_i_reviv...
Eventually a 3"(!) disk drive was launched which needed some extra space in the upper memory banks to host its driver.
This made it almost impossible to copy certain games (ok, it was Spindizzy) from tape to disk, since there was no longer enough memory to load the game without overwriting the disk driver.
Almost impossible, until I split up the loading process and used the only other remaining RAM possible: the graphics buffer. So while loading the game your whole screen got distorted pretty bad, but it worked: I copied the graphics buffer right over the precious disk driver and the game worked just fine.
I ended up swapping it out to a generic in-line CPAP humidifier, but at the same time, realized I could partially automate the process of refilling the chamber (and not have to keep unhooking hoses) by adding an in-line oxygen tee, some aquarium plumbing, a check valve, and a 12 volt pump and switch.
In the morning I just hold a button and the tank magically refills itself ;)
Introducing Semi-Autofill(tm): https://i.ibb.co/NmDbVvw/autofill.png
(Also: The Dreamstation, while recalled, was personally de-foamed and repaired myself -- I don't trust Philips any further than I can throw them now. I now self-service my gear.)
In my voyages of curiosity, I happened to have a gParted liveUSB laying around. I booted into it, took a look at the main drive on the laptop, and I noticed a few KB or MB of free space before the main partition. I thought this was weird for some reason and decided to use gParted to "shift" the whole partition to the left (the beginning of the drive). After some time it finished and booted up into my Windows install with zero issues.
To this day, I am equally baffled and amazed that it worked.
Turns out, if you want to turn HTML+CSS into PDFs quickly, doing it via a browser engine is a "works really well" story.
Instead of tasking someone to move the mouse once in a while, we taped a wired optical mouse to the tower PC at the front. At the time, CD/DVD drives were still abundant, so this PC also contained one. We did know about the eject(1) tool, as well as its option to close the tray again (eject -t). From that moment on, our screen blanking problem was fixed:
while true; do eject /dev/cdrom; sleep 5; eject -t /dev/cdrom; sleep 30; done;
(This would eject the CD/DVD drive, wait five seconds, close it again, and wait 30 seconds before starting the same sequence again.) The tray movement would cause the optical mouse to detect motion, send it to the PC, and in the end keep the projector on.
* we would later discover the xset(1) tool and its DPMS/screen saver settings. But that would've eliminated the periodic, comedic noise of the CD/DVD drive and the wired mouse bumping on the tower PC, so we kept the while loop intact during the rest of the LAN party.
The one we picked had good API docs, but we didn't read the fine print - API access was a high yearly fee, costing almost as much as the regular subscription fee.
Their web interface had functionality for importing orders/invoices from a CSV file, and looking at the browser requests I could see they were simply using the API in their frontend. A couple of hours later, we had our invoice import job doing POST requests to their login page, getting the right cookies back, and uploading invoice files.
Worked fine for years, only requiring a couple of updates when they changed their login page.
You'd be surprised at the hacks required when interacting with scientific instrumentation. I am not a hacker at heart, but I do take pride when I'm able to wrap a hack such that you'd never know what it was doing underneath. Leaky hacks are no fun.
I ended up sanding the fan's plastic shroud for two hours to get it to fit. It's still in my desktop right now and I won't ever be able to get it out because it's just snug enough to go in.
So we took Zink, the Mesa OpenGL driver that emits Vulkan and slightly massaged it for our needs. I ended up having to implement a few missing GL extensions and also changing the Zink code a little bit so it would use our Vulkan device and instance instead of making its own. As far as plugins are concerned, there is still a real libGL.so/opengl32.dll just like there always was. But internally it's now emitting Vulkan calls against the device we use for our game. Solved both perf and interop issues and it's been one of my prouder achievements.
The theory was that the Atari spends a good chunk (30%) of its time for display memory access. That can be disabled (making a black screen) and re-enabled. My pre-boot program installed a vertical blank interrupt handler reading the 2nd joystick port: up/down for display on/off. After installing the handler, the program waited for diskette swap and pretended to be the original program loader reading the disk layout into memory and jumping to the start. Worked like a charm first go.
The first iteration of the hack was done in minutes. Over time it could OCR almost anything and convert it into an ebook-readable format.
I figured out a way to have a desktop computer on the intranet run a Python script to scrape the Lotus Notes database and push all this data to a Teams list every 5-10 minutes. Eventually the Teams API team shut off my access, but FWIW I managed to keep the list syncing with a Teams channel for a few months straight. Hacky, stupid, and dumb, but that's how everyone could check the status without needing to open an app.
So after the MSP basically stopped responding to them they asked us if we could upgrade "without" SA - our company's position was it was their data so anything they wanted us to do with it was fine.
So I had a conundrum - how do I get SA without SA? Well - I knew we had one loophole - we used xp_cmdshell for some critical features of the app, and an unprivileged user can run it.
If you're not familiar with xp_cmdshell, it's basically a stored procedure that passes your commands out to a Windows shell, but it's pretty limited functionally, on purpose.
I wanted to copy the data, move it around, make a backup, and send that to a place, so I wrote that code in PowerShell, then base64-encoded it (because it needed to survive shell encoding problems), then chunked it across the wire (because of length limits), reassembled it, and executed it with xp_cmdshell.
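The chunking step might look something like this sketch: base64 the script, split it into pieces that fit comfortably in one command, emit one xp_cmdshell call per piece that appends to a file on the server, then decode and run it. The paths, chunk size, and use of certutil here are assumptions for illustration, not the original code:

```
# Illustrative sketch of the chunk-and-reassemble trick. Everything here
# (paths, chunk size, use of certutil to decode) is an assumption about how
# such a transfer could work, not the original implementation.
import base64
from pathlib import Path

CHUNK = 2000  # keep each xp_cmdshell command comfortably under length limits
script_b64 = base64.b64encode(Path("backup_and_copy.ps1").read_bytes()).decode("ascii")

statements = []
for i in range(0, len(script_b64), CHUNK):
    piece = script_b64[i:i + CHUNK]
    statements.append(f"EXEC xp_cmdshell 'echo {piece}>> C:\\temp\\payload.b64';")

# Decode the reassembled base64 back into the .ps1 and execute it.
statements.append("EXEC xp_cmdshell 'certutil -decode C:\\temp\\payload.b64 C:\\temp\\payload.ps1';")
statements.append("EXEC xp_cmdshell 'powershell -ExecutionPolicy Bypass -File C:\\temp\\payload.ps1';")

print("\n".join(statements))  # run these against the server with the unprivileged login
```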
Worked like a charm.
1. I got to my car and the battery was flat. Fortunately, I had ridden my home-built electric skateboard to it (this was before even the first Boosted Boards came out - completely homemade, I built the trucks, drive system, etc). I went to the Goodwill next door and bought an extension cord for the wire. I stripped the cord, wired the board's battery to the car, and was able to start it.
2. I was driving my classic car home when the alternator failed, at 2am, in the middle of a big city. With my headlights rapidly dimming, I managed to quickly exit and find a parking lot (just as my headlights completely died). Fortunately, I again had that skateboard. I rigged up the battery to the electrical system and made it to within a mile of my house before the fuel pump and spark ignition gave out completely. I easily walked home, put a new battery in my backup car, and I was good.
The moral? Replace your car batteries when they're weak :) Also, LiPo batteries are beasts.
In hindsight, it would have been better to use a local HTTP server. Seemed like overkill at the time.
But there were a bunch of issues we had to deal with:
- To render the gutter (margin in the middle) you had to know which side of the book each page would fall on.
- To generate the headers and footers, you had to know the exact page number for each of the pages.
- You had to know how many pages the table of contents would take up, but you couldn't know the page numbers for each chapter until the book was fully generated.
What I ended up doing was to generate multiple PDFs for each chapter, header, footer, and table of contents separately, then stitching them together very carefully to build the final export. Super hacky, but it ain't stupid if it works!
After a month or so I started to notice that something was wrong with performance. I figured out that every `object.field` access through a COM proxy took exactly 1ms. Once there was enough data, those dots added up to tens of seconds.
>_<
Instead of doing a rewrite, I just pushed as much of the JS logic as possible to the other side of the COM boundary, so there's only a constant or small number of `a.b.c` accesses on my side. I had to write a JSON encoder and object serialization inside the old app to collect and pass all the data in one go.
The web app was abandoned a few months later for unrelated reasons.
I ended up running the program in MS-DOS, inside VirtualBox. I used MS-DOS to route the "printer" to COM1. Then on the outside, I routed the serial output to a file on the host machine.
He just clicked the shortcut that fired it all off, did the calculations he needed to do, and "printed it", then exited Tk!Solver. The batch file then checked for output, and if it was there, it printed it.
He was happy, and it only took about an hour to get it all working.
I had to get past a captcha for automation and the solution I came up with was to always choose 2. If it was incorrect, just request a new captcha until it passed. For some reason, 2 was the answer most of the time so it actually rarely had to retry anyways
Guess how I implemented my main character.
So I dynamically replaced the part of their code that was wrong. That monkey patch has run years and is still going :)
A very, very oldschool sysadmin helped out by opening up the poor machine, opening the HDD, and then powering up the machine, giving the disk a good spin at the right moment. The disk came back to life and so did the server, and we agreed to replace the machine with a new, proper server ASAP. When I left the organisation 2 years later, the machine was still operational, the disk still spinning (ZERO defects!), and the disk was still open inside the case - running non-stop-no-problemo for 2+ years!
I also miss Lotus Notes - it was an amazing concept in many regards that got murdered by IBM.
And in case you ask: the machine was running Windows NT 4 Server.
The monthly database textfile is not that large, but it is unwieldy.
I'm a web consultant, but database backends are not yet a thing, at least not for us. Static webpages, all the way down.
So I used a script to parse the database into a series of text files and directories. E.g. JFK/index.html is a list of all the airports receiving flights from JFK, e.g. LAX, SFO, etc. And JFK/LAX.html is that month's results for all JFK to LAX flights. Etc.
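A sketch of that kind of generator, assuming a simple CSV of per-route rows; the column names, file names, and HTML are made up, but the directory-per-origin layout matches the description above:

```
# Illustrative sketch: turn a flat CSV of flight stats into a tree of static
# pages, one directory per origin airport. Column names and output layout are
# assumptions, not the original script.
import csv
from collections import defaultdict
from pathlib import Path

routes = defaultdict(list)  # origin -> list of rows
with open("ontime.csv", newline="") as f:
    for row in csv.DictReader(f):
        routes[row["origin"]].append(row)

out = Path("site")
for origin, rows in routes.items():
    d = out / origin
    d.mkdir(parents=True, exist_ok=True)
    dests = sorted({r["dest"] for r in rows})
    # e.g. JFK/index.html: list of airports receiving flights from JFK
    links = "".join(f'<li><a href="{dest}.html">{dest}</a></li>' for dest in dests)
    (d / "index.html").write_text(f"<h1>Flights from {origin}</h1><ul>{links}</ul>")
    # e.g. JFK/LAX.html: that month's results for JFK -> LAX
    for dest in dests:
        body = "".join(
            f"<li>{r['carrier']}: {r['pct_on_time']}% on time</li>"
            for r in rows if r["dest"] == dest
        )
        (d / f"{dest}.html").write_text(f"<h1>{origin} to {dest}</h1><ul>{body}</ul>")
```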
As I recall, once I'd worked it out, it took 15 minutes to generate all those files on my Mac laptop, and then a little ftp action got the job done. Worked great, but someone did complain that we were polluting search results with so many pages for LAX, SFO, etc. etc. (SEO, sadly, was not really on our radar.)
That was replaced within a year by a php setup with a proper Oracle backend, and I had to explain to a DB admin what a weighted average was, but that's another story.
Given that the new charts were rendered on the client, this seemed to be an impossible ask -- certainly the client didn't expect us to be able to solve it, once they realised their mistake.
I bodged together the standalone Flash player with a headless XServer and some extra logic in Flex, so it would repeatedly hit an endpoint, render a chart with the data returned, then post it back up to another endpoint. It took a couple of rounds of back-and-forth with their IT folk, but it worked! And we heard a couple of years later that it was still running happily in production.
For several years I left "Adobe Flex" off my resume, I hope it's dead enough now that I can safely admit to having known how to develop for it. I'm still quite proud of having invented the monstrosity that was "Server-side Flash".
We were behind schedule and had, I think, three separately implemented/maintained/deployed services that needed to be able to access the internet to do their work. Rather than implementing the intended auth mechanism in each service, writing tests for it, going through code review, and redeploying, I instead added nginx to the base Docker image they all used, configured them to send requests through that nginx instead of directly, and made that nginx instance man-in-the-middle our own services to attach a hardcoded HTTP header with the right creds.
I man-in-the-middled my own services as a hack - dumb, but it worked. It was meant as a quick hack but stayed for, I think, a couple of years. It did eventually end up being the source of an outage that took a week to diagnose, but that's a different story.
You email specific email addresses which get processed as web pages, blog posts, or attachments extracted to the cdn. All other emails sent to your domain sit inside the inbox. You can also send out emails from any address.
The blog and webpages are all SEO optimised so you can share the link on say Twitter and it will unfurl the link and read the meta tags.
You can also forward specific emails to a special address to be shared or bookmarked in your browser.
The entire thing runs off Lambdas, S3, Cognito, and AWS SES, nary a database. I use pug template files to format content extracted from emails.
To make this work, I had to do a deep dive into how Gmail’s email composer translates actions into HTML tags, then align the templates to these behaviours.
For a while, I had a handful of paying customers which paid my AWS bills. Right now, I’m down to one customer and the rest of my uses are for personal projects.
I learnt a lot in the process - from templating to SEO to S3 usage to Lambdas and got a very usable domain level email inbox and blog out of it. The CDN and static pages are a little less useful but building them too was quite fun.
Btw, highly recommend nodemailer as a module for email parsing.
Later I found out that Costco has a price adjustment policy if the price of an item is reduced within 30 days of purchase.
I created a simple app to tell me if the price has been reduced. It's not awesome, but it works. I've gotten a few hundred dollars back so far :)
Anyway, we eventually developed a feature that allowed the application to copy out the lab process and equipment list to a separate database, zip it up, and FTP it to a server. It would also export a CSV file with the name of the lab, salesperson, date of sale, and other searchable information and FTP that up too. I wrote a "web service" (this was late 90s, before that term was cool) that would collect up these CSV files, aggregate them in one big CSV file, and then from within that application that CSV file could be searched and the appropriate ZIP file downloaded and merged with the local database. It was written in Perl and ran as a CGI script on some internal Windows NT machine's IIS instance.
It was janky as all get-out, but it worked and we did it a couple of years before the big web services mania hit.
In the text file you have something you want to template (or "parametrize") from an outside variable, so you name that something like @@VAR@@ and then you can sed that @@VAR@@ :-)
This system was built in Java and was launched using a simple shell script that used a `for` loop to build the classpath by looping over all the JARs in the lib folder and appending them to a shell variable.
With the release "process" in mind, hotfixes and patches had to be kept as simple as possible. To release a hotfix we would JAR up the one or two classes that needed to be patched and into a single JAR file, then modify the startup script to pre-pend that JAR file to the classpath so that the patched classes were found first, taking advantage of Java's classpath loading algorithm.
It was a big old-fashioned bookseller trying to compete with Amazon. Software and the web was locked down tight, but they opened a daily report in Excel, and I built a VBA macro that generated the necessary HTML and published the images to an FTP server. Turned a 2 day job into a 10 minute one.
I didn't know how to set up a centralized service to handle server discovery ourselves, so I had each server serialize, compress, and base64 its metadata and store it in some "rules" field in the Steam API. Problem was that the rules field was a list of indeterminate length of strings of indeterminate length. Absolutely no documentation, so I had to brute force it to find the limits. It was just barely enough.
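The packing side might look roughly like this: serialize, compress, base64, then split to fit the per-string limits. The chunk sizes and key names below are placeholders for whatever the brute-forcing turned up:

```
# Illustrative sketch of cramming server metadata into a list of short strings
# (e.g. a "rules" field). The 100-char / 20-chunk limits are placeholders for
# whatever the brute-forcing actually found.
import base64, json, zlib

MAX_CHUNK_LEN = 100
MAX_CHUNKS = 20

def pack(metadata: dict) -> list[tuple[str, str]]:
    blob = base64.b64encode(zlib.compress(json.dumps(metadata).encode())).decode()
    chunks = [blob[i:i + MAX_CHUNK_LEN] for i in range(0, len(blob), MAX_CHUNK_LEN)]
    if len(chunks) > MAX_CHUNKS:
        raise ValueError("metadata too large for the rules field")
    return [(f"meta{i:02d}", chunk) for i, chunk in enumerate(chunks)]

def unpack(rules: list[tuple[str, str]]) -> dict:
    blob = "".join(v for k, v in sorted(rules) if k.startswith("meta"))
    return json.loads(zlib.decompress(base64.b64decode(blob)))

server_info = {"map": "de_dust2", "mode": "ctf", "mods": ["foo", "bar"]}
rules = pack(server_info)
assert unpack(rules) == server_info
```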
So the client would fetch the full list of servers, filter on the few parameters steam natively supported, then they'd fetch the metadata for every remaining server.
Honestly I feel really bad about this one. It was a bad solution but it worked for years.
I left a root prompt in one of the VTY's so people could mess around with it, but it being a net-booted BusyBox shell, there wasn't much to play with. We weren't concerned about MITM, because the network was token ring - in the mid 2000s. Convention full of hackers, and a bunch of machines with root shells open, but nobody hacked it, because nobody could get on the network.
We shipped all the gear up on an Amtrak from Florida to NYC. We had no budget nor interest in shipping them back or storing them, so we stacked them on the sidewalk and started yelling about free computers. In two hours they were gone.
Basically awk would match `/^FOO / { system("foo.exe " $0) }`
...you could get pretty darned far with that mechanism, for near minimal amounts of code.
Any time you pressed "enter" on a line of text in vim, it'd get thrown through that awk script.
If a line matched a command in the awk file (think GET, POST, SEARCH, ADD, etc.), it'd execute that awk block, which was often just a call over to another executable that did the searching, adding, etc.
The interesting thing about it was using it as a UI... you could basically "expand in place" any particular line... have commands return subsequent commands to pick from, etc.
Plus the ability to "undo" via vim commands and the fluency of effectively an ad-hoc REPL was a really liberating experience.
I only noticed after business hours that it was necessary to use DOS software built by the professor, and it was impossible to buy it during the weekend. It was available in the computer rooms, but I had no access. I got a copy of the software on the net, but needed a license key. The licensing was based on a challenge. Luckily I had a friend who had the software and could give me a sample license key. I figured the challenge would be based on time, and I was right: it was using the epoch as a seed. So I made a script that booted a DOS box and brute-forced one license key at a time. It took me a few hours, but I succeeded in cracking the software so I could start the exercise.
To hide the lips not matching the audio, she did a little wave that covered her mouth at just the right time.
Example: https://www.loom.com/share/3162a767905c422b8fd423f7448e16f8
It ended up working so well it generated us over $500k in new business
And I ended up turning it into a SaaS (https://dopplio.com)
I used a live CD to boot it, but could not fix the boot partition.
Since I was able to read the root partition I chrooted into it and started nginx from there.
It ran like that for a week while I was preparing a replacement.
- Apple II-based database used by a choir teacher to track the school music library
- MS-DOS accounting software payroll reports to generate W-2 forms
- Patient records from a pediatric office
- Receipt printer on a gas station pump controller
- Customer transactions and balances from a home heating propane supplier
- Customer transactions from unattended fueling site controllers
You might think this only applies to old software, but often "printing" to a Generic/text-only printer in Windows gives good results.
Our workaround was to configure HAProxy as a reverse load balancer and do creative packet forwarding. Need to access an Oracle database on-prem? Bind port 8877 to point at that database's IP on port 1521 and submit a firewall rule request.
Years ago I was working on developing a new cloud native service. The particular microservice I was working on had to call out to multiple other services, depending on the user parameters. Java 8 had just come out and I implemented what I thought was an elegant way to spin up threads to make those downstream requests and then combine the results using these fancy new Java 8 stream APIs.
I realized at some point that there was a case where the user would want none of those downstream features, in which case my implementation would spin up a thread that would immediately exit because there was nothing to do. I spent a couple days trying to maintain (what I saw as) the elegance of the implementation while also trying to optimize this case to make it not create threads for no reason.
After a couple days I realized that I was spending my time to try to make the system sometimes do nothing. When I phrased it that way to myself, I had no problem moving on to more pressing issues - the implementation stayed as is because it worked and was easy to read/understand/maintain.
To this day, I avoid the trap of "sometimes make the system do nothing". One day, that performance optimization will be necessary, but that day has not yet arrived in the ~7 years since then.
It seems to have worked out. [1]
[1] - https://github.com/openzfs/zfs/commit/f375b23c026aec00cc9527...
I used to work part time restoring rare Fiats, Porsches, and VWs for an old head out in the Midwest; lots of "stupid but works" in those old cars... Mercedes-Benz once (1980s or so) employed glass containers to solve fuel pressure problems. Insane coolant loop designs, or early fuel injection systems that develop "ghosts", lol...
Monday morning, minutes before go live we do a couple more tests with staff and discover a critical timezone bug that if not fixed, we cannot go live. Even if our developers were able to quickly turn around a fix and skip formal QA, it still would be ~30mins before everything was updated and another 30 for us to do local tests on site...
So we went around and changed the timezone on every computer in the office. The day was saved; the devs fixed the bug during their day (our night), and we fixed the timezones the following day.
---
Made a PHP landing page for a customer where they could redeem a coupon code they were sent via snail mail. About 100,000 codes were sent out via USPS.
Threw together the basic code you might expect, simple PHP page + MySQL database. Worked locally because customer was dragging their feet with getting me login creds to their webhost.
Finally, with the cards in the mail, they get me the login creds at 5PMish. I login and there's no database. Cards are going to be arriving in homes as early as 8AM the next day. How TF am I going to make this work... without a database?
Solution... I just hardcoded all 100,000 codes into a giant PHP array. Or maybe it was a hash/dict or something. I forget.
Anyway, it performed FINE. The first time you used the page it took about 30 seconds to load. But after that, I guess `mod_php` cached it or something, and it was fine. Lookups returned in 100ms or so. Not spectacular but more than performant enough for what we needed.
Got paid. Or, well, my employer did.
That's when I discovered that you could write a GOTO statement in Ruby! I made a very minor addition to the beginning and end of the script, a label at the top, and a goto at the bottom for if the CSV file didn't exist. I had added my email to the list of alerts, and after that GOTO was added, I never saw another alert.
Eventually I realized there were wireless doorbells whose button circuit board was small enough to fit into the hole in the wall for the doorbell button. I found one like that on Amazon, took off the wireless button casing, put the board inside the doorbell hole in the wall, and wired the fancy button onto the board. It works perfectly.
Eventually I'll have to replace the batteries, but they're easy to access just by removing the button. I'm on the first set of batteries 5+ years in now.
The migration was long, tedious, and overly complicated in its own right (i.e. one proposed solution for the "how do we migrate all data safely across the continent?" question involved armored trucks), but just as we reached the T-1 day mark, I realised we had forgotten something.
The customer was regulated by various entities, and so it had to deliver periodic audit logs in a particular format. The raw logs (stored in a cloud hosted bucket) would not be sufficient and had to be parsed; in order to process the logs into the desired format, the customer wrote thousands of lines of code in the platform that I was in the process of migrating. This code could only run on the platform, due to some other esoteric privacy regulation.
So there I was on a Sunday, with :
- a few hours to deliver up-to-date, formatted audit logs to regulatory entities or risk legal action
- raw logs in a cloud bucket that required ingestion and processing
- a new cloud platform that could process the logs but was unable to ingest data from the other provider's cloud bucket (due to some temporary allowlisting ingress/egress issue and this being one of the first migrations onto the new cloud)
- an onprem platform being decommissioned and no longer allowed to process the logs BUT capable of ingesting them
The solution I came up with was to have the data flowing:
log bucket in cloud provider A -> decommissioned platform running on-prem -> connector I wrote that evening and had no time to test -> platform running on cloud provider B
The ship was afloat the next morning and everything was in order despite cutting it close; I am now a big fan of exhaustive planning, months in advance.
The only way we could access the core database was through a mainframe terminal emulator only available on client PCs across the internal network. It was basically an old school terminal based UI where you had to enter specific codes and sequences of keys to navigate.
It was not supposed to be automatable, and we did not have permission to deploy any executable to computers on the network.
However, we found a way to plug into it by calling an obscure DLL via Internet Explorer's ActiveX. From there, we had access to only two functions: one to send a sequence of key strokes to the emulator, and another which was basically getString(x, y, length).
We built whole applications using this database, only via those two functions, which led to giant procedures for each query we had to do. It was terrible, unstable and slow, and broke at every update, but it did the work.
I wanted our employees to be able to roam to that meeting room transparently without any hassle. I knew that OpenVPN had a layer 2 tunneling mode, that could bridge two ethernet networks over VPN. With two leftover workstations, I set up an OpenVPN server in the main office, and an OpenVPN client at the meeting room. By bridging the OpenVPN interface to the ethernet interface on the client, I was able to connect a switch, WiFi access point and videoconferencing equipment. Everything worked perfectly, with even DHCP requests going over the VPN.
I chose to not use any boilerplate, frameworks or libraries as long as I can get along. Did not respect any ES6 whatever limitations of browsers and used whatever I wanted to use. Just pure PHP, sqlite, vanilla JS and CSS like in the old days with ES6 flavour here and there :-) What sounded really stupid because I ignored all the fancy frameworks made me learn a lot of things and it also turned out that it just works on my systems (Android, iOS, Linux, Window with Firefox, Safari and Chrome). Maintenance will be a nightmare though...
Caution: use it at your own risk, because there will pretty likely be breaking changes in the near future :-)
Yes, we want to add a robust CSP, but we currently have some limitations/requirements that make implementation more challenging.
So, instead of making an interface for data entry and then a system to print the forms, the data entry UI for each form looked exactly the same as the forms themselves. Scrolling was needed because at the time there were only low resolution CRT screens.
However, for printing, I would draw the filled out form at a very high resolution in video memory "off screen" and print that.
So, the work to create one form resulted in supporting both data entry and printing.
It turned out that since the people doing the data entry also knew the forms really well, they were able to enter the data 2.5 times faster than initial estimates.
We wanted a compile-time map of this data and a graph of the sub-jobs. To do this, I tossed together a thirty-minute script that took the source of our job functions and ran a few regular expressions over them to extract the data we needed. It was filthy. Regex on source files just feels wrong. The problem is, it's worked great for the last six months and it's still going strong, so we can't justify changing it. The generated data has been extremely valuable for optimizing data-fetching paths for customers.
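Roughly the shape of that script, sketched in Python; the regexes and the fetch_data()/run_subjob() call names are stand-ins for whatever your job framework actually uses:

    import json
    import re
    from pathlib import Path

    # Hypothetical call patterns -- the real regexes would match your framework's APIs.
    FETCH_RE  = re.compile(r'fetch_data\(\s*["\'](?P<table>\w+)["\']')
    SUBJOB_RE = re.compile(r'run_subjob\(\s*["\'](?P<job>\w+)["\']')

    graph = {}
    for src in Path("jobs/").rglob("*.py"):
        text = src.read_text()
        graph[src.stem] = {
            "fetches": sorted({m.group("table") for m in FETCH_RE.finditer(text)}),
            "subjobs": sorted({m.group("job") for m in SUBJOB_RE.finditer(text)}),
        }

    # Emit the "compile-time" map so other tooling can pick it up.
    Path("job_graph.json").write_text(json.dumps(graph, indent=2))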
The CMS didn't support any kind of HTML/JS embed, and had quite a short character limit per "Text" block. But luckily, that block didn't filter out _all_ inline HTML - only some characters.
So, a bootstrap "Loading" element was inserted, along with a script tag which would bring in the rest of the resources and insert those in to the page where the bootstrap was placed. This quickly became a versatile, re-usable loader, and allowed us to launch the features. But all this grew from a very inelegant hack which just happened to work.
- got rid of the concept of passwords. 2FA only.
- more of a productivity hack, but: spreadsheets with columns that create sql statements, allowing ops to learn/run sql themselves
- bookshelves as standing desks. usb keyboard on lower shelf. laptop at eye level.
- raised an angel round with an android app with beautiful screenshots of the vision. but once installed, the app only had an email waitlist implemented
This was the 70’s, so our cards looked like lace doilies (0111).
Being an electronics nerd first, I went around grabbing the HDDs (ultimately collected 600+). Took them, a bunch of ATX supplies, and USB adapters to a computer lab. Used some sufficiently thick wire to jump the pins on the PSUs so they'd power up when plugged in.
Plugged the hard drives into the lab PCs over USB and into these external power supplies, booted the lab PCs to live USB Linux, and ran data recovery over a week, swapping disks as I went. Copied users' data to their network drives, which was campus policy all along, but a lot of people don't care.
So if the power switch ever breaks on your desktop, use a wire to jump the pins on the PSU (on a standard ATX supply, shorting the green PS_ON wire to any black ground wire does the trick).
Building the app was a lot of fun and it worked pretty well most of the time. During beta testing, however, we were given all the resources that were created by a third party. This mostly included UI elements and other images that made up the UI. Testing it out, again, it worked pretty well. Until one time it didn't...
After about an hour or so of playing, the app would consistently crash. After some OS troubleshooting, we came to the conclusion that apparently Android (at the time) had the habit of keeping images outside managed memory, in a separate space. And whenever that space overflowed, the app would simply crash. To avoid this you would need to manage that space yourself and clear out memory.
However, we only discovered this a week or so before the deadline, and implementing proper memory management would be nigh impossible in that time. So I came up with the hackiest solution I ever built. I added a crash handler to the app which would start another instance. I also added a serializer/deserializer, and whenever you reached the main menu all play progress was serialized to storage. Whenever the app crashed and restarted, this was read back in, letting users resume play. The only side effect was some weird app flickering because of the crash and restart.
A week later, when we delivered the app to our clients, they wanted to try it out and play-test it. So we did, along with the other group. And lo and behold, after an hour or so the app crashed. And restarted. Unlike the other group's, where the app crashed and had to be restarted manually.
In the end the client was really happy with the result. Because it just worked. AFAIK the app is still in production the same way it was about 10 years ago.
We started out having interns do it, but it was taking too long and they were making a lot of mistakes. I ended up writing an AutoHotKey script to copy stuff out of Excel, switch to Outlook, then build the template and save it in the specified format. It required finding a lot of obscure keyboard shortcuts to make it all work, but it got the job done. It was still a bit manual to run, as it was too fragile to let it all go in one go, and I had to watch it. But it turned days or weeks of tedious work into something that only took a few hours once the script was done.
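If you've never seen this kind of thing, the script is essentially a long list of "press these keys, wait, press those keys". A toy equivalent in Python with pyautogui instead of AutoHotKey; the key sequences are illustrative only, and the real thing leaned on a pile of Outlook-specific shortcuts:

    import time
    import pyautogui  # sends real keystrokes to whatever window has focus

    def copy_row_to_outlook():
        # Copy the current cell/row out of Excel.
        pyautogui.hotkey("ctrl", "c")
        time.sleep(0.3)                    # generous pauses everywhere; it's fragile

        # Switch to the Outlook template window.
        pyautogui.hotkey("alt", "tab")
        time.sleep(0.5)

        # Paste into the template and save it via Office's Save As shortcut (F12).
        pyautogui.hotkey("ctrl", "v")
        pyautogui.press("f12")
        time.sleep(0.5)
        pyautogui.typewrite("message_001", interval=0.05)  # hypothetical file name
        pyautogui.press("enter")

        # Back to Excel for the next row.
        pyautogui.hotkey("alt", "tab")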
In the same spirit, Microsoft shipped Business Contact Manager for Outlook on Office 2003 CD. That thing is functionally a CRM that was allegedly limited to 5 users. It, of course, runs on an SQL Server and so I made it run under Windows Internal Database, tested it with more than 5 users, and made it a lot more usable.
We wanted to collect some stats about the boards being tested, but the internet in the factory was really flaky and we didn't want to pay for a 4G internet plan for a rig that was turned off most of the time.
I eventually went for a cron job that would just try uploading all the local logs to s3 through rsnapshot every 15min. It worked great and was less than 20 lines of shell script.
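Something in that spirit, re-sketched here in Python with boto3 rather than the actual rsnapshot-plus-shell version, and with the bucket and paths made up:

    import sys
    from pathlib import Path

    import boto3

    LOG_DIR = Path("/var/log/test-rig")    # hypothetical local log directory
    BUCKET  = "factory-test-rig-logs"      # hypothetical bucket name

    s3 = boto3.client("s3")

    # Called from cron every 15 minutes; if the factory internet is down,
    # the upload just fails and the next run picks everything up again.
    for log in LOG_DIR.rglob("*.log"):
        key = str(log.relative_to(LOG_DIR))
        try:
            s3.upload_file(str(log), BUCKET, key)
        except Exception as e:             # flaky network: shrug and retry next run
            print(f"upload failed for {key}: {e}", file=sys.stderr)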
I ended up using capybara (a Ruby gem for writing browser automation instructions for system tests in web apps) to automate all of the “clicks” to make the updates in the system.
It actually worked pretty well. It wasn’t fast, but it was better than having a human keep the data in sync.
After a few years, the platform released a REST API, and we transitioned to that. But browser automations worked great in the meantime!
edit: spelling
A friend of mine was hit with a "prank" that created a directory hierarchy so deep the OS couldn't get down to the lowest levels of it. Not even `rm -rf` would work, and I couldn't `cd` to the lower levels to start deleting things one at a time.
I realized I could `mv` the top-level directory to be a sibling of its parent rather than a child of it, `cd` into that, delete all the files, then do the same with any subdirs. So I was able to script that and delete the files/dirs from the top down, rather than from the bottom up. It took a while, but it worked.
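The same trick, sketched in Python: instead of descending into the too-deep tree, keep pulling each subdirectory up next to the top directory so the working path never gets long, deleting files as you go.

    import os
    from pathlib import Path

    def delete_deep_tree(top):
        """Delete a pathologically deep tree top-down by repeatedly moving
        subdirectories up to be siblings of `top`, so no path gets long."""
        top = Path(top)
        pending = [top]
        counter = 0
        while pending:
            d = pending.pop()
            with os.scandir(d) as it:
                entries = list(it)
            for entry in entries:
                if entry.is_dir(follow_symlinks=False):
                    counter += 1
                    flattened = top.parent / f"flat_{counter}"  # short sibling path
                    os.rename(entry.path, flattened)
                    pending.append(flattened)
                else:
                    os.unlink(entry.path)
            os.rmdir(d)   # now empty

    # delete_deep_tree("/home/victim/pranked")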
That was also slow, but I was able to speed it up by using compound keys and binary search to do range queries over "tables" based on key prefixes (the bulk of the data was time-series/event-like).
I wasn't able to use IndexedDB at the time due to compatibility issues, as far as I recall.
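The idea, in a Python-flavoured sketch (the original lived in the browser): keep one sorted list of compound keys like "events:2015-06-01T12:00:00:a1" and binary-search it to find the slice for a given prefix.

    import bisect

    class TinyStore:
        """Key-value store with range queries over compound keys. Sketch only."""
        def __init__(self):
            self.keys = []      # kept sorted
            self.values = {}

        def put(self, key, value):
            if key not in self.values:
                bisect.insort(self.keys, key)
            self.values[key] = value

        def range(self, prefix):
            """All (key, value) pairs whose key starts with `prefix`, via binary search."""
            lo = bisect.bisect_left(self.keys, prefix)
            hi = bisect.bisect_left(self.keys, prefix + "\uffff")  # just past the last match
            return [(k, self.values[k]) for k in self.keys[lo:hi]]

    store = TinyStore()
    store.put("events:2015-06-01T12:00:00:a1", {"type": "login"})
    store.put("events:2015-06-02T08:30:00:b2", {"type": "click"})
    print(store.range("events:2015-06-01"))   # only the first event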
But I did notice that the most-recently-added node had 16GiB of physical RAM, for some reason. So I added a hypervisor and spawned two more VMs inside of it, each running ElasticSearch as well, thus increasing our cluster capacity by more than one year of growth!
Wiped the battery leavings off, then drew the circuits back on with a pencil. Then thought to use sandpaper to sever the Windows key so I’d never again minimize during Counter Strike.
Keyboard worked for years after that.
This came about because there wasn't any database on the server and SQLite didn't exist yet.
This solution worked quite well for more than 20 years. The file grew to hold hundreds of thousands of orders.
My only regret is that I should have charged more. The ROI is unbelievable, the amount I charged is a rounding error. The thing got replaced when the original owner passed away.
The eBank AI art generator and the social media site netwrck.com are running locally on my GPUs.
It would have cost a fortune in the cloud.
So, we set out to find some way of inferring the timeline from the data itself (RNA-seq and other molecular assays from the blood, in this case). The first thing we tried was to apply some standard methods for "pseudo-time" analysis, but these methods are designed for a different kind of data (single-cell RNA-seq) and turned out not to work on our data: for any given patient, these methods were only slightly better than a coin flip at telling whether Sample 2 should come after Sample 1.
Eventually, we gave up on that and tried to come up with our own method. I can't give the details yet since we're currently in the process of writing the paper, but suffice it to say that the method we landed on was the result of repeatedly applying the principle of "try the stupidest thing that works" at every step: assuming linearity, assuming independence, etc. with no real justification. As an example, we wanted an unbiased estimate of a parameter, and we found one way that consistently overestimated it in simulations and another that consistently underestimated it. So what did we use as our final estimate? Well, the mean of the overestimate and the underestimate, obviously!
All the while I was implementing this method, I was convinced it couldn't possibly work. My boss encouraged me to keep going, and I did. And it's a good thing he did, because this "stupidest possible" method has stood up to every test we've thrown at it. When I first saw the numbers, I was sure I had made an error somewhere, and I went bug hunting. But it works in extensive simulations. It works in in vitro data. It works in our COVID-19 data set. It works in other COVID-19 data sets. It works in data sets for other diseases. All the statisticians we've talked to agree that the results look solid. After slicing and dicing the simulation data, we even have some intuition for why it works (and when it doesn't).
And like I said, now we're preparing to publish it in the next few months. As far as we're aware (and we've done a lot of searching), there's no published method for doing what ours does: taking a bunch of small sample timelines from individual patients and assembling them into one big timeline, so you can analyze your whole data set on one big timeline of disease progression.
It was only her machine. It worked on all others.
I looked at the file, and it was a tad smaller than on the other machines.
I disabled her antivirus, and it worked!
Realizing Kaspersky was taking a bite out of the report, I created a self-signed TLS certificate so the antivirus could read it.
Worked.
I used to work full time as one of a two-person IT department at a consulting company that worked with construction.
The owner of the company wrote backend business logic code nearly 30 years ago, in dBASE 5 for DOS, and switching to anything else was a non-starter. This code base was his baby, and he made it clear he’d sooner close up shop than rewrite it, despite the many reasons why it would have been a better idea than what followed.
When I started with the company, they were using computers running Windows XP, which shipped a 16-bit/DOS emulation layer called NTVDM, which was used to run this terrible dBASE software. This was after Windows 7 was released, and we knew that using NTVDM on our office desktops wasn’t going to cut it for much longer — only 32-bit Windows supported NTVDM, and most newer hardware was only getting drivers for the 64-bit variant of Windows.
Of course we tried DOSBox first, and it almost worked perfectly, but we were bottlenecked by a very slow network write speed even on a fast Core i7 at the time. This wasn’t going to cut it, either.
At the time I had been first seriously getting into virtualization, had gotten pretty good at PowerShell, and was intimately familiar with Windows’s RemoteApp functionality from a prior project. This was software that would serve any running Windows GUI application to a remote thin client. And when I tried it, an NTVDM-powered COMMAND.COM window worked just fine as a RemoteApp! Things were getting interesting.
I ended up using PowerShell + DOS batch files to glue together a handful of tools to ultimately build extremely stripped-down and locked-down 32-bit Windows Embedded VM images, with just enough components to run these DOS applications and serve them over RemoteApp sessions. I basically built a fucking CI/CD system for remotely running DOS applications on thin clients.
The best part? Now this stupid DOS software that we couldn’t get rid of was even usable on Linux, Macs, and mobile devices. Despite it being an awful idea, we really gave that horrifying piece of software a new lease on life!
I no longer work there full time, but I still consult to them occasionally, and to this day they use this setup, and it still works super well!
It’s horribly insecure (though I think we managed to segregate that Windows VM and its access very well), and probably violates numerous EULAs, but it was just about the only way to keep that specific application in service…
(By the way, I’m looking for full time work at the moment if anyone needs a devops/automation engineer that can wrangle cursed environments… My email is hn(at)ibeep(dot)com.)
So I wrote a cron job that, at 1905, started a script which sat in a loop: drop the connection, dial again, then `sleep` for 29 minutes, until the time got to 0655.
Very stupid, but it worked, I guess.
It was the handover point for the delivery of natural gas for a large field and a quarter of the country’s energy supply.
The engineers wired an extra serial port onto the lead and plugged it into a printer we had going spare.
Later I replaced F:\login.exe with my own version written in Turbo Pascal to get the edge over a nemesis or two. :-D
Whenever a new device was connected, the people who ran the ethernet for us were nice enough to connect patch cables to the building switches. The on-site techs would go set up whatever was connecting, and we'd go hunting through disabled ports for one that came up with the matching MAC. This could take up to 30 minutes depending on the size of the switch.
One day I had enough time to scrape together some VBScript in an Excel document we used as our day-to-day documentation of our management IPs. It would snag the list of disabled interfaces from your clipboard, run a simple regex, generate a command to select all the interfaces, and shove it back into your clipboard.
It was disgusting, but it also turned 30 minutes of mind-numbing work, with the on-site techs sitting on their hands, into around 5. It stuck around for about 3 years.
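The guts of it were about this much logic, shown here in Python rather than VBScript and assuming Cisco-style "show interfaces status" output (clipboard handling omitted):

    import re

    def build_interface_range(show_output):
        """Pull the disabled ports out of pasted `show interfaces status`
        output and emit one `interface range` command to select them all."""
        disabled = re.findall(r"^(\S+)\s+.*\bdisabled\b", show_output, re.MULTILINE)
        if not disabled:
            return None
        return "interface range " + ", ".join(disabled)

    sample = """\
    Gi1/0/1   uplink        connected    trunk  a-full a-1000 10/100/1000BaseTX
    Gi1/0/7                 disabled     1      auto   auto   10/100/1000BaseTX
    Gi1/0/9                 disabled     1      auto   auto   10/100/1000BaseTX
    """
    print(build_interface_range(sample))
    # interface range Gi1/0/7, Gi1/0/9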
https://github.com/MarginaliaSearch/MarginaliaSearch/blob/ma...
It works annoyingly well.
For whatever reason, there was an issue no one could figure out: if two networked computers, each attached to a printer, couldn't communicate (e.g. ping each other), then printing from the local machine itself would stop working.
I wrote a small Erlang application which would monitor an Erlang pid on the other computer and restart the network interface in case it ever lost contact, which generally made things work again.
Obviously many ways to solve (such as figuring out why it behaved like that in the first place!!) but I was learning Erlang at the time and it seemed a neat way to do it.
I was writing the motor controller code for a new submersible robot my PhD lab was building. We had bought one of the very first compact PCI boards on the market, and it was so new we couldn't find any cPCI motor controller cards, so we bought a different format card and a motherboard that converted between compact PCI bus signals and the signals on the controller boards. The controller boards themselves were based around the LM629, an old but widely used motor controller chip.
To interface with the LM629 you have to write to 8-bit registers that are mapped to memory addresses and then read back the result. The 8-bit part is important, because some of the registers are read or write only, and reading or writing to a register that cannot be read from or written to throws the chip into an error state.
LM629s are dead simple, but my code didn't work. It. Did. Not. Work. The chip kept erroring out. I had no idea why. It's almost trivially easy to issue 8-bit reads and writes to specific memory addresses in C. I had been coding in C since I was fifteen years old. I banged my head against it for two weeks.
Eventually we packed up the entire thing in a shipping crate and flew to Minneapolis, the site of the company that made the cards. They looked at my code. They thought it was fine.
After three days the CEO had pity on us poor grad students and detailed his highly paid digital logic analyst to us for an hour. He carted in a crate of electronics that were probably worth about a million dollars. Hooked everything up. Ran my code.
"You're issuing a sixteen-bit read, which is reading both the correct read-only register and the next adjacent register, which is write-only", he said.
I showed him in my code where the read in question was very clearly a CHAR. 8 bits.
"I dunno," he said - "I can only say what the digital logic analyzer shows, which is that you're issuing a sixteen bit read."
Eventually, we found it. The Intel bridge chip that did the bus conversion had a known bug, clearly documented in an 8-point footnote on page 79 of the manual: 8-bit reads were translated to 16-bit reads on the cPCI bus, and then the 8 most significant bits were thrown away.
In other words, a hardware bug. One that would only manifest in these very specific circumstances. We fixed it by taking a razor knife to the bus address lines and shifting them to the right by one, and then taking the least significant line and mapping it all the way over to the left, so that even and odd addresses resolved to completely different memory banks. Thus, reads to odd addresses resolved to addresses way outside those the chip was mapped to, and it never saw them. Adjusted the code to the (new) correct address range. Worked like a charm.
But I feel bad for the next grad student who had to work on that robot. "You are not expected to understand this."
The edtech company I worked for was "web first" meaning students consumed the content from a laptop or tablet instead of reading a book. It made sense because the science curriculum for example came with 40+ various simulations that helped explain the material. A large metropolitan city was voting on new curriculum and we were in the running for being selected but their one gripe was that they needed N many books in a classroom. Say for a class of 30 they wanted to have 5 books on backup just in case and for the teachers that always like a hardcopy and don't want to read from a device.
The application was all Angular 1.x, reading content from a CMS, and we could update it in realtime whenever edits needed to be made. So we set off to find a solution for making some books. The design team started from scratch, going page by page to see how long it would take to make a whole book in InDesign, but the concept of multiple people editing doesn't really exist in that software. Meanwhile, my team was brainstorming a code pipeline to auto-generate the book directly from the code that was already written for the web app.
We made a route in the Angular app for the entire "book": a stupidly simple for loop that fetched each chapter, and each lesson in that chapter, and rendered it all out on one stupidly long page. That part was more or less straightforward, but then came the hard part of styling that content for print. We came across Prince XML, which, fun fact, was created by one of the inventors of CSS. We snagged a license and added some print-target custom CSS that did things like "add a blank page for padding because we want each new chapter to start on the left side of the open book". But then came the devops portion that really messed with my head.
We needed a headless browser to render all of this, then the source with all the images, etc. had to be downloaded into a folder and passed to Prince XML for rendering. Luckily we had an ECS pipeline, so I tried to get it working in a container. I came up with a hack: have the rendering for-loop over the chapters/lessons print something to the console when it finished, and use that as the "hook" for saving the page content to the folder. But then came the mother of all "scratching my head" moments when Chromedriver started randomly failing for no reason. It worked when we did a lesson. It worked when we did a chapter. But it threw a nondescript error when I did the whole book. Selenium uses Chromedriver, and Chromedriver comes straight from Google and the Chromium repo. That meant diving into the C++ code to trace it down, until I finally found the stack trace. Well, yeehaw, I found an overflow error in the transport protocol that Chrome DevTools uses to talk to the "tab/window" it's reading from. I didn't have time to get to the bottom of the true bug, so I just cranked the buffer up to like 2 GB and recompiled Chromium with the help of my favorite coworker, and BOOM, it worked.
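Stripped of the ECS plumbing, the render step was conceptually along these lines (a Python/Selenium sketch; the console marker, URL and file names are invented, and the real thing ran the patched Chromedriver build):

    import subprocess
    import time

    from selenium import webdriver

    options = webdriver.ChromeOptions()
    options.add_argument("--headless")
    options.set_capability("goog:loggingPrefs", {"browser": "ALL"})  # expose console.log output
    driver = webdriver.Chrome(options=options)

    driver.get("https://app.example.com/#/print/whole-book")  # hypothetical 'whole book' route

    # The rendering loop console.log()s a marker when the last lesson is done;
    # poll the browser console until it shows up.
    deadline = time.time() + 60 * 60
    while time.time() < deadline:
        if any("BOOK_RENDER_DONE" in entry["message"] for entry in driver.get_log("browser")):
            break
        time.sleep(5)

    with open("book.html", "w") as f:
        f.write(driver.page_source)
    driver.quit()

    # Hand the static HTML (plus the downloaded assets) to Prince for a print-quality PDF.
    subprocess.run(["prince", "book.html", "-o", "book.pdf"], check=True)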
But scaling this thing up was now a nightmare, because we had a Java Dropwizard application reading an SQS queue that kicked off the Selenium headless browser (with the patched Chromedriver), which downloaded the page, and now the server needed a whopping 2 GB per book. That made the Dropwizard application a nightmare to memory-manage, and I had to do some suuuuper basic multiplication for the memory so that I could parallelize the pipeline.
I was the sole engineer for this entire rendering application, and the rest of the team assisted on the CSS, styling and content edits for each and every "book". At the end of the day, I calculated that I saved roughly 82,000 hours of work, based on the current pace at which they could make a single chapter of a book, multiplied by all the chapters and lessons for all the different states (Florida is fucked and didn't want to include certain lines about evolution, etc.), so a single book for a single grade turns into N many state-specific "editions".
82,000 hours of work is 3,416.6667 days of monotonous, grueling, manual, repetitive design labor. Shit was nasty but it was so fucking awesome.
Shoutout to John Chen
But for real I'd get fired if I said...
They had lost some file, of course, and needed to pull it from yesterday's backup. "Ok, sure I'll just restore it for you." Nope, backups failed the previous night. I don't know exactly what it was, but it was a huge deal and I was tasked with figuring it out, that was now my job.
The first thing I did was look at the log history. The only pattern I saw was that backups never failed on Monday nights. Huh.
I have no idea what that could mean, so I move on and write a script to ping the NAS from their server using task scheduler every 5 minutes and write a failure to a log. Maybe it's just offline, I have no idea what the cause of the failure is at this point.
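Nothing fancy; something like this dropped into Task Scheduler every five minutes (Python shown here for brevity, with a placeholder address; the original was whatever scripting was handy on that server):

    import datetime
    import subprocess

    NAS_IP = "192.168.1.50"          # placeholder address
    LOG    = r"C:\scripts\nas_ping.log"

    # Windows ping: -n 1 sends one echo request; a non-zero exit code means no reply.
    result = subprocess.run(["ping", "-n", "1", NAS_IP], capture_output=True)
    if result.returncode != 0:
        with open(LOG, "a") as f:
            f.write(f"{datetime.datetime.now().isoformat()} NAS unreachable\n")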
A couple weeks later, the backup fails and I check the log. Sure enough, the NAS dropped off the network overnight, and came back online in the morning. So I call my contact (he was their CAD guy, technical enough to be able to help me check things out) and ask if anything happened overnight, power outage, anything. He isn't aware of anything. The NAS is online, uptime is good, hasn't been power cycled in months.
I have him look at it and there's a MAC address sticker on it so I'm able to trace it back to the switch. Check the switch, sure enough, disconnected during the time shown by my ping log. I have him plug it into a different port, replace both the patch cable and cable connected to the NAS, and disable the previous port. And wait.
The next time it happens, I was able to talk them into buying a new NAS thinking it has to be the NIC on it. It's about 3 years old so it's an easy sell, should probably replace it anyway if it's that important. We ship it out, they pop it in, we transfer the data, and we wait.
Happens again.
So now at this point we are talking about replacing switches, routers, and firewalls. I get 3 different vendors involved. No one is seeing anything out of order, and all their hardware is out of warranty.
At this point, the network has been wiresharked to death and everything looks great, absolute dead end. Customer is not happy about having to potentially spend $10k on network gear we can't even prove is bad so I get routed the on call for this backup failure.
It happens and I drive over there at 3AM.
I arrive, find who is in charge, and they direct me to the network closet. I find out that the NAS is not there. I ask about the NAS. He says oh yeah, that's in a different room.
He brings me there, and it's in a closet down a flight of stairs from the owner's office, with the network cable running under the door. The cable is lying on the floor in the office, the end completely stripped.
Turns out, that was on the route to some supply closet that only third shift used. Third shift was 4 10's, Tues-Sat. They were tripping over the cable and the owner was plugging it in when he arrived in the morning. He was out of the loop for the whole thing so had no idea what was going on. He said it didn't seem to affect his "internet" so he never mentioned it.
So what was the hack? I threw a rug on the cable and drove home.
(Yes we did move it later, but there were some network changes out of my control that needed to happen first.)
So what happened is: the customer would order their phone subscription on the front end, that would create a job file that would be sent to a scheduler that managed 10 Windows VMs that used a Ruby Watir script to direct IE6 to fill in the data from the job file on the old decrepit website.
It's the most horrific hack that I ever touched (I forgot exactly, but I had to make some adjustments to the system), but it worked perfectly for a couple of years until those providers finally updated their websites.
Me: oh yeah, I just hacked it together and never finished cleaning it up to make sense and be adaptable. Here's enough context to hack it again quickly, or if you have time here's my old notes on how to do it right.
I.e., stupid things that seem to work for now tend to turn into technical debt later.