Signed, Burnt out software developer
It allows you to encrypt anything, and anyone can decrypt it after a certain point in the future has passed. It uses the new NMA announcements (signed timestamps) from the Galileo satellites to “prove” that the time has passed and generate the decryption key.
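The general shape of this can be sketched with a toy delayed-key-release scheme. This is my assumption about the idea, not the actual protocol; `make_chain` and `xor_bytes` are hypothetical stand-ins and not real cryptography:

```python
# Toy sketch of time-lock encryption via delayed key release: the
# broadcaster commits to a hash chain and reveals one key per time slot,
# in reverse generation order, so revealing the slot-i key never exposes
# the slot-(i+1) key. A ciphertext bound to slot N becomes decryptable
# by anyone only once the slot-N key has been broadcast.
import hashlib

def make_chain(seed: bytes, slots: int) -> list[bytes]:
    """Key for slot i is chain[i]; later slot keys are revealed later."""
    chain = [hashlib.sha256(seed).digest()]
    for _ in range(slots - 1):
        chain.append(hashlib.sha256(chain[-1]).digest())
    # Reverse so H(key for slot i+1) == key for slot i: earlier keys
    # are derivable from later ones, but not the other way around.
    return list(reversed(chain))

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Toy stream cipher: XOR with a SHA-256-derived keystream."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

chain = make_chain(b"broadcaster secret", slots=10)
ciphertext = xor_bytes(b"open after slot 7", chain[7])  # locked to slot 7
# ...time passes; once the slot-7 key is broadcast, anyone can decrypt:
plaintext = xor_bytes(ciphertext, chain[7])
```

The useful property is that decryption needs no interaction with the encryptor: anyone who receives the (signed) slot key can derive the plaintext.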
It was more than a few days; in fact, it was quite the work in progress. When I started, management had no way of visualizing our call data on an as-needed basis. Before the dashboard, if they needed some data for fire calls, they went to one person, asked for the data, and waited until that person got around to it, at which point it was delivered in an Excel spreadsheet. Same with EMS data, through a different person.
I started to work on v2 but it kind of died on the vine as I knew I would never get support from I.T. [2]
There were also some scripts I created to gather data, compile it, and send it in a daily email to various people within the department. But once I left the department, I lost access to the department network. [3]
[1]: https://www.gigofone.com/projects/dataviz
I just found the DOS version hosted at the Internet Archive[1]
Here's a later JavaScript version[2]
It was good for a while, but eventually it wasn't optimistic enough.
1 - https://archive.org/details/DPRICE20_ZIP
2 - https://web.archive.org/web/20040302060522/http://www.tsrcom...
However, the coolest thing would have to be the project I'm in the middle of now, but it requires some explanation. At Hackerspace.gent, the center of the lounge area is Bloembak[3], a 1x1m table with a 32x32-pixel display covering its surface. During a discussion at a local bar with its creator, I decided it would be absolutely brilliant to be able to run shaders on it. So in my spare time over the last week, I wrote an interpreter for SPIR-V shaders to check my understanding, and then, over the course of about 2 days, I rewrote the entire thing to target LLVM. While it's not finished (I've only implemented ~2/3 of the opcodes in SPIR-V and 1/10 of GLSL.std.450), it's already sufficient to run quite a few shaders off Shadertoy at a reasonable framerate.
[1] https://github.com/thequux/xbattbar3
[2] https://github.com/thequux/wattbar
[3] Sadly, I have no direct link, but there are pictures on the creator's site here: http://bloemi.st/landing/
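The interpreter half of the approach above can be sketched in a few lines. This is a hypothetical illustration of the technique, not the project's code; `OpExtInst_Sqrt` is a made-up stand-in for a GLSL.std.450 extended instruction:

```python
# Minimal sketch of a SPIR-V-style interpreter: SPIR-V is SSA form, so
# every instruction assigns a fresh result id exactly once. That makes
# the interpreter a dispatch loop over (opcode, result, operands) tuples
# writing into an id -> value table.
def run(instructions, inputs):
    vals = dict(inputs)  # result id -> computed value
    for op, result, *operands in instructions:
        args = [vals[o] for o in operands]
        if op == "OpFAdd":            # real SPIR-V opcode: float add
            vals[result] = args[0] + args[1]
        elif op == "OpFMul":          # real SPIR-V opcode: float multiply
            vals[result] = args[0] * args[1]
        elif op == "OpExtInst_Sqrt":  # hypothetical stand-in for GLSL.std.450 Sqrt
            vals[result] = args[0] ** 0.5
        else:
            raise NotImplementedError(op)
    return vals

# %3 = OpFMul %1 %1 ; %4 = OpFAdd %3 %2 ; %5 = Sqrt %4
prog = [("OpFMul", 3, 1, 1), ("OpFAdd", 4, 3, 2), ("OpExtInst_Sqrt", 5, 4)]
print(run(prog, {1: 3.0, 2: 7.0})[5])  # sqrt(3*3 + 7) = 4.0
```

The LLVM rewrite presumably replaces the `if`/`elif` dispatch with emitting the corresponding IR instruction per opcode, which is why it runs so much faster.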
My goal is to create a database with seamless merging, built on my Merkle CRDT implementation[3], as a distributed system that handles non-conflicting merges. CRDTs allow data to be merged seamlessly without replicas overwriting each other. The Braid protocol is very interesting too. I recommend reading about the Shelf CRDT algorithm.[4]
I am also interested in distributed systems and programming language design. See my profile for my ideas.
[1]: https://github.com/samsquire/text-diff
[2]: https://blog.jcoglan.com/2017/02/12/the-myers-diff-algorithm...
[3]: https://github.com/samsquire/merkle-crdt
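The merge property CRDTs provide can be shown with the simplest one, a grow-only set. This is a sketch of the general idea, not the linked implementation:

```python
# Grow-only-set CRDT: merge is set union, which is commutative,
# associative, and idempotent, so replicas converge to the same state
# no matter the order in which they exchange updates.
import hashlib

class GSet:
    def __init__(self, items=()):
        self.items = set(items)

    def add(self, item: str) -> None:
        self.items.add(item)

    def merge(self, other: "GSet") -> "GSet":
        return GSet(self.items | other.items)

    def digest(self) -> str:
        # Merkle-style content hash: equal states hash equally, so two
        # replicas can compare digests to cheaply detect divergence
        # before deciding whether to exchange data.
        return hashlib.sha256("\x00".join(sorted(self.items)).encode()).hexdigest()

a, b = GSet({"x"}), GSet({"y"})
a.add("z")
assert a.merge(b).digest() == b.merge(a).digest()  # merge order doesn't matter
```

A Merkle CRDT generalizes the `digest` idea to a tree of hashes, so replicas can locate exactly which subtrees differ instead of comparing whole states.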
It's under 500 LoC, and the only deps are `terser` and `node-watch`.
https://github.com/uxtely/js-utils/tree/main/static-pages-bu...
The first version of it I wrote in a couple of hours.
And then it went on to be quite popular and has lived a life of its own.
https://github.com/BaptisteV/Replaceator/tree/master/Replace...
You’ve all been there. You spend months working with a new stack or framework or environment, and the deadline looms. The DEAD LINE. And you know something isn’t quite right.
And it comes to you. By virtue of the emergent coder phenomenon, you KNOW you can rewrite your months and months of code, and do it RIGHT; if you don't, it will be mediocre, and if you fail (it still has to pass testing), it will be your disaster.
DO IT!
It can be done!
You can succeed!
There is a programmer deity out there who blesses drastic, self-determined rewrites.
It can all be done in a few days. And it can only be done in a mad dash of determination.
Of course, as the users came, scope grew, and now it supports a lot more than just email alerts when static websites go down - but I was amazed at how quickly you can spin up a business these days (and with Supabase/PlanetScale it's probably even faster in 2022).
That would have been possible with a feature like percolation[1] in Elasticsearch, but I felt that the overhead of maintaining state (percolator queries) by using a network service would be excessive and that building an in-process alternative would be feasible.
The result is hashedixsearch[2], a pure-Python search engine library with support for stemming, synonyms and a few other features[3] to support the use-case.
It builds upon inverted index support provided by the hashedindex[4] library.
[1] - https://www.elastic.co/guide/en/elasticsearch/reference/curr...
[2] - https://pypi.org/project/hashedixsearch/
[3] - https://github.com/openculinary/hashedixsearch/blob/6980ee63...
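The in-process approach boils down to an inverted index. Here's a toy illustration of that data structure; it's an assumption-laden sketch of the general technique, not the hashedixsearch API:

```python
# Toy in-process inverted index: map each term to the set of document
# ids containing it; a query intersects the posting sets of its terms.
# (A real library like hashedixsearch adds stemming, synonyms, ranking.)
from collections import defaultdict

def build_index(docs: dict[int, str]) -> dict[str, set[int]]:
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index: dict[str, set[int]], query: str) -> set[int]:
    # AND semantics: a document must contain every query term.
    postings = [index.get(t, set()) for t in query.lower().split()]
    return set.intersection(*postings) if postings else set()

docs = {1: "chop the onion", 2: "dice the onion and garlic", 3: "boil rice"}
index = build_index(docs)
print(sorted(search(index, "onion")))        # [1, 2]
print(sorted(search(index, "dice garlic")))  # [2]
```

Because the whole index lives in the process, there's no network service to keep in sync - the trade-off the comment describes versus percolator queries.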
The department was paying contracting companies to inventory assets.
At the end of the inventory, the contracting company would deliver the inventory on paper which would be filed in a managers office for a few years.
The inventory database was fairly simple, just a few tables, when the application was first developed.
Eventually a different manager decided that electronic data entry would be useful.
We added a few forms for data entry using a tablet.
With the data in a database, the person doing the inventory could pull up the inventory from the previous years and check off the equipment that was still at the location.
There was a form for entering new equipment and a form for transferring equipment between locations.
The manager who helped me got an award for reducing the time required for the inventory from months down to a few weeks.
I initially started the project just to maintain my development skills.
The inventory database remained in use for several years with very little maintenance except for writing a few reports and showing a few people how to use the reports.
Instead of a not very useful pile of paper, we had a database that contained the inventory.
When a manager wanted to know how many laptops or desktops needed to be refreshed each year, he could run a report listing the devices with specific attributes, like how much memory or which CPU type, and use it to budget for the next year's equipment refresh.
This was over ten years ago, and the sites were managed by different IT organizations, so at the time, it was just a simple task to save time for people on a different team.
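The kind of refresh report described above can be sketched against a guessed-at schema. The table and column names here are hypothetical; the original comment doesn't show them:

```python
# Sketch of an asset-refresh report: list devices below a memory
# threshold so next year's refresh can be budgeted. Uses an in-memory
# SQLite database with a made-up single-table schema.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE asset (
    tag TEXT PRIMARY KEY, kind TEXT, location TEXT,
    memory_gb INTEGER, cpu TEXT)""")
db.executemany("INSERT INTO asset VALUES (?, ?, ?, ?, ?)", [
    ("A100", "laptop",  "HQ",     4,  "i5-3320M"),
    ("A101", "laptop",  "Branch", 16, "i7-8650U"),
    ("A102", "desktop", "HQ",     8,  "i5-6500"),
])

# Devices with less than 8 GB of memory are refresh candidates.
rows = db.execute(
    "SELECT tag, kind, location FROM asset "
    "WHERE memory_gb < ? ORDER BY tag", (8,)).fetchall()
print(rows)  # [('A100', 'laptop', 'HQ')]
```

The point of the original story holds regardless of schema: once the inventory is in a database instead of on paper, this query is seconds of work instead of weeks.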
K8s debugging thing, which I affectionately call kubebkwd to contrast with kubefwd. It covers the main use case of Telepresence that I was interested in - i.e., proxying to a local host instance from a local k8s cluster like docker-desktop. It does this without any special localhost network configuration beyond what docker-desktop or other local k8s environments set up by default, and it only makes changes in the cluster, so hopefully it will be more robust to host OS configuration than Telepresence, which I've had problems with. But be warned: it's quite hacky, barely tested, and doesn't clean up after itself - inconsistent style, here be dragons, probable Dunning-Kruger-effect-driven development, etc.
Even though it was fairly rudimentary and I never formally tested anything, it seemed to work well enough.
Eventually, I rewrote it using Svelte which made an already simple app trivial to reproduce.