What's mind-blowing to me is that I can talk with GPT about basically any topic and it can point me in the right direction, I can talk with it to get clarity on anything that's confusing, and it's like a way to collaborate on thought without a human. That's mind-blowing to me!
Only to realize in the process that all I really needed was to modify the shader program's depth-checking behavior. Even just disabling it altogether looks fine and makes everything way faster. A year ago I never would have dreamed of implementing WBOIT (weighted blended order-independent transparency) in one night and then just leaving it in a branch in favor of a one-line change.
Along the way I worked out a little trick of my own, using what I'd learned: modifying the depth values in the fragment shader can allow selective alpha-blending/occlusion. The idea was to put everything I wanted blended order-independently at an equal z position and then set the depth function to "less than or equal" instead of "less than". I used the occluding geometry's depth buffer as a source to modify the translucent geometry's depth values in the fragment shader (with a slight offset to keep it behind the occluding stuff).
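The trick above hinges on how the depth comparison treats equal z values. A minimal sketch (in Python, standing in for the GPU's per-pixel depth test; the fragment names are made up) of why switching from "less" to "less-or-equal" matters when translucent layers share one z:

```python
# Simulate a one-pixel depth buffer. With LESS, the second translucent
# fragment at the same z fails the test; with LEQUAL, all equal-z
# fragments pass and can be blended order-independently, while anything
# written at a nearer z still occludes them.

def depth_test(frag_z, buffer_z, func):
    """Return True if the fragment passes the depth test."""
    if func == "LESS":
        return frag_z < buffer_z
    if func == "LEQUAL":
        return frag_z <= buffer_z
    raise ValueError(func)

def render(fragments, func, far=1.0):
    """Run fragments through the depth test in order; return the ones kept."""
    buffer_z = far
    kept = []
    for name, z in fragments:
        if depth_test(z, buffer_z, func):
            kept.append(name)
            buffer_z = min(buffer_z, z)  # depth write
    return kept

# Two translucent layers deliberately placed at the same z.
frags = [("glass_a", 0.8), ("glass_b", 0.8)]
print(render(frags, "LESS"))    # ['glass_a'] -- glass_b fails 0.8 < 0.8
print(render(frags, "LEQUAL"))  # ['glass_a', 'glass_b'] -- both survive
```

An opaque fragment written earlier at, say, z=0.3 would still reject both glass layers under either function, which is what keeps the occlusion working.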
There's absolutely no way I'd have had the guts to try all that in one night without the robot tools.
None of the scripts it produced are incredible or anything (the tasks being more menial than logic-based), but considering the intricacies of bash, and how often I faceplant into them even as an experienced sysadmin, I feel the ability to just say "write a script that compares a list of agents against every manager and outputs the ones not found anywhere" and get back a basically working script on the first try is impressive.
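The task described boils down to a set difference. The original was a bash script; here's a sketch of the same logic in Python, with hypothetical file names and a one-name-per-line format assumed:

```python
# Report agents that appear in no manager's list.
# Assumed input: an agents file and one file per manager, one name per line.
import sys
from pathlib import Path

def unassigned_agents(agents_file, manager_files):
    agents = set(Path(agents_file).read_text().split())
    assigned = set()
    for mf in manager_files:
        assigned |= set(Path(mf).read_text().split())
    return sorted(agents - assigned)

if __name__ == "__main__":
    # Usage (hypothetical): python find_unassigned.py agents.txt mgr_*.txt
    for name in unassigned_agents(sys.argv[1], sys.argv[2:]):
        print(name)
```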
// FIXME: Implement this class paying attention to the unit tests
at which point it will go look up the unit test class, read it, understand what the code needs to do, and proceed to edit the necessary algorithms into the code. The tool is a mix of response parsing/execution and prompt building, so it can do build/test/fix cycles. At the end you get a git branch with the work it's done.

By this point I'm kind of developing an intuition for what GPT-4 can and can't do, but while pushing against what it can do I still often find myself impressed. Maybe not "mind-blown" anymore, because it's amazing how quickly you get used to this stuff, but still. Not only does it do an excellent job of figuring out what to do when the instructions are clear enough, it also has commands for adding library dependencies and will use relevant open-source libs to make its job easier.
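The build/test/fix cycle described is, at its core, a feedback loop. A hypothetical sketch of that loop shape, where every callable is a stand-in rather than the commenter's actual tool:

```python
# Sketch of a model-driven fix cycle: build a prompt, apply the model's
# proposed edits, run the tests, and feed failures back until they pass
# (or a retry budget runs out). All four callables are injected stand-ins.
def fix_cycle(task, run_model, apply_edits, run_tests, max_rounds=5):
    feedback = ""
    for _ in range(max_rounds):
        edits = run_model(task + feedback)   # prompt building + response parsing
        apply_edits(edits)                   # write changes into the work branch
        ok, log = run_tests()                # build/test step
        if ok:
            return True
        feedback = "\nPrevious attempt failed:\n" + log  # close the loop
    return False
```

The key design point is that the test log goes back into the next prompt, so each round is conditioned on the previous failure.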
The hard part about using this sort of tool is that you can quickly become bottlenecked on figuring out what it is you actually want. It's easy to get ahead of yourself and end up with a mess. This very rapid "painting with code" feel is a bit new; the slowness of typing probably lets your subconscious think ahead a bit when programming normally. Also, I'm still teaching it how to explore the codebase efficiently, so things that require a lot of context aren't quite there yet. I've got a whole plan for how to make that work better, though.
I expect at some point soon some big company or some $100M "seed round" startup will come along and do this better, but I didn't want to wait.
I did the reverse with some regulatory documents too: asked it to summarise them in plain language, then asked specific questions to better wrap my head around the docs. I wouldn't trust it fully for this, of course; it's more of a supplementary learning aid.
I keep hitting limits with the context size, though. E.g. when coding, it becomes cumbersome once your project reaches a certain complexity. You have to carefully construct prompts to get useful outputs. Being able to put the entire source code into a prompt for more contextual responses would really increase the utility.
I had it write non-blocking microcontroller code for a project (about 75% of the output went into production); it helped with variable names and general structure. The first output was bad because it blocked the button read while "breathing" an LED, but then I told it: "non-blocking code please".
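The fix the comment asked for is the standard non-blocking pattern: instead of sleeping inside the LED "breathing" loop (which starves the button read), each pass of the main loop derives the LED state from the clock. A sketch of that pattern in Python with an injected clock (on a microcontroller this role is played by something like `millis()`; all names here are made up):

```python
# Non-blocking "breathing" LED plus button polling.
# Brightness is computed from elapsed time, never accumulated with sleeps,
# so the button is polled on every pass of the loop.

def make_breather(period_ms=2000):
    def step(now_ms):
        # Triangle wave 0..255..0 over one period, derived from the clock.
        phase = now_ms % period_ms
        half = period_ms // 2
        if phase < half:
            return phase * 255 // half
        return (period_ms - phase) * 255 // half
    return step

def main_loop(clock_ms, read_button, set_led, iterations):
    breathe = make_breather()
    presses = 0
    for _ in range(iterations):
        now = clock_ms()
        set_led(breathe(now))   # update brightness without blocking
        if read_button():       # button is checked every pass
            presses += 1
    return presses
```

The blocking version would `sleep` between brightness steps; here the loop always falls straight through to the button check, which is the whole point of "non-blocking code please".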
As an example: I recently showed my mother GPT-4. I asked her what she wanted to see, and having little knowledge of the tool's capabilities, she asked me to surprise her. Apparently I had done this before (with GPT-3), as the poem I generated about Petoskey stones was met with little fanfare: "I've seen it generate poems... what else can it do?"
At that point I showed her a Python file and asked GPT to translate it to TypeScript. Needless to say she's not a poet or a programmer, but the fact it could do something like that, entirely outside of her domain, was mind-blowing.
For me, I'm surprised how little I use Google...
Everyone could be producing way more than they currently are.
Everyone could be generating 1 app every 3 hours.
Everyone could be writing entire novels.
Everyone could be writing meaningful Wikipedia articles.
DIY guides should be 100x larger.
Both ChatGPT and Phind insisted on giving me example SMT problems and then incorrect solutions to them. The HE answer was so vague as to be useless, and I don't actually know enough about SDN to verify the last one, but it seemed useful.
Overall it didn’t give me a lot of confidence in the two systems.
Maybe that's an easy thing to know, but I expected not to get a good answer.