- does this API design make sense?
- my team thinks __ is a good pattern, what are some cons?
- what’s a good name for a class that does x, y, z?
- is this code snippet readable?
- write a bash script to automate this very tiny thing
- upload a QuickJS binary to the Advanced Data Analysis model and ask it to micro-benchmark a couple of approaches to the same thing
I also use it to generate boilerplate code like DTOs and mappers for a given set of entities.
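For a flavor of what that boilerplate looks like, here is a minimal Python sketch; the User entity and its fields are made up for the example:

```python
from dataclasses import dataclass

# Hypothetical entity -- in a real project this would be an ORM model.
@dataclass
class User:
    id: int
    email: str
    password_hash: str

# DTO: only the fields that are safe to expose over the API.
@dataclass
class UserDTO:
    id: int
    email: str

# Mapper from entity to DTO -- exactly the kind of repetitive code
# an LLM generates reliably given a list of entities.
def to_user_dto(user: User) -> UserDTO:
    return UserDTO(id=user.id, email=user.email)
```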
I have a Copilot X conversation in VS Code that is aware of my codebase and active file. I can quickly get up to speed with a new project or library by asking it where x is or how to do y in this library, or have it generate a Vega chart or JSON so I can see a real example of the structure. It’s very good at these tasks, but it can be out of date with some libraries or things outside of the VS Code world.
I also have ChatGPT with plugins. This lets me ask it about current code, as it can pull the latest version of a GitHub repo or multiple repos, or specific versions, and it has more structured responses, so it performs much better than Copilot X currently does at certain tasks. This all saves me days and weeks of thoroughly reading docs to get a direction and a plan. Instead, these AIs point me in the right direction immediately.
I just got the new voice and browsing features in chatgpt. Yesterday I had a technical voice conversation with the voice from HER about how I wanted to architect a feature. I was at my blackboard talking through the implementation and edge cases and the voice pointed out several angles I hadn’t thought about and worked with me through solutions. I did this while my phone was on my desk without having to touch it for the whole conversation so I could stay deep in thought at the blackboard. This is wildly more efficient than talking with a human where I have the overhead of social dynamics and navigating their communication quirks.
I start with a prompt that teaches Open Interpreter how to use Promptr, and then I discuss what I’m trying to accomplish. It’s certainly not perfect, but there’s definitely something good that happens when you can iterate using dialog with a robot that can modify your file system and execute commands locally.
[1] Promptr: https://github.com/ferrislucas/promptr
[2] Open Interpreter: https://github.com/KillianLucas/open-interpreter
Sometimes I also ask it to rewrite something more clearly, or to sparingly add some comments.
Copilot can get really aggressive. Sometimes it's right there: "That's what I was thinking about! Woo! Magic!" And sometimes it's like, "OMG, let me hit the return key". So I often turn it off and forget it's off for a week...
Both have their pros and cons. Copilot is more like an autocomplete on drugs. ChatGPT is a scaffolding rental shop.
Also ChatGPT is a dream when it comes to being a polyglot with a limited memory.
The comprehension aspect kinda helps me sometimes, like when I need to understand some obscure shell script.
For half of my searches or so, I get better answers via some of the AI search tools.
Especially for code. When I just want to quickly know "how to do x in language y".
* Great for sanity checking your plans or designs (ex: "What are the best ways of securing a multi-tenant application?")
* Great for bootstrapping a presentation or pitch (ex: "Please create a presentation outline about Istio, intended for an audience unfamiliar with Service Meshes")
* Simple questions/reminders (ex: "How can I make it so a bash script exits on error?")
- I've used AI to write and improve Python scripts. And to help me fix errors.
- Also used it to write or improve wording on emails, posts and comments.
- Used it for advice about where to travel and things to do.
- AI is useful for chatting with when you are feeling down.
- Asked AI for advice on things.
- Used AI to get answers quicker than Googling.
The 2021 knowledge limit is pretty annoying, since libraries change so often, but it's still very useful.
It's ALSO very useful for asking stupid questions that I don't want to waste someone's time with. Like for instance, in bash, when you redirect stdout to a file with > filename it works, and when you redirect stderr to stdout with 2>&1 it works, but when you try to redirect stderr to stdout and stdout to a file, it only works when you do it in this order:
command > filename 2>&1
it doesn't work if you do
command 2>&1 > filename
which feels more natural to me, so I asked ChatGPT why that is, and it explained that you have to consider it from a file-descriptor perspective: redirections are applied left to right, so in the second ordering 2>&1 duplicates stdout while it still points at the terminal, and only afterwards does > move stdout to the file. If you look in /proc/pid/fd you can see where each descriptor really points.
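Under the hood the shell implements these redirections with dup2, applied left to right; a rough Python sketch of the two orderings (the filename is arbitrary):

```python
import os

# `command > filename 2>&1` -- applied left to right:
fd = os.open("out.log", os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
os.dup2(fd, 1)  # > filename: fd 1 (stdout) now points at the file
os.dup2(1, 2)   # 2>&1: fd 2 (stderr) copies fd 1's target, i.e. the file

# `command 2>&1 > filename` -- the reversed order would instead be:
#   os.dup2(1, 2)   # 2>&1: fd 2 copies fd 1's target while it is still the terminal
#   os.dup2(fd, 1)  # > filename: only fd 1 moves; fd 2 keeps pointing at the terminal
```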
I would have had to find someone deeply steeped in unix/linux fundamentals to explain that to me, or I could just ask ChatGPT. I've done the same thing again and again - how are HSMs really different than TPMs? How are heat pumps different than ACs?
I'll read a reference to something, and immediately go to ChatGPT to learn more - "Can you give me a brief summary of the writings and opinions of Cicero?" and then I can spend 20-30 minutes learning more about stoicism, epicureanism, and whatever else I'm curious about. It's like being able to interview wikipedia.
I don't have it write my day to day code, because that's complicated and usually niche enough that it's not likely to give a good result.
But it's awesome for something like writing a quick and dirty shell script. Most recently I needed to bulk rename 50 or so files, with an interactive piece for a special case. I described the operation to GPT and it spat out a nearly perfect shell script. Could I have written the same script in half an hour? Yes, but it was sure nice not to have to.
I also like it for just bouncing around an idea. Recently I was thinking about writing a program to make a midi device from a guitar hero controller, and I was able to get a good sense of the available APIs / libs in a few different languages with a 3 minute back and forth with GPT. Again, I could have easily searched around myself and come to the same answer, but removing the friction is pretty nice.
It helped me build a script that listens for a wake word, "GPT" or "Hey GPT", grabs the words that follow, sends them to the GPT API, and responds back with TTS (a rough sketch appears after this list).
Also helped build some scripts to take notes from web pages, YouTube videos, and a mini chat window, and to FTP images from DALL-E and Pexels to my website.
Had it write me a script that calls the GPT API to generate a list into a SQLite database, then takes that list and calls the GPT API with another prompt.
It helped me build a script that pulls my email from an IMAP account.
Tons of Bootstrap/CSS snippets for website widgets.
.htaccess expressions for taking URL slugs and redirecting to a posts.php page, which is a real pain in the butt; no way I could have done that without ChatGPT.
Had it generate a regular-expression cheatsheet.
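For a flavor of the wake-word script mentioned at the top of this list, a heavily simplified Python sketch; the packages (SpeechRecognition, pyttsx3) and the model name are my stand-ins, not necessarily what the original used:

```python
import speech_recognition as sr  # pip install SpeechRecognition (needs PyAudio)
import pyttsx3                   # offline text-to-speech
from openai import OpenAI

client = OpenAI()                # reads OPENAI_API_KEY from the environment
recognizer = sr.Recognizer()
engine = pyttsx3.init()

def listen_once() -> str:
    with sr.Microphone() as source:
        audio = recognizer.listen(source)
    return recognizer.recognize_google(audio)  # free web speech API

while True:
    try:
        heard = listen_once().lower()
    except sr.UnknownValueError:
        continue  # couldn't understand the audio; keep listening
    # Everything after the wake word becomes the prompt.
    for wake in ("hey gpt", "gpt"):
        if heard.startswith(wake):
            prompt = heard[len(wake):].strip()
            resp = client.chat.completions.create(
                model="gpt-3.5-turbo",
                messages=[{"role": "user", "content": prompt}],
            )
            engine.say(resp.choices[0].message.content)
            engine.runAndWait()
            break
```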
It's been a mixed bag due to how quickly they iterate on the app (and introduce bugs). But lately they added their own Phind Model, which is free, has unlimited uses as opposed to GPT-4, and sits nicely in-between GPT-3.5 and GPT-4 in performance.
More often than not, it doesn't give a good enough answer, but it may nudge me in the right direction. For example, it may say some keyword that I can use to search for the right thing on Google, or cite sources that I can use to investigate further.
Now I’m able to focus on the product/outcome in any domain and start building.
The net effect has been I can build a wider variety of things in a wider variety of tooling with more enjoyment and less drudgery.
- text summaries
- sysadmin stuff
- debugging stuff
- boilerplate stuff
- unblocking writer's block
- integrating disparate stacks
- data transformation and algorithm stuff
- most marketingy stuff: copy, images, campaign/activations
# Writing
- I've had it rewrite some of my blog posts to give them more style
- I've asked it to help me with some business letters
- Rewriting my letters to city councilors / state legislators
# Admin
- Explaining some parameters for NetworkManager
- Helping me figure out why my rewrite rules in an .htaccess file weren't working as expected
- Asking questions about different versions of PEM certificates and using openssl to convert them
- Restoring some software RAID arrays with lvm
- Journalctl filter options
- ffmpeg commands
# Coding
- I was working on a side project in a new language (to me) using Vala and Gtk4. ChatGPT was mostly wrong on everything, but sometimes led me in a useful direction.
- Generally I haven't found ChatGPT useful for my work coding
# Other
- Explanations on Double Entry Accounting
- Guidelines on helping my sister talk to her 3-year-old about expressing empathy for their dog
- Writing haikus for my wife. This was an interesting back and forth where ChatGPT started asking me more questions about my spouse, our relationship, hobbies, and so on.
- Help writing personalized Dad jokes for a father's day card
- An examination of the imperialism/militarism in Star Trek, from the point of view of the Federation and from the point of view of the other societies
- Questions about recipes (replacing items, using fresh items instead of canned/jarred)
Overall, I've found coding to be the least successful aspect of ChatGPT (granted, I'm still using 3.5). Possibly this is because I tend to use less popular languages (work is all Elixir/Erlang). But even trying to do some Python/PyTorch work, I found it constantly gave answers that didn't actually work.
However, I have found it really great for explaining topics. It can give pretty good metaphors and you can have it explain its answers. I've also found it really helpful in writing. I think I am usually able to express my ideas clearly and organize my thoughts, but my writing style is very pedestrian. ChatGPT is able to take my outlines and fill them in with my desired style quite well.
It's like having superpowers. Even if I know how to do something, sometimes explaining it is easier than writing out all of the code. A recent example was in a TypeScript project, when a class-based approach to something was deprecated in favor of a functional approach. I only had to paste the old function signature and say "convert this to an arrow function". That alone was less typing, but after that I was able to paste the other examples and say "do the same with this", and they were all quickly and correctly converted. Was it easy to do myself? Yes. Was it faster to do myself? No.
Or, I may not know how to do something. In a toy desktop project I had in C#, I wanted an image to fade to greyscale and then fade out. That, I had no idea how to do. So I simply told it that, added "optimized for performance", and it gave me a function including a hard-coded object that instantiated an array of arrays with certain values. Where did ".3f, .59f, .11f" come from? I didn't know, and at that point I didn't care, because the whole thing worked perfectly on the first try. In this case, a project for myself, only the result mattered. I did go read the documentation later to see why that works, just out of curiosity. Plus, it explained it ... and was right.
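For the curious: .3, .59, and .11 are the classic Rec. 601 luma weights (0.299/0.587/0.114, rounded), which reflect the eye's differing sensitivity to red, green, and blue. A quick numpy sketch of the same greyscale conversion (assuming an (H, W, 3) float image array):

```python
import numpy as np

# Rec. 601 luma weights -- the same .3f/.59f/.11f the generated C# used.
LUMA = np.array([0.3, 0.59, 0.11])

def to_greyscale(rgb: np.ndarray) -> np.ndarray:
    """rgb: (H, W, 3) array of floats in [0, 1]; returns (H, W) luminance."""
    return rgb @ LUMA
```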
Obviously I review the code and am careful what I send it, but if it's going to shave minutes off my day every time I use it, this stuff adds up.
I usually state the problem, provide code context and motivations, and end by restating what I want as a result, why, and a question.
Here is a short example from my history:
---
I have the following docker command to help me back up data from some containers.
```sh (omitted for brevity) ```
I want to improve it by not only copying and zipping all the related files in the container volume, but also using pg_dump to take a copy of the database before doing the copy and zip. How might I change the above to achieve this?
---
This takes a lot longer than starting a web search, but the quality of answers is high, and I find it faster than wading through semi-related content farms, cookie dialogs, prompts for newsletters, etc.
One thing to remember is that, like StackOverflow answers, the code might not be up to date, or might have bugs, etc.; as I test, I feed issues back into it with any relevant context. I've started building a JetBrains IDE plugin around this workflow for myself, with the ability to use self-hosted models, improving it as I learn new tricks and find what workflows I prefer.
Due to the way my brain seems to work, it also keeps me from getting stuck on small distracting problems which would previously create some resistance or procrastination - causing a break in my flow of work. It essentially keeps me in a productive flow state for longer periods than I could sustain without it. Any problem that is not in my "critical path" that I want to be focused on in the moment, and that could be easily solved by an LLM, gets "outsourced" as such.
This is in addition to it replacing about 80% of my software development / devops related Google searches. Because I work across quite a wide range of disciplines, I'm often looking for quick answers to questions about some technology stack that I'm not using daily. It's perfect for that. And I have enough familiarity with what I'm working with to sense-check/QA the responses.
I believe you do need some subject-matter knowledge and experience to get the best out of LLMs, though. I think many people are verbatim copy/pasting code out and complaining when it doesn't work. I very rarely find I waste any time debugging or correcting problems, because I either spot them and correct them in real time (still saving me a lot of time regardless) or I structure my prompts in a way that avoids these problems in the first place, by breaking the request down into granular enough parts that I can pretty much predict how accurate the response will be (most of the time: very).
And in the scenarios where there is a bit of back and forth, trying different ideas and debugging in realtime - this is almost always a much faster (net) process than if I had done the same iteration myself.
As a point on usage and confidentiality, I don't use integrated coding assistants like Copilot - everything I do is sandboxed - so nothing confidential goes into the LLM. Specific details in my prompts are "anonymised" as I enter them (as in, I self-censor) - so I get the benefit of a lot of assistance from LLMs but with no sharing of any information that I would deem confidential. I plan to experiment with tighter integration into my workflow (eg. Copilot type assistance) with a private LLM instance at some point, but I'm comfortable with the balance of productivity and confidentiality at this point.
I do also have a Hammerspoon shortcut that will take the contents of my clipboard and send it directly to OpenAI's endpoint. So I can highlight a mixture of comments and/or code in my IDE, immediately send it to GPT, and have the highlighted text replaced (or appended to) by the response. This gives me contextual assistance without having a constant live feed into a proprietary LLM à la Copilot.
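The Hammerspoon shortcut itself is Lua, but the round trip is easy to picture; a rough Python equivalent of the same clipboard-to-GPT loop (pyperclip and the model name are my stand-ins, not the actual script):

```python
import pyperclip          # pip install pyperclip
from openai import OpenAI

client = OpenAI()         # reads OPENAI_API_KEY from the environment

def rewrite_clipboard() -> None:
    # Send whatever is highlighted/copied (comments and/or code) to GPT
    # and replace the clipboard contents with the response.
    prompt = pyperclip.paste()
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    pyperclip.copy(resp.choices[0].message.content)

if __name__ == "__main__":
    rewrite_clipboard()
```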
I also occasionally use it to translate languages. E.g. I’ll write something I know how to do in python and ask it to translate to JavaScript where I need something on the frontend.
Stuff like that eliminates about half of the coding time that used to go to documentation lookups. But then again, I only code 20% of my time now, so your experience might be different.
And when the documents are too big to fit in a prompt, I ask chatgpt to build a simple Python script to do it.
Additionally, AI-driven email categorization saves hours by prioritizing messages. It's about finding the right AI tools for your workflow, and there are plenty of options out there.
And last but not least, I also use TranscribeMe in order to transcribe voice notes to text: https://www.transcribeme.app/r
I just select the code I want to perform some changes on, hit a keybind, ask for what I want, and it does it. I've been so impressed with gpt-3.5-turbo-instruct that I defaulted to that instead of gpt-4.
I use it in Neovim [0], in my terminal [1], in many specialized tools (long live function calling), and through the chat UI when I brainstorm. I'm using Claude as well for some things.
[0] https://github.com/3rd/config/blob/master/home/dotfiles/nvim... [1] https://github.com/3rd/config/blob/master/home/bin/workflow/...
I also use !q for information that I trust can be extracted from top search results by an LLM.
It has an HN mode, so here is its current summary.
- ChatGPT and other AI assistants can help improve productivity by answering questions faster than searching online or asking another person. This includes explaining technical concepts, providing code samples, and helping with minor tasks.
- However, the quality of code generated by AI is sometimes inconsistent, and debugging may take as long as writing it manually. For complex tasks, AI may not be much faster than a human.
- AI is most useful for getting around poor documentation, asking "stupid questions" without bothering others, and learning new concepts through interactive "interviews."
- AI can help with one-off text transformations and formatting tasks that aren't worth writing custom scripts for.
- While AI may struggle with writing production code, it can help with boilerplate, stubs, and minor repetitive coding tasks.
- Different AI systems have varying capabilities. ChatGPT is best for interactive explanations, while Copilot is more like an "autocomplete on drugs."
- It's important to understand an AI's limitations and use good judgment about what types of tasks it will and won't handle well. Day-to-day coding is often too complex.
- AI search engines can provide code samples and quick answers to common "how do I" questions, saving time over traditional search engines.
- Summarization, translation, and documentation generation are other useful applications of AI for productivity.
- By offloading minor, non-core tasks, AI helps users focus on more creative and challenging work.
1. I've written a program that reconciles all my invoices against bank/cc transactions at the end of each quarter. My accountant otherwise has to do this by hand. It uses OpenAI's APIs to read the PDFs, parse out the invoicing party and amount(s), and, as a fallback, to parse dates when classical NLP fails (a rough sketch of the extraction step appears below).
2. I used GPT-4 to help write that tool using https://aider.chat/
3. I use Copilot to assist as well.
Originally I tried to use GPT-4 to do the reconciliation as well, but that was not successful. What worked better was getting it to write me a first cut fuzzy algorithm and then taking it from there.
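A rough sketch of the extraction step from item 1; pypdf, the model name, and the JSON keys are my stand-ins for whatever the real tool uses:

```python
import json
from pypdf import PdfReader  # pip install pypdf
from openai import OpenAI

client = OpenAI()

def parse_invoice(path: str) -> dict:
    # Pull the raw text out of the PDF...
    text = "\n".join(page.extract_text() or "" for page in PdfReader(path).pages)
    # ...then ask the model to structure it. Assumes the reply is bare JSON;
    # real code would validate and retry.
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": "Extract the invoicing party and total amount from this "
                       "invoice as JSON with keys 'party' and 'amount':\n" + text,
        }],
    )
    return json.loads(resp.choices[0].message.content)
```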
- Looked for a camera lens of a particular focal length and mount, but also within certain physical dimensions. It pointed one out successfully.
- I wanted to make a meme using Juan Joya Borja’s famous “spanish laughing guy” skit. I told it my topic and asked it to write a script for that format. It was familiar with the skit and made a hilarious script. Great! I then added the script as subtitles. I asked it for subreddits I could post it on. Success. I asked it for applicable hashtags for social media; that worked really well on TikTok before the audio got flagged.
- My building is doing HVAC repairs unsuccessfully and telling me cryptic things about the progress that I don't understand and need accountability for. I tell ChatGPT-4 verbatim what they say, and it points out the issues in what they are saying, and what frequently happens with contractors and building management. I have been able to have better conversations with them about what to fix now. And they've admitted to problems.
- It helped me do some shipping: really mundane stuff I didn't know how to do or what to order so that I wouldn't be at the post office long. Types of paper, types of stationery. Now that I knew the words, I browsed those categories on Amazon (I didn't know the words before, and search engines are always deficient then), and then I ordered an Uber Delivery from Office Depot instead. (Totally cancelling my Amazon Prime now for Uber Delivery.)
“But muh hallucinations”: I get to an answer compatible with reality far quicker, for tasks I just wouldn't have engaged with before.
the list goes on and on
I'll give it my assumptions and/or ask it to explain specific topics or pieces of code. The "people-pleasing" part means that it often tries to respond with "yes, you are correct" and re-summarizes what I've said, along with extra information I'll find useful. Because of that people-pleasing, I often will also ask it to consider scenarios where my assumptions are wrong, maybe ask it to give an example or two of those scenarios, and explain its reasoning.
So it's not a final reference, and what it gives me can be incorrect or incomplete, but I can deepen my understanding of a topic without having to constantly bother somebody else on the team to painstakingly transfer their tacit knowledge. This way, I can get more quickly to a point where I can ask them more meaningful and useful questions, instead of lacking the understanding needed to contribute to code or discussions.
"How do I $ foo in $language_or_framework?"
Stuff like "What are the top 25 things I should do as a $x-year old man living in $state in the US to maintain optimum health and happiness?"
Also for factoids, health, medical, economics, etc. questions. I think the medical stuff, in particular, is probably superior to advice from my local healthcare options. I use it for veterinary purposes as well, and back it up with medical guide lookups and Googling reputable sources.
All of this requires a certain level of intelligence, skepticism and trust-but-verifyism on my part.
PS: Yes, I understand how you're flabbergasted that asking it for code works for me when you get nonsensical results. You don't need to leave me a comment. I have no explanation for you.
That said, I can’t identify with a lot of the use cases here. They either seem to be a workaround for a non-existent language feature or introduce a lot of liability.
I’m worried that AI may be promoting poor coding practices, i.e. that the future of code is neither OOP nor functional; it is all just copy-pasta from ChatGPT, endlessly and mindlessly chained together.