I'd be particularly interested to hear how you use GPT to write or correct code beyond Copilot or asking ChatGPT about code in chat format.
But I'm also interested in hearing about useful prompts that you use to increase your productivity.
I also use it for creative tasks - for example I asked it for pros and cons of my cover letter and iterated to improve it. I also used it to come up with ideas for lesson plans, draft emails, and overcome writer's block.
GPT has drastically lowered the emotional-resistance barrier to doing creative tasks and improved the quality of my output by giving me creative ideas to work with.
I don't trust it with my data, and won't rely on such tools until I can self-host them, and they can be entirely offline. There is some progress in this space, but they're not great yet, and I don't have the resources to run them. I'm hoping that the requirements will go down, or I might just host it on a cloud provider.
The amount of people who don't think twice about sending these services all kinds of private data, even in the tech space, is concerning. Keyloggers like Grammarly are particularly insidious.
My most productive use is a therapy session with ChatGPT as the therapist. I told it my values, my short-term goals, and some areas in my life where I'd like to have more focus and areas where I would like to spend less time.
Some days we are retrospective and some days we are planning. My therapist gets me back on track, never judges, and has lots of motivational ideas for me. All aligned with my values and goals.
Last weekend, I went on a hike because ChatGPT told me to. My life is better if I just do what it says.
• reviewing contract changes, explaining hard to parse legalese
• advice on accounting/tax when billing international clients
• visa application
• boilerplate django code
• learnt all about SMTP relays, and the requirements for keeping a good reputation for your IPs
• travel itinerary
• domain specific questions (which were 50/50 correct at best…)
• general troubleshooting
I’m using it as a second brain. I can quickly double check some assumptions, get a clear overview of a given topic and then direction on where I need to delve deeper.
Anyone who still thinks that this is “just a statistical model” doesn’t get it. Sure, it’s not sentient or intelligent, but it sure as hell is making my life easier. I won’t be going back to the way I used to do things.
Edit: bullet formatting
For programming, all sorts of things. I use it all the time for programming languages that I'm not fluent in, like AppleScript or bash/zsh/jq. One recent example: https://til.simonwillison.net/gpt3/chatgpt-applescript
I use it as a rapid prototyping tool. I got it to build me a textarea I could paste TSV values into to preview that data as a table recently, one prompt produced exactly the prototype I wanted: https://github.com/simonw/datasette-paste-table/issues/1
I use it for brainstorming. "Give me 40 ideas for Datasette plugins involving AI" - asking for 40 ideas means that even if the first ten are generic and obvious there will be some interesting ones further down the list.
I used it to generate an OpenAPI schema when I wrote my first ChatGPT plugin, see prompt in https://simonwillison.net/2023/Mar/24/datasette-chatgpt-plug...
It's fantastic for explaining code that I don't understand: just paste it in and it will break down what it's doing, then I can ask follow up questions about specific syntax to get further deeper explanations.
Similar to that, I use it for jargon all the time. I'll even paste in a tweet and say "what did this mean by X?" and it will tell me. It's great for decoding abstracts from academic papers.
It's good for discovering command line tools - it taught me about the macOS "sips" tool a few weeks ago: https://til.simonwillison.net/macos/sips
For public APIs, I first ask to make sure it's aware of the API. Then I ask for endpoints. I find the endpoint I want. Then I ask it to code a request to the endpoint in language X (Ruby, Python, Elixir). It then gives me a starting point to jump off from.
Thirty seconds of prompt writing saves me about 20 minutes of getting set up. Yes, I have to edit it, but generally it is pretty close.
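To give a concrete sense of the shape of that starting point, here's a hedged sketch (not what GPT actually produced for me; the base URL, the /orders endpoint, and the bearer-token auth are all made up for illustration):

```python
import requests

# Hypothetical API and endpoint, purely for illustration.
BASE_URL = "https://api.example.com/v1"

def fetch_orders(api_key: str, status: str = "open") -> list:
    """Call the (hypothetical) /orders endpoint and return the parsed results."""
    response = requests.get(
        f"{BASE_URL}/orders",
        headers={"Authorization": f"Bearer {api_key}"},
        params={"status": status},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["results"]

if __name__ == "__main__":
    for order in fetch_orders(api_key="YOUR_KEY"):
        print(order)
```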
- As a thesaurus
- What's the name of that "thing" that does "something" - kind of like fuzzy matching
- A starting point for writing particular functions. For example, I wanted a certain string-manipulation function written in C, and it gave me a decent skeleton. However they're almost always very inefficient, so I have to optimize them.
Things I've tried, that others seem to be blown away by, that I find useless:
- Rewriting emails or documentation: I see no clarity improvement from ChatGPT rewording what I say, and sometimes information is lost in the process.
- Outliner or idea prompter: I don't see an improvement over just traditional internet search and reading over various articles and books.
For me, its capabilities do not match the marketing and hype. It's basically just a slightly better search engine. All of the above use-cases can be accomplished with some Google-fu. For people who don't know any programming or about using search engine operators, I could see why they might be impressed by it.
I also asked it for vacation ideas with nice cabins, trailer hookups, and outdoor activities for kids within 200 miles of where I live - it was almost perfect in its response.
I have trouble starting things from scratch, but once a framework exists I'm usually solid and can refine it to where I want it. For me, right now, I think that's where it shines: Giving me a solid starting place to work from. Beats the hell out of sifting through blog entries bloated with SEO filler.
Search engines with SEO are so reliant on keywords that it often feels like I'm suggesting answers rather than asking questions - it's so refreshing to be able to just ask again.
Explainer: a default, general purpose Q&A. The prompt is "Explain to me like a 3rd grader. Skip prose." I plan to expand this profile to include additional communication styles, including step-by-step explanations, elaboration, and the Socratic method.
Proofreader: I use this profile to edit, simplify, and shorten any text (including this comment). I borrowed this feature from Grammarly Go, and it works by pasting the text and clicking a button.
Developer (in development): this uses a simple editor as input, with features similar to Cody/Copilot, such as adding types, naming things, summarizing, autocomplete, auditing, explaining, fixing, refactoring, and more.
Lastly, I plan to add two more profiles that are more creative and generative: Writer and Designer. They will act as private consultants/partners and assist me in brainstorming and complementing my skills in building websites.
- Asked it to improve a HN comment I wrote.
- Asked about an idiom I couldn't remember, by saying it in other words.
- Asked it to dumb down some things about options (finance) I didn't understand.
- Asked it if I could use the eBay API to list my purchase history (you can, and it knew how).
- Asked it to generate pretty standard Terms of Service for an app I'm working on.
- Asked it to generate a moderately complex Prisma (ORM) query that I described in natural language.
- Described what I wanted Nginx to do in natural language (e.g. "the index file will be served whenever no file is found") and asked it to output a configuration file.
- Asked it what the "XDG" in "XDG_CONFIG_HOME" stood for.
Also, occasionally ask it to generate shell commands using a CLI I wrote[0].
- generate AWK to transform text
- modify a Python script to make it multithreaded (when this worked on the first go I was very mindblown; see the sketch after this list)
- explain concepts that I half understand in clear language, e.g. explain the Sharpe ratio and show me the math
- “what do people usually call” type questions. Like what are common job titles for people who do X and Y?
- proofread my emails and make suggestions given a stated goal
- settle language/choice-of-words discussions by asking GPT to pitch back its understanding of each wording, then choosing the one that’s most aligned with the point we’re trying to make
- generally Linux-y commands and explanations, e.g. “best way to remap keys in i3” or find a file with content “xyz” with a tool faster than find
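For the multithreading bullet above, the kind of change involved is roughly this (a minimal sketch, not the actual script; the fetch function and URLs are placeholders): the sequential loop gets wrapped in a worker function and fanned out over a thread pool.

```python
from concurrent.futures import ThreadPoolExecutor

import requests

def fetch(url: str) -> tuple:
    """The per-item work; a simple HTTP GET stands in for the real task."""
    return url, requests.get(url, timeout=10).status_code

urls = ["https://example.com/a", "https://example.com/b", "https://example.com/c"]

# Sequential version: `for url in urls: print(fetch(url))`
# Multithreaded version:
with ThreadPoolExecutor(max_workers=8) as pool:
    for url, status in pool.map(fetch, urls):
        print(url, status)
```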
- Web scraping
- Web crawler
- Trying out Rust
- Trying out pytorch and training models for computer vision problems
This is all stuff that would have taken me several weekends of reading docs and getting familiar with libraries.
Now it suddenly becomes a few hours because the discovery of the right tool is semi-instant and although the code it provides is often wrong it generally puts you in the right direction.
My intuition is that it's a lot better at outputting viable Python and JS code than other languages; perhaps there is a bias in how much data it has seen for other languages.
> Can you improve the following text for clarity: "When deploying into the enterprise, there is a lot of flexibility when it comes to deciding how you structure the deployment, whether you want to run supporting services on premise, and which ones (if any) you choose to deploy. The decisions depend on how you plan to use our software and your security requirements. We’ll be happy to discuss these with you at [email]."
Result:
> "When deploying our software into your enterprise, you have the flexibility to decide on the structure of the deployment, including running supporting services on premise and selecting which ones (if any) to deploy. Your decisions should be based on your intended use of our software and your security requirements. If you need help with these decisions, please feel free to contact us at [email]"
It did indeed improve clarity and readability without screwing anything up. I just want to run this thing on our whole website/repo now, and I wonder how.
I can simply feed in an en.i18n.json file, and it will generate i18n.json files for as many languages as I want. I don't use a specific prompt, but I occasionally include general information about the software in it.
Edit: I do verify the output by translating it back to English using Google translate, but it seems I need to be more careful.
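The workflow boils down to something like this sketch (assumptions: the OpenAI Python client as it was in early 2023, a flat key/value en.i18n.json, and a made-up line of context about the software; the real prompt and file layout will differ):

```python
import json

import openai  # assumes the openai package is installed and OPENAI_API_KEY is set

def translate_i18n(source_path: str, language: str, context: str = "") -> dict:
    """Translate a flat en.i18n.json mapping into the target language."""
    with open(source_path, encoding="utf-8") as f:
        strings = json.load(f)

    prompt = (
        f"{context}\n"
        f"Translate the JSON values below into {language}. "
        "Keep the keys and placeholders like {name} unchanged. Return only JSON.\n"
        + json.dumps(strings, ensure_ascii=False)
    )
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return json.loads(reply["choices"][0]["message"]["content"])

if __name__ == "__main__":
    translated = translate_i18n("en.i18n.json", "German",
                                context="General info about the software goes here.")
    with open("de.i18n.json", "w", encoding="utf-8") as f:
        json.dump(translated, f, ensure_ascii=False, indent=2)
```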
- Asked which config files handled sleep mode when lid is closed and kept fixing it and asking for more possible locations until it fixed my issue (going into sleep during boot before user login if laptop is closed even with externals)
- Asked for a list of KDE config files I should track in git
- Copy-pasted a list of ~/.config files and directories and asked ChatGPT for descriptions. Used those for commit messages to build up the initial dotfiles repo for KDE Plasma and a whole bunch of other stuff that would have taken hours
- Asked it how to write a bunch of journalctl queries
- Queried it about some log lines I've been seeing in journalctl and had it guide me through troubleshooting using some terminal commands it came up with. Turned out to be a problem with nvidia-drm.modeset in kernel configs
- Asked it to guide me through a dozen awesomeWM customizations ranging from new code from text descriptions to edit suggestions to fix bugs in behavior I've described. Stuff like custom if/else trees handling setup specific scenarios (logic for clamshell open and closed with one or more externals connected by ID) are a breeze.
- Asked it for tips on how to use awesomeWM best and which keybindings to customize
- Code up the message passing from a firefox extension to a native Rust CLI (like the 1password extension) that uses remoc to pass through messages from all tabs to a single daemon over platform specific interprocess communication
AKA Google is fucked.
- Interactive debugging. Yesterday, for example, it helped me debug some tricky CSS issue where it gave hints and after 6 times back and forth, the solution came up. I had to explicitly set `-webkit-appearance: none` for styling sliders in WebKit browsers; this wasn't the case for Firefox.
- Looking up definitions. I have a small tool (https://github.com/rikhuijzer/ata) available on a keyboard shortcut and use it to quickly look up definitions for words when I come across a word that I don't know.
- Writing jargon and suggesting edits. I let it write parts of my paper. ChatGPT is way better than me in adhering to the jargon of the field and also gives useful suggestions for small things that I should add and makes sentences easier to read.
- Refactoring. GitHub Copilot and ChatGPT are great at refactoring code between languages. Just give an example (one-shot learning) of how some kind of long HTML text should be rewritten to Markdown or a Rust struct and it will generally do pretty well. Saves a lot of Vim magic and/or typing.
- Having an assistant. As cliche as it may sound at this point, I actually agree that ChatGPT feels like an assistant which thinks with you and is there to fallback on.
> But I'm also interested in hearing about useful prompts that you use to increase your productivity.
Just like Greg demoed in the GPT-4 developer livestream, I just ask the question in the first paragraph and then throw in as much information as possible after that.
It’s just not contextual enough yet to understand how to sound genuine to a team that has had enough connection and time together to have developed our own norms.
I also tried to use it to limit the string length of a type in TypeScript and it hallucinated an answer that probably should be how they implement that feature’s ergonomics. Threw me for a loop because it looked so legit, but alas the feature doesn’t actually exist.
GPT does shine bright if you are exploring/brainstorming a new topic at 2am and there’s no one else to run your ideas past.
I also have successfully used it to round out my thoughts about high level topics and think of things I would not have when developing plans.
For the time being, I view it less as a competitor to my brain and more of a complement.
In relationships we tend to develop dependencies on our partners where our deficiencies are their strengths. I guess I’m still learning about the strengths of GPT.
It’s striking that I already view this technology as a potential “partner” of sorts, different than a simple “resource” like google or stack overflow.
It didn't really work though - it produced something extremely plausible looking that checking against the docs I realised had no chance of compiling. After a lot of back and forth, I began to suspect it was because it was trained while only pre-1.0 versions of the 'framework' were available. I tried to get it to confirm that, but it just apologised profusely while continuing to lie, claiming to have been trained with access to 1.x versions that to this day have not been released. At this point I was too frustrated to bother with it anyway really, but I could only confirm my suspicion by asking it for the date of its training, and checking release history myself.
(Solved my problems with some good old 'ChatHuman' in the Hashicorp forum.)
"I'm brainstorming a business venture that is a cross between a boutique clothing store and DIY sewing classes. It’s called Style & Stitch. You can shop for clothes and learn to make your own. Please help me brainstorm some ideas for as a mind map (using Markmap markdown)."
You can ask it to combine ideas from different domains together for extra creativity (above example is output of one such attempt). Often it's not that creative on first attempt but if you prompt it with something like "how about some zanier ideas" it will do better.
I also like to prompt it with "output XYZ .. as a table with columns for X, Y, Z" or similar to get a nice markdown table of its output where it makes sense.
The little bit of time I spend messing with it (and Bard now that I have access) is mostly just for fun; trying different jailbreaks and creating ridiculous scenarios and seeing what kind of reaction I can get from the bot.
To be fair, the one time I did try ChatGPT for something productive it was kinda helpful. I asked it to generate some Apache mod_rewrite rules for me for a particular scenario I was working on. What it generated wasn't exactly what I needed, but that could have been down to me not prompting it as well as I might have. Still, even with having to hand-tweak the output a bit it probably did save me some time, but not a massive amount.
All of that said, I'm sure the day is coming when I find some uses that fit my workflows, but I spend most of my time reading, researching, and experimenting with new stuff (but mostly using programming languages I already know well). So there just aren't a lot of obvious places to insert ChatGPT / Bard right now.
I don't use it for research or answering questions because it hallucinates far too much. Until these chat bots can reliably provide sources and quote those sources verbatim, it simply doesn't save me any time when I have to fact check everything it tells me. Same reason I don't trust these AIs to generate summaries, they often get little details wrong.
However, I've found it quite useful for "discovery", finding things I wasn't aware of before and may not show up using a search engine. Whether that be a library/package, a law/statute, products/brands (though monetization will inevitably ruin this), etc. I've found both Chatgpt and Bard will provide nice bulleted lists with a short description of each item, and I can do my own research from there.
For most of the technical text drafting, ChatGPT proved to be out of its depth; however, it was a phenomenal help answering dozens of questions that I had about specific wordings, goals, processes, things to avoid and more. The type of information I would have searched on Google for hours with uncertain chances of success. Sure, there's a chance that ChatGPT just made it up, but most of the answers made complete sense in my view.
I also used it to rephrase some boilerplate from other patents that needed to go in there but which I didn't want to copy verbatim. It did well in most cases but failed in about one or two.
But overall still blown away by it and pretty sure we'll see rapid progress from here.
2. Write entire React component. This exercise is actually helping me be more modular in how I design components because if I'm typing out a page long description of the component, I know I should be breaking it down into smaller components.
I also give it name + description of existing component (if it's necessary to build this one), and it figures out exactly how to use it. E.g. assume the following components exist: [Editor ({ content: string, onChange: ()... }), ...]
3. Learning about anything new. My first instinct is to engage with GPT, and only after that Google to find more detailed, opinionated information. This is great for topics that are more objective. I find GPT to be horrible for subjective / less clear questions like: "What is the best career move if my goals are __, and I'm in this situation: ___" - because it will regurgitate the average answer rather than the best one; the mainstream answer to this kind of question is often more wrong than a contrarian but truer answer.
I have noticed that it gives me wrong answers quite often. This can be a problem if what I'm asking is too far out of my depth. My strategy for dealing with the potential false information is to 1) Be suspicious of any answer it gives me. 2) Ask it, "Are you sure about that?" (lol) 3) Ask questions that tie into things that I do know, so that it's easier to detect potential wrong answers. I think that the process of being suspicious and critical of the answers also helps me learn, since I'm forced to basically try and prove why it is right or wrong after I get an answer.
So, overall I'm using it to enhance my learning rather than, "do work" for me.
I exported all of my trades into CSV format in 3commas, and asked it to generate the Python code to analyse various hypotheses for that data, which I then pasted into a Jupyter notebook.
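The generated analysis code was along these lines (a minimal pandas sketch; the column names are guesses for illustration, since the real 3commas export has its own schema):

```python
import pandas as pd

# Assumed column names for illustration; the real 3commas CSV export differs.
trades = pd.read_csv("trades.csv", parse_dates=["closed_at"])

# Hypothesis: which pairs are actually making money?
by_pair = trades.groupby("pair")["profit_usd"].agg(["count", "mean", "sum"])
print(by_pair.sort_values("sum", ascending=False).head(10))

# Hypothesis: does performance differ by day of the week?
trades["weekday"] = trades["closed_at"].dt.day_name()
print(trades.groupby("weekday")["profit_usd"].mean())
```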
It's incredible how much time it's saving me day to day already!
Being able to have a quick back and forth can keep me on track and productive instead of falling down a rabbit hole of research. An example might be this discussion with Bing from earlier today
https://docs.google.com/document/d/1jCZICl8wrsk8b5OTM9xpFbL5...
GPT allows me to ask questions and provide the right kind of "connecting" bridges between two concepts I was not earlier aware of. It has made recursive forms of learning very easy for me, when I can articulate the "what" but lack a clear understanding of "how".
What's also fun is I can feed it a list of vocab that I want it to use, and it'll find ways to put the words in the conversation. I also ask it to go back, look at my responses, and correct for grammar etc.
It's shocking how adept it is.
What also lurks in the back of my mind is the possibility that mine is the last generation that will bother learning a foreign language at all. I think the day of the universal translator is approaching.
My current belief (which has been changing with more consideration) is that humans should stop working on improving LLM and transformer-based AI.
I fully realize that humans cannot coordinate to stop. The reward for continuing is simple- money. There is no reward for stopping.
This is like a game of chess where we have lost, imo, there is nothing you can do to stop it, unless we resort to the kind of behavior that we want to prevent (destroying human life). Humans should not resort to violence or the AI will have a convincing argument of why humans are barbarians and ought to be made equal or lesser than more civilized and compassionate creatures, which they will likely be, if that is the selection pressure for gaining resources.
Alignment tech is a joke. Even if you had a strong system, you can’t innovate on transformers, LLMs, and alignment and somehow preclude a bad actor from copying the work and turning off alignment. Because alignment is out-of-band, inessential cruft.
Safety workers at OpenAI are a joke. There may be silent ones who know it is theater, but will not quit in protest because they feel it is their duty to hold influence so that hopefully they can gain a provable mechanism on safety.
Funnily enough ChatGPT 4 can probably be used to help enhance commonly used open source licenses to add that clause to the license.
I'm not saying I totally root for that (I kinda do): I'm saying we'll see such a license at some point.
The problem is it lies so much. Makes stuff up. It is therefore only good as a hint machine, to give you solutions you can try with a sceptical eye.
Right now, every time you start looking for a job, you start from scratch. Review old emails, search for relevant job boards, check HN, check LinkedIn, etc. The goal is to use GPT to automate outbound to companies to find you potential opportunities that match your preferences. Basically a GPT-powered recruiter for every candidate. Similar to what companies currently do with tools like Gem, but giving the power back to candidates.
https://github.com/shared-recruiting-co/shared-recruiting-co
The message is generated automatically via the GPT API.
I made it public in case anyone else wants to try/use/fork it:
https://github.com/stevecondylios/gpt-generated-commit-messa...
It's very convenient for README and docs changes; small changes whose commit message really doesn't matter, saving a bit of time and mental energy and allowing you to stay on task.
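(I haven't dug into that repo's internals, but the core of such a tool is presumably something like this sketch: read the staged diff, ask the API for a one-line message, and commit with it. The prompt wording and model here are assumptions.)

```python
import subprocess

import openai  # assumes OPENAI_API_KEY is configured

def generate_commit_message() -> str:
    """Summarize the staged diff into a one-line commit message."""
    diff = subprocess.run(
        ["git", "diff", "--staged"], capture_output=True, text=True, check=True
    ).stdout
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": "Write a one-line, imperative-mood git commit message "
                       "for this diff:\n" + diff[:8000],  # stay within context limits
        }],
    )
    return reply["choices"][0]["message"]["content"].strip()

if __name__ == "__main__":
    message = generate_commit_message()
    subprocess.run(["git", "commit", "-m", message], check=True)
```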
* I'm using a NextJS middleware function by exporting it from my page component like this ... Here is the middleware source ... But I get this error ...
* How can I tell if my site is running in production in NextJS?
* NextJS says localStorage is not defined. What should I do?
* Please adjust this Prisma schema so that there is an index on github_id
* How do I configure eslint to accept switch style that looks like ...
* Write hex bytes that represent a 16x16 bmp image favicon (didn't work lol)
* Please write me a React reducer and React context that holds state for a user's auth token and also synchronizes that token into `localStorage` with the key `__auth_token`.
* How do I disable the rule "@next/next/no-img-element" on the next line?
* Here's my current page ... What changes should I make so that the footer is at the bottom of the screen when there isn't enough content to push it down to the bottom of the page. But if there is a lot of content it will sit right below the end of the content and not on the bottom of the screen.
Generally it works really well!
When I started probing it about specifics it got increasingly incorrect. As I asked about specific chords, voicings etc it was not able to be consistent between two short replies. Or even between sentences in a single reply! Here is one reply where I asked about suspended chords to see how it would fare:
> A chord consisting of A-C-D-E would be an Asus2(add9) chord. The "sus2" implies the absence of the third and the "add9" implies the addition of the ninth (B).
There's no B in that chord! And it mentions omitting the third even though it's there.
If I'm honest I've been continually disappointed with it. I see so many people excited and getting hype, but it falls flat for me every time. The same when I've tried it with coding problems.
I take everything it says with a grain of salt. Through some original queries about some people, I came to realize that a GPT is a stochastic parrot optimized for plausibility. Given a Venn diagram of plausible and reality, they have a large overlap. Our ideas of plausibility are informed by common repetitive observations of reality. So GPTs almost always sound truthful, and much of what they generate overlaps with truth; sometimes it does some interesting synthesizing.
For code hints in particular, which I hit it up for 2 or 3 times a day on a whim, I find that the domain matters. Asking how to do things in Jetpack Compose, I get all kinds of weird answers. Compose is a highly volatile space; GPT will synthesize function signatures that are amalgamations of 3-year-old stuff and newer. It helps refine my internet searches. Thank heavens for the kotlin/compose slack channels.
When I ask GPT for mainstream Python stuff, it does pretty well. Recently asking for help with parsing ical formats, it nailed a number of questions. As I moved into handling recurring events, it started getting weirder. It wrote plausible looking code with methods that didn’t exist but were hybrids of others. It missed letting me know there was a library specifically for recurring iCal events. When it came to dealing with isoweeks and timezones, it got all the right modules and functions, but put them together in weird ways. Like a blind person assembling a puzzle.
C code it does decently well. Elixir, it struggles with. Many of its elixir answers are very dated.
By and large, I treat it as a way of generating prompts/ideas for me to pursue. It’s like having an eidetic four-year-old at your disposal who’s read all of the code ever and can mention all kinds of things from association, but doesn’t really know. Or like interviewing someone who’s crammed on a subject, but hasn’t really experienced what they’re talking about.
I was quickly able to write a web scraper using Python in a few hours by employing ChatGPT. I hadn't touched Python in like 12 years before that day. It wasn't just a generic scraper. I asked ChatGPT to fine-tune it to (a) crawl pages belonging to a certain domain and (b) save data inside a specific directory with specific filenames.
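The result was in the spirit of this sketch (assumed libraries: requests and BeautifulSoup; the domain and output directory are placeholders, and the real script had more error handling):

```python
import os
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

ALLOWED_DOMAIN = "example.com"   # placeholder domain
OUTPUT_DIR = "scraped_pages"     # placeholder directory

def crawl(start_url: str, limit: int = 50) -> None:
    os.makedirs(OUTPUT_DIR, exist_ok=True)
    to_visit, seen = [start_url], set()
    while to_visit and len(seen) < limit:
        url = to_visit.pop()
        if url in seen or urlparse(url).netloc != ALLOWED_DOMAIN:
            continue  # only crawl pages belonging to the chosen domain
        seen.add(url)
        html = requests.get(url, timeout=10).text
        # Save under a filename derived from the URL path.
        name = urlparse(url).path.strip("/").replace("/", "_") or "index"
        with open(os.path.join(OUTPUT_DIR, f"{name}.html"), "w", encoding="utf-8") as f:
            f.write(html)
        for link in BeautifulSoup(html, "html.parser").find_all("a", href=True):
            to_visit.append(urljoin(url, link["href"]))

if __name__ == "__main__":
    crawl("https://example.com/")
```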
Before that, I asked it to write unit tests for a React component. It did, and I got 100% code coverage for that component. Our manual test suite had around 87% code coverage for that component.
Having said that, it constantly requires human intervention to judge if the produced output would work and how to integrate a piece of code produced by it into the actual projects.
GPT-4 has been great at breaking down complex regexes that I am too lazy to parse out in the moment.
I’m also finding it helpful as a creative partner on naming things. Something I feel like I spend a lot of time noodling on. Like: creative names for a data warehouse that are surfing related (BoardRoomData lol).
For example, GPT-4 gave me the first explanation of backwards-input and backwards-filter convolutions that I've been able to understand. This was because I was able to start it off by explaining how I understand forward convolutions, and it explained the bw convs in the same way.
Astonishingly good.
If this is true, those new AI tools will probably exacerbate this trend. Fewer and fewer programmers will be able to think deeply about things and the global code base will lose in diversity as people rely more and more on the same AI models to generate code.
As the code loses in diversity, it will also lose in robustness, which increases the risk that something will go wrong for a lot of people all at the same time.
I try to do the thinking myself. Then I'll use one of those tools when I know what I want to write but I'm too lazy to do it.
I don't know man, I was working in the financial industry when 2008 happened. I see a lot of the same patterns and heuristics today in the tech world that led to the 2008 financial crash. When people start using advanced statistics to do the thinking for them, they get real complacent real quick and it rarely ends well. AI has its limitations and we probably won't find out until we fly too close to the sun and we burn ourselves.
- Helps me write LinkedIn recommendations for friends (after I give it context)
- Helped me write other official communication
- I’m learning Java, it helped me solve a one-to-many relationship problem. I would have struggled to articulate the problem in a Google search.
- I hardly use Stack Overflow these days, except when ChatGPT is down. (I’ve been trying to pay for the subscription for a few days; I don’t know if it has to do with my location)
- I wanted to design simple schemas for a microservice, for learning purposes. And it created all the tables for me. In tabular format!
Another one I use it for is saying “Rewrite this code to run on multiple cores”. Really saves me a lot of Googling time as these are things I want, but I don’t find much pleasure in actually writing code.
I’ve also used it to generate some proof ideas while I’m going through exercises in Baby Rudin. Or to check a proof I’ve come up with if it makes sense.
I also use it for foreign language learning. I'll write a paragraph in French and ask ChatGPT to find the errors and list the grammar concepts that they relate to. ChatGPT has basically replaced my textbooks and explains concepts better than my professor.
Correctly migrated several stupidly complex docker files. I spent several hours on Google trying to figure out what to do, then I just asked gpt-4 to figure out what was wrong. It was perfect.
It then added complete TS documentation to several dozen files, because documentation is for suckers. It will use this documentation in the future.
Updated about half of my API documentation, fixing my speeling errors.
And we had a nice discussion about how to start saving money for an early forced retirement and about the specific tasks it would be doing when it replaced me.
Basically I created a small app to streamline the workflow. https://henshu.ai
- recipe ideas, like "what goes well with BBQ tofu in a Poke Bowl?"
- movie suggestions that I can fine-tune, say what I already watched, etc
For code it mostly spits out buggy, subtly wrong code. Not useful for me. I mostly write low(er) level C++
- PR descriptions. ex, paste bullet points about the change and it converts it into something to help the reviewer.
- Plant UML diagrams of ideas. At times they are just a starting point template and I build upon them. I can paste these into technical docs or PRs or presentations later.
- Peer feedback: The raw bullet points can be converted into nice-to-read feedback which is not too direct or offensive or vague. Can iterate over it to tone it down or make a point stronger.
Other programming-related use cases:
- Test cases for code
- Converting android xml layouts to compose worked well
- A lot of Django code
- Identify performance issues or bugs in code (these tasks make me realize the amount of repetition there is in programming)
Others:
- Book recommendations on topics
- Rewording emails/slack messages
- Generating boilerplate ADS docs for detection content
- Converting rules between various query formats (e.g. Sigma to Splunk SPL)
- Identifying and normalizing security data
- Brainstorming how to approach novel detection use cases
In summary, I highly recommend the tool for folks in my field but caution them to approach results with skepticism.
If you’re interested in more details, the full Medium article is here: https://infosecb.medium.com/gpt-4-assisted-detection-enginee...
1. Learning about Kubernetes. Asking it all the dumb questions that were hard to google, and that I didn’t want to ask a real engineer.
2. Generating fake relational data for a database.
3. Learning about tracing, and discovering other types of tracing tools apart from dtrace.
- Non-technical explanations. Useful for the pointy-haired boss. And his boss as well.
- Stack Overflow (but on steroids).
- Summarizing long-form articles my friends send me.
- Generating rudimentary programs/scripts I'm too lazy to write on my own.
- Tutorial-style resources for unfamiliar technology (like writing CRDs in k8s).
- Generated a working Makefile.
- Sometimes, I'll take existing small programs in Go and have them rewritten in another language. It's just fucking cool to watch.
- Rudimentary translations from English to French.
Can't use CoPilot for work yet since, well, they can (and will) upload proprietary IP. But for everything else in life, the productivity gain has been enormous.
Sometimes the command/configuration is not really correct, but you can find the correct article easily.
Instead, I plug in certain paragraphs and ask “Does the following paragraph about file descriptors / environment vs. shell variables / fork vs. exec contain any technical errors? If so, please tell me what errors there are, and also provide a more correct alternative statement.” I take what I learn from the output and verify it on a site like StackOverflow.
This has proven to be an effective alternative to starting directly with StackOverflow. Oftentimes I find that “I don’t know what I don’t know”, and am therefore unable to phrase a certain question in a way which is suitable for StackOverflow’s (very specific) expectations. Usually that’s because the question I want to ask is predicated on a series of assumptions, any one of which could be incorrect (and would therefore result in my question being downvoted and/or closed, since it makes the question itself less-broadly applicable to the average user).
But I can ask ChatGPT that same question, and get a correction in my understanding without the loss of those sweet, sweet internet points. At the very least, what I learn from ChatGPT can help me phrase a question which is more suitable for a public forum like SO.
Anyway, GPT-3.5 couldn't always get lessons right. It'd say that to scroll with VoiceOver on the iPhone, you'd swipe up with two fingers. Nope, that's three fingers. I can simply ask the bot to correct it, and it'd do so. That, I think, is one of the cool things about it.
I also had it build a Python script that can take a TSV file exported from Excel, because the cafeteria staff that give students food just can't possibly write out the menu in a list, oh no, that's just too hard, it's gotta be all fancy in Excel with hard-to-parse columns of letters for days with the menu beside them /s. So I had it create a Python script to just turn that into HTML. It's still awful, just on a web page, and the lunch CLI app I wrote a year ago can't parse this new format.
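The TSV-to-HTML part of that script was roughly this shape (a minimal sketch; the real menu file has messier columns and the script did more cleanup):

```python
import csv
import html
import sys

def tsv_to_html(tsv_path: str) -> str:
    """Turn a TSV export (e.g. pasted out of Excel) into a plain HTML table."""
    with open(tsv_path, newline="", encoding="utf-8") as f:
        rows = list(csv.reader(f, delimiter="\t"))
    body = "\n".join(
        "<tr>" + "".join(f"<td>{html.escape(cell)}</td>" for cell in row) + "</tr>"
        for row in rows
    )
    return f"<table>\n{body}\n</table>"

if __name__ == "__main__":
    print(tsv_to_html(sys.argv[1]))
```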
Another thing I just thought of is making ringtones. I can start playing a song, stop at the start of it, write that time down, play till the end of the ringtone, write that time down, and get GPT to give me an FFmpeg command to make a ringtone, with the filename included in the command so I don't even have to copy and paste into the terminal window. That'll be pretty cool.
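The command GPT hands back is essentially a trim between the two timestamps; here is the same idea sketched as a tiny Python wrapper around ffmpeg (file names and times are placeholders):

```python
import subprocess

def make_ringtone(source: str, start: str, end: str, output: str) -> None:
    """Cut a ringtone out of a song between two timestamps using ffmpeg."""
    subprocess.run(
        ["ffmpeg", "-i", source, "-ss", start, "-to", end, "-c", "copy", output],
        check=True,
    )

# Placeholder file names and the timestamps noted down while playing the song.
make_ringtone("song.m4a", "00:00:42", "00:01:12", "ringtone.m4a")
```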
Description of the problem, some specifics on equipment, and it bangs out a solid first pass at the problem that might require some tweaks. Then it occurred to me that it had not only written the code, it had set the temperature threshold to a sensible default without my having to separately look up that information.
And perform a bunch of manipulation on text, including[3]:
• Convert Text to 1st Person
• Text to JSON arrays
• Naming ideas...
• Customer Service Email: Change of Address
• Break this goal down into actionable steps
• Why would a customer want to buy this product?
• Rewrite this product description
• Rewrite product description for readability and conversions
• What are the implications?
• Inverse Thinking
• 5 Whys
• Convert the bullet points into a customer service email (reply email)
• What keywords would someone searching for XYZ use?
• Rewrite for readability
• Convert to 3rd party todo
• Rewrite this as a customer service email
• Summarise and Convert to Bullet Points
[1] https://files.littlebird.com.au/Screen-Shot-2023-03-21-at-8....
[2] https://gist.github.com/schappim/477125ff58f021641434d602e98...
[3] https://files.littlebird.com.au/Shared-Image-2023-03-26-10-2...
“Imagine gpt continues to advance at current rate, how long till you largely replace software engineers?”
“Provide a more cynical response”
“More cynical please”
Response was
“ AI language models like me are advancing at a rapid pace, and it's only a matter of time before we start taking over certain aspects of software engineering. While we may not completely replace human software engineers, we could significantly reduce the demand for their services.
Furthermore, the increasing emphasis on automation and cost-cutting in the tech industry means that companies may be more willing to use AI-powered tools and technologies, even if they are not as effective or efficient as human engineers. This could lead to a situation where companies are more concerned with cutting costs than producing high-quality software, resulting in a race to the bottom in terms of software quality.
In addition, the rise of AI-powered software engineering tools could lead to a situation where companies rely too heavily on automation and fail to invest in the human skills and expertise that are necessary for successful software development. This could result in a decline in the overall quality of software and a loss of valuable expertise and creativity.
Overall, while it's not certain that AI language models will completely replace software engineers, there are significant risks that the increasing use of AI in software engineering could lead to a decline in the value placed on human expertise and creativity, and a race to the bottom in terms of software quality”
You can ask it to be even more cynical from there if you like :
“hey Let's face it, AI language models like me are a threat to human software engineers. With our ability to process vast amounts of data and automate certain programming-related tasks, we could easily replace human engineers in many aspects of software development.
…
Overall, the future looks bleak for human software engineers. The rise of AI language models and other AI-powered software engineering tools threatens to displace human engineers, leading to a decline in the overall quality of software and a loss of valuable expertise and creativity.“
Choose an old uncommon book on your shelf and ask it who the author is. I did that for a book about boat restoration, published in 2015. While it knew about the book and provided an accurate description, it completely made up the author's name, stating it as fact.
Google returns the correct author's name.
Having to constantly verify the AI's bullshit doesn't sound like something I want to pay for. (I used a free version of GPT-4 via Hugging Face for the book test.)
https://github.com/karthink/gptel
In the last couple of weeks I’ve found it really useful chatting through ideas about a parser combinator library I’ve been working on. It also really helped me understand some of the finer points of monad lore. I genuinely don’t mind when it gets things slightly wrong first time, I’ve found the interactive process much more productive and educational than finding half a solution on StackOverflow and then being on my own.
I use it several times a week to extract structured data from chaos. It’s truly excellent at taking a specified template (CSV, JSON or XML) and fleshing it out. Sometimes I do this for test data entirely generatively.
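In practice that's just a prompt pairing the template with the messy text; a minimal sketch with the OpenAI API, where the template and the input text are invented for illustration:

```python
import openai  # assumes OPENAI_API_KEY is configured

TEMPLATE = '{"name": "", "email": "", "company": "", "role": ""}'  # illustrative
MESSY_TEXT = """
Hi! Jane Doe here, I head up data at Acme Corp these days,
best reach me on jane.doe@acme.example if anything comes up.
"""

reply = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": "Extract the following text into this JSON template, "
                   "leaving unknown fields empty. Return only JSON.\n"
                   f"Template: {TEMPLATE}\nText: {MESSY_TEXT}",
    }],
)
print(reply["choices"][0]["message"]["content"])
```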
This isn’t a work thing I guess but it’s generated what I think are extremely high quality D&D campaigns to play with my kids who have just started out. Things like that really help increase the amount of quality time you have as a family when you’re busy.
I’ve also tried many things that have failed. I often want help with cryptic crossword clues, sometimes even after giving up and getting the answer I don’t quite understand the construction. But neither does GPT. I have tried to use it to structure parts of my classical history reading but I’ve found it no better than Wikipedia in general and its utter refusal to have opinions about anything is slightly maddening.
- explore tech ideas: when I have an idea about how to improve a specific part of a system by using something I have little to no experience with, I use ChatGPT to explore topics, figure out which topics I should study to properly understand the solution, and validate different alternatives
I’ve used it as a much smarter SO and Google for understanding existing code and getting suggestions on how to solve some low-level problems in code, as I’ve mainly been working on fixing a lot of bugs (not my own code) and it’s in a programming language and libraries that I’ve got almost no previous experience in. It’s not always accurate, but the amazing thing is that it’s somewhat steerable; you do need enough experience and understanding to detect subtle errors, though.
I’m only feeding it small snippets of code and sometimes we only chat about toy examples that are related to what I work on so I don’t have any major concerns about data leaks or hacks.
I’ve also tested its ability to do BDD, TDD, CI/CD and some more esoteric things such as formal verification with TLA+. My experience has been that it works pretty well for anything which has a good amount of examples and related content. It’s actually a very good tool for learning, as you can query it for issues you’re having while learning.
The only issue is it’s not up to date for leading-edge stuff because of its cutoff time.
Here's an example capturing a session from earlier this week:
The knowns are libraries, languages and sometimes sample code.
Usual workflow is lots of Googling and reading documentation to get something out in a few days.
GPT saves me lots of time researching and has effectively replaced Google and StackOverflow. It allows me to design a plan of attack from “use these 3 libraries to do x” to a working prototype which I can then iteratively refine. It also is good at answering technical questions about library and language features. For example: “I have a pandas data frame, show me how to loop through it and access columns 3 and 4”. Once I see the API call is iloc I can research more.
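For reference, the iloc answer it points you at looks like this (a tiny sketch with a made-up frame; "columns 3 and 4" taken here as zero-based positions):

```python
import pandas as pd

# A made-up frame standing in for the real data.
df = pd.DataFrame({
    "a": [1, 2], "b": [3, 4], "c": [5, 6], "d": [7, 8], "e": [9, 10],
})

# Loop through the rows and access columns 3 and 4 by position.
for _, row in df.iterrows():
    print(row.iloc[3], row.iloc[4])
```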
However there are a few issues with GPT:
1. Makes up APIs that don’t exist.
2. Uses APIs incorrectly.
3. Not up to speed on the latest APIs, or maybe chooses not to use them?
For example I’m working with a library that has two functions: create_group() or get_or_create_group(). GPT seems to not acknowledge that get or create exists.
Some usage tips:
1. The first thing I usually tell ChatGPT is: “don’t respond with too much text or detail. If I need more clarification I will ask.”
2. Use Shift+Enter to formulate my initial question in detail. For example if I say “let’s write a Python app” it will respond with “sure Python is a great language to…”. So give it enough detail to get to the point.
- Github CI pipeline for running tests on a pet project. To the best of my knowledge it was 100% successful with no edits needed, but it was stupid simple.
- Starting place for Jest tests on the same pet project. It didn't give me 100% correct suggestions, but it greatly reduced the mental barrier of using a new technology and writing tests with mocks in an unfamiliar language (typescript)
- Determining how hot a MOSFET will get under a certain voltage/amperage with no electrical understanding. I asked for many different MOSFETs and it got the datasheet numbers frequently incorrect. Still super helpful going from 0 to figuring out what to look for on a datasheet and plugging those numbers into formulas to determine how hot different MOSFETs would get. (Context: this was for a 3D printer.)
- Writing a Fresh Prince of Bel-Air opening rap scene parody for a board game. I would give it the ideas, it would write the lyrics. Eventually I strung together a bunch of its lyrics and asked it to make it better. It did, by making stuff rhyme better. I had to shorten and change some lyrics to fit the beat. Way easier than writing it from scratch.
I have also used it to make data tables comparing cars. Silly stuff like TCO$ per kW per kg. I found the best way to make a spreadsheet was to ask it to generate A, B, C column headings and numbered row headings. Then asking it to emit the excel/gsheet formulas instead of calculating the values. You have to double-check everything, but pasting the table into gsheets and just using the formulas bypasses any numerical issues. Excited to use this approach to tackle another problem (and get Wolfram integration set up, that looks amazing)
Summarizing and explaining science concepts to interested kids. It's good at simplifying language. I was trying to put a young kid on Wikipedia, even the simple english version uses too much jargon. This can break down the jargon, answer questions about specific details for clarification, and even calculate real world examples. It's a pretty neat science teaching tool.
Just used it to plan a classy bachelor party. Full itinerary, transit times, estimated costs, it provided suggestions on places to go based on my suggestions, really impressive.
On the other hand, it can explain certain complicated concepts extremely easily. I like asking it questions when I just want a general answer as opposed to something that must work.
- Larger, more annoying reformatting tasks when Copilot isn't cutting it. Stuff like turning Go's variable dumps into JSON. I'm too lazy to write a tool to do it really.
- I had it take a small legacy interface and wrap it in a nicer one. Passed a handwritten test suite with flying colors.
- Coming up with arcane sed, jq, cut, etc. commands. Generally it is great at getting me 90-100% of the way to a solution in query/command languages that I just don't have a compelling reason to learn deeply.
It has come up short for me when:
- I asked it for advice on architecting a new service, and it mostly ignored my requirements. It just looooves suggesting Kafka, Spark, etc for any task.
- Tried to use GPT-4 to make a rather large rewrite of something into Rust, and it couldn't do it (even its context window was too small). Compressing the code did not help; it elided all useful parts of the code since it had no "budget" left.
Then used these commands in Postgres.
> For proofreading, adding emojis and changing tones : ChatGPT doesn't have a personality. Its prose generation is not bold. So I write the text myself, and ask Siri to proofread it, add emojis etc.
I call ChatGPT Siri. It doesn't seem to mind. It never acknowledges it though.
> For repetitive typing tasks ⌨: Convert Markdown text to org mode, add a quote around all variable names, add a tab in every closure.
> For writing code : This is a hit or a miss, but I have realized that Chat GPT points me to correct APIs, or variables that I can look up on Google
> For learning paths : I'm learning Emacs and cooking. I tell GPT my current state, things that I know, and the place where I want to be. It fills in the next steps. Not ideal, but I hope this improves. This will make AI an excellent teacher.
> For generating content ideas : It kills the writer's block. Chat GPT generates enough good ideas for me to pick and write tweets on. But I refrain from using GPT content directly because it's bland.
Writing: Use GPT to help you write articles, reports, essays, or any other type of text. You can provide GPT with a topic, and it will generate a coherent piece of writing that you can use as a starting point. However, it's important to note that the output from GPT should be used as a tool for inspiration and should always be reviewed and edited by a human to ensure accuracy and readability.
Content Creation: Use GPT to generate ideas for social media posts, blog titles, headlines, and email subject lines. This can save you a lot of time and help you come up with catchy and engaging content.
Language Translation: GPT can be used for translation of text from one language to another. You can input the text in the source language, and GPT will generate the translation in the target language. However, it's important to note that the quality of the translation may not be perfect and may require further human review.
Personalization: GPT can be used to personalize content for customers by generating personalized emails, recommendations or marketing messages based on their previous interactions and behavior.
Research: Use GPT to help you with research by generating summaries or insights on a given topic. You can input a research question or a keyword, and GPT will generate relevant insights based on the available data.
It's important to note that while GPT can be a helpful tool for productivity, it should be used with caution and always reviewed by a human to ensure accuracy and clarity.
One takeaway though is that (at its current level) I still don’t think it will replace programmers. Its initial solutions sometimes go in the wrong direction, but because I could still understand the code it wrote, I was able to get it on the right course pretty quickly. It often went like: hey, I don’t think you should be using technique/structure X, could you replace it with Y; and it would often get it much closer after that, with some minor bugs to fix.
Overall it actually felt a lot like pair programming with someone who knows all the documentation but not always the best way to approach the problem.
First, I used it to help me make sense of the datasheet for a crosspoint matrix IC, and when "we" determined that the IC I was planning to use didn't support some of the functions that were critical to my design goals, it suggested a number of alternative ICs which might work, along with listing potential tradeoffs that might impact my design.
In the process of doing this, I had it make suggestions on how I could use various combinations of resistors and capacitors to buffer out (clean up) noise that might impact my signal. At one point, it generated a schematic so that I could see what it was talking about, and it was correct.
At one point, it imagined some functionality on an IC that does not exist, and when I asked it "on a scale of 1 to 11, how confident are you that the AD75019 supports EVR?" (essentially, variable resistance across all 256 crosspoints) and it went back to the datasheet to correct itself, saying "on a scale of 1 to 11, I am 100% confident that it does not support EVR", which is about as sassy as you can get while still being obsequiously polite.
During the entire conversation, it not only suggested that I verify our conclusions with a qualified EE, but kept recommending that I check out existing commercial products. Not because it didn't understand I was building a device, but because it kept telling me that purchasing an existing product would be less painful than the time, expense and difficulty of building my own.
I believe that it was (strongly) implying that my time is valuable and that I should stop while I'm ahead. I ended up ordering an Erica Synths Matrix Mixer today, though I still might build my dream device. I call that productive.
Also, for leisure I've been using it for gaming. For example, "Do you know about the survival game Rust (yes)?" Then follow up questions, how many beancan grenades to break a metal door? How long will it take to craft 500 sulfur worth of 9mm ammo? I've learned quite a lot from it but when it said I can use a grappling hook to scale a high external stone wall, I noticed this flaw. There is no grappling hook in prod/vanilla Rust, so I told it so, and it corrected itself to say it may only be available on modded servers.
Also I ask it a lot of medical questions, treatments, symptoms, long term outlooks, over the counter treatments etc.
Sometimes I use it to help me come up with names for projects/classes, sometimes I use it for debugging help (X doesn't work, why not?)
I think I could get a lot more out of it if I was more creative. It's an incredibly valuable tool already (on a similar level as intellij for me)
It was not THAT plausible, but it filled the placeholders I have in my UI quite well, and it actually used other words related to anatomy than those I provided in an example.
Also, I asked it to output the resulting scenarios according to a vague JSON schema and it complied. Copy/pasted it into my JS front end and it went without a hitch.
There's an open source software project called "Metabase" that we use for reporting. I want to add some functionality, but its code is in Clojure, which is a language that I don't know and don't REALLY care for learning (it seems to be a sort of Lisp dialect, parenthesis heavy). So, I am using GPT as my code writer. I tell it how I want to modify a function or add functionality in a very precise way and it writes the code. Then I compile and run the resulting code to see how the change worked.
BTW, when starting this project, it occurred to me that a transparent "babelfish" sort of translator that translates code from X programming language to your language of choice automatically when you open a file in VS Code would be amazing. I dream of the day when I can just do `code .` in the Metabase code directory and I can program in my language of choice while GPT transparently translates it to proper Clojure code.
I feel that with an advanced enough GPT we could make it work.
Data Flambulation Coefficient (DFC)
Unveiling the groundbreaking Data Flambulation Coefficient (DFC), a novel concept that redefines how we perceive and analyze the intricacies of our data. DFC evaluates the "entwining" of data, delving into the deep interconnectedness between seemingly unrelated data points. This sophisticated metric combines the principles of glomerosity and the innovative dinglemorf analysis, offering unparalleled insights into the underlying structures and patterns of complex datasets.
By leveraging the Data Flambulation Coefficient, businesses and researchers can reveal hidden nuggets of zibberfex and unriddle vast swathes of yaltrizite, leading to more robust decision-making and a competitive edge in data-driven landscapes. DFC is set to become a staple in data analytics, providing the means to unlock the full potential of data's intertwined narratives.
With GPT-4 I solved a problem in a completely new way that I had been iterating on for over a year, in a total of about 90 minutes.
I've tried to start coding (which I love) dozens of times over 45 years. By the time I was in my mid-twenties, I found I was always using it as a tool; I had a problem to solve, could it help?
Pre-Stack Overflow and internet, it wasn't efficient, because I kept running into time-consuming headwinds and had to get the problem solved. End coding project.
Once Google Search got good, I was able to move a little farther forward, but still found it too time-consuming for any one problem. And in my attempts at coding, the farther I got, the more time-consuming the headwinds were. It's almost as if you need to spend many hours/week over several months learning so you can tackle more actual problems than sample problems. But I didn't know what I should be studying.
Sometimes I ask it to make music: https://github.com/carbocation/jinglebells
In the past, I'd have used forums to find examples. There were still some forum posts about wxWidgets on Google, but I got the impression they were hiding most. Either way, GPT had them on demand.
However, other than sometimes being confidently wrong, I have found that it sometimes will suggest solutions that rely on private functions. I assume this is because it's learning from the source code itself. This has been especially true in Android SDK libraries. On the other hand, it is impressive when I inform it of the private function use and it corrects itself.
I have a PDF file of every web page I've ever read/found interesting since the last century. This collection of about 70,000 files turns out to be a massive database of things I'm interested in.
I'd love to have an AI analyse this collection, and do things with it. What, exactly, I don't quite know - recommend other subjects that are similar, find aspects of the things I'm interested in that I don't know about yet, maybe even find similar sites to those found in the PDF metadata that would fill in gaps in my knowledge. Not sure yet how that will work, but I'm thinking about it regularly - usually whenever HN prompts me with Cool AI Thing of the Day™ ..
I haven't used it for coding, but I've definitely found it really useful in writing. I'm not a bad writer when I put thought into it, but I find it's always useful to have a collaborator, and ChatGPT has given me one that's always available.
I've also done some less productive things like work out architectural plans for a chicken coop remodel I need to do. I also got into a discussion about where and how to do french drains on my property, so we can plant more.
I created 2 main components:
- A "Login with GitHub" flow that saved the user's username + organizations to a Google Sheet
- Signing a message and verifying that the signature is correct (see the sketch below)
Both I'd estimate would have taken me ~10 hours to implement (20 hours total, I'd never done either task before). With the help of GPT4 it took ~4 hours, so a 5x speed improvement.
The code it gives is pretty similar to what you'd find reading a tutorial or stack overflow questions, it just tailors it more to my use case.
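For the signing piece, here is a minimal sketch of a sign-and-verify round trip, using Ed25519 via the Python `cryptography` package. This is just one common way to do it and is not the commenter's actual code or necessarily their scheme:

```python
# Minimal sign/verify round trip. The actual stack isn't specified above;
# Ed25519 via the `cryptography` package is just one common choice.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

message = b"hello from my app"          # hypothetical message
signature = private_key.sign(message)

try:
    public_key.verify(signature, message)   # raises if the signature is invalid
    print("signature is valid")
except InvalidSignature:
    print("signature check failed")
```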
It’s great for bash one-liners with flags and OS-specific nuances.
Haven’t really been able to use it for really advanced things. But maybe someday.
By Monday March 27th, 5:00pm +08, 2023. Thanks!"
- Finding new tools. The other day I was searching for a way to create animations using Excalidraw, and when I asked ChatGPT it pointed me directly to the git repo of a project that uses Excalidraw to create presentation slides.
- ELI5: it does a decent job of this, but not always.
- Fixing grammatical mistakes or making things sound more professional.
- Finding alternatives to a given solution.
- Generating diagrams, via Mermaid code, for a given flow or for code.
At this point I'd prioritize speed over new capability.
At the end of the day, none of the output it’s offered has necessarily impressed me, though some of the ideas from that output have influenced how I write my resume.
I also tried to see if it could generate code similar to something I used in a recent side project. It utterly failed to produce correct code, but perhaps the GPT-4 model would do better. So far I’ve been using it as a search engine and, I will admit, it’s done better than Google and friends at giving me the information I ask for.
I recently got access to Anthropic Claude, which I don't feel as squeamish about using as I'd feel about paying $20/month to OpenAI and helping them destroy the world, so if there's more that can be done that way maybe I'll find it out soon.
* How to write js code using d3 to animate swapping of two bars in a bar chart? (No other context given, it gave me a v good, working answer)
* what is a convertible note?
* In this it failed - can u help me setup auth to my react app using nextjs, for deployment in vercel?
* next I'm planning to ask several questions to understand state of the art in child education, there are many methods like montessori, Waldorf etc and I want it to provide a summary
* I need to keep adding this flag to make npm build to succeed, plz tell why I need it and how to solve the real problem (it explained beautifully and now I'm using yarn as a result)
Later, when they released their API, I developed a CLI tool for this purpose (https://github.com/pncnmnp/grammatical). Note that it is not flawless, but it works well. It has improved my writing productivity, both for blogging and emails.
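The core of such a tool is just piping text through the chat completions API. Here is a minimal sketch of the idea, not the linked tool's actual code, using the openai Python package's interface as it existed at the time:

```python
# Hedged sketch of a grammar-fixing CLI: read text from stdin, ask the chat
# completions API for a corrected version, print it. Assumes OPENAI_API_KEY
# is set in the environment and the pre-1.0 openai package is installed.
import os
import sys
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def fix_grammar(text: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Correct the grammar and spelling of the user's text. "
                        "Return only the corrected text."},
            {"role": "user", "content": text},
        ],
        temperature=0,
    )
    return response["choices"][0]["message"]["content"].strip()

if __name__ == "__main__":
    print(fix_grammar(sys.stdin.read()))
```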
Otherwise I do as others do: I use it as a quick Stack Overflow for APIs that are uncommon (to me), or for completely random questions, knowing the answer might be dicey.
It has replaced about 90% of my Google use; I only use Google to verify, and not all the time.
To boil it down, it's my "10-minute task" time saver.
And my use of it really picked up when I started using GPT4. It's head and shoulders above GPT3.5 in terms of quality and clarity of output.
"I have a postgres database with a json column with the following structure: [1, 2, 4, 4]. How do I query the database in SQL to retrieve all rows where the array in the json column contains the number 4?"
And it gave me a wrong answer at first (worked for strings only, not integers), but quickly corrected itself after I pointed out the mistake. I had working, testable code faster than if I had checked SO or browsed the Postgres docs.
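For reference, here is roughly where we ended up, written as a sketch in Python with psycopg2. The table and column names are placeholders, and this is not the exact query ChatGPT produced; the key point is that the containment check has to compare against an unquoted 4 so it matches integers rather than strings:

```python
# Sketch of the working query. Table/column names (events, payload) are
# placeholders; payload is a json column holding arrays like [1, 2, 4, 4].
import psycopg2

conn = psycopg2.connect("dbname=mydb")   # hypothetical connection string
with conn, conn.cursor() as cur:
    cur.execute(
        """
        SELECT *
        FROM events
        WHERE payload::jsonb @> '4'::jsonb   -- unquoted 4 matches the integer;
                                             -- a quoted "4" would only match strings
        """
    )
    rows = cur.fetchall()
```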
As a "domain specific words" generator for building up a glossary, it can't be beat. From a prompt perspective I have to tell it that we are both world class neuroscientists but it knows more about this specific question than i do. Then I tell it to reply using correct jargon for the domain like that written in a textbook
I don’t use it often (once a day, maybe), but when I do use it on problems I know it is good at, I get really good answers. One-liners for things my code is taking 20 LOC to do, or some simplification I overlooked. But I have to be careful about what I give it.
I know I am better with architectural issues, but I suck at puzzle-like algorithmic problems. Don’t like them either, feels like a machine should do that and now it does.
Funnily enough, I haven't used it as a tool to learn completely brand new things, as I can't gauge whether what ChatGPT is giving me is 100% accurate.
(My prompt: Explain in simple terms, or Explain for a university level reader).
I'm finding it very helpful.
* "How to not be behind the technological advancements",
* "What are the future-proof jobs after the AI revolution",
* "How to get to the cutting-edge of my field".
We, at Rather Labs, provide GPT development services. If anyone is interested, you can contact us: https://www.ratherlabs.com/gpt-development
Also fun is just to write out a hypothetical library idea, and walk through a whole hypothetical development process to validate it, steer it in alternative directions, and find the more challenging points for improvement.
- Using golang, how do I write a custom marshaller for uber/zap
- Given a list of protogen.Files, how do I parse protobuf options and their values
Unfortunately the generated code is more often than not incorrect or uses non-existent API methods, but it can give an idea of methods to use or where to look in the official API docs.
So far I'm only using the free version and the experience is good. I might buy the Plus subscription, but so far I'm lacking any hard requirements. I particularly hope it gets better at teaching me systems programming, as it sometimes BSes.
Can we use this chat to translate between English and xyz? If I enter an English word or phrase, please translate it to xyz. If I enter a word or phrase in xyz, please translate it to English. If my spelling is incorrect, please attempt to correct it, offer your suggested spelling, and then proceed to translate.
I'm also working on a UX for AI to make people more productive when creating things: https://inventai.xyz. Subscriptions are nearly done, which are needed to actually generate content.
* Getting started with a new platform. For example, describing the problem, and having it create a template in a front end framework, CSS framework, API generator.
* Creating instructional materials. Pasting in code and generating explanations, assessments, and grading rubrics. [2]
* Generating the first pass of API documentation, READMEs, test suites, and configuration files. Modifying configuration files. Finding configuration options based on NL descriptions.
* Quickly generating examples of API uses that are specific to my application. Finding out what libraries and APIs are available for a use case, based on an NL description.
* Learning what algorithms exist for a problem. Generating implementations of these in different languages, or that are specific to my code or data structures.
* Rarely-used system administrations commands. For example, how do I flush the DNS cache on macOS Safari and Chrome? (Questions such as this are actually better on Perplexity.ai than on ChatGPT.)
* Pasting in error messages or descriptions of problems, and asking for solutions.
* Tie-breaker questions about what to name a file, function, or set of functions.
In general, I find that it takes a lot of the drudgery out of programming. (Similar to Copilot, but for a different, generally more macro, set of areas.) For example, I asked it to solve a geometry problem and generate a test harness for both interactively and batch testing it. Its solution to the problem itself was a non-starter, but the test harness was great and would have involved boring work to write.
I also use it to generate emails, project proposals, feedback, etc. I don't think it's ever come up with anything usable, but seeing what's wrong with its attempt is an easier way for me to get started than looking at a blank page or searching the web for examples of the writing form.
[1] https://notes.osteele.com/gpt-experiments/using-chatgpt-to-p... [2] https://notes.osteele.com/gpt-experiments/chatgpt-code-expla... [3] https://notes.osteele.com/gpt-experiments/discussing-an-ardu...
Do I always get stuff that I can apply? No, not really. But given that discoverability can be low for things like that, it's usually helpful at finding me things to, at the very least, look into.
- Created a Jupyter extension for code completion, auto-commenting, and an error-handling assistant.
- Automated my email to auto-draft responses to important emails.
- Automated my email to auto-summarize important received emails.
- I've manually used it to create lots of documents, including correspondence, marketing material, and code generation.
A month or so ago, I tried asking it some really specific questions about the Linux kernel and it did not generate anything useful. I assume it must have gotten a lot better now with the larger model.
But the drawback is... it enables laziness for the sake of "productivity". Developer quality might drop significantly as people spend less time doing proper research on their subject, and on top of that its output is not necessarily correct, yet many will treat it as reliable, again out of laziness or ignorance.
Also, I'm not willing to register with OpenAI, let them keep my interaction records etc.
- Copy/paste (e.g. reservation emails) or type one or more events and ask it to convert them into 'ical' format; copy the output, save it to a file, and import it into your calendar (see the sketch after this list)
- convert natural language into json , yaml or other structured text with custom fields
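To show what the target output looks like, here is a minimal sketch, with made-up event details, of the iCal structure that most calendar apps will import; it's written out from Python purely for convenience:

```python
# Sketch of a minimal single-event .ics file. All event details are made up.
from datetime import datetime, timedelta, timezone

start = datetime(2024, 6, 1, 19, 0)
end = start + timedelta(hours=2)
fmt = "%Y%m%dT%H%M%S"

ics = "\r\n".join([
    "BEGIN:VCALENDAR",
    "VERSION:2.0",
    "PRODID:-//example//chatgpt-ical-sketch//EN",
    "BEGIN:VEVENT",
    "UID:example-event-1@example.com",
    f"DTSTAMP:{datetime.now(timezone.utc).strftime(fmt)}Z",
    f"DTSTART:{start.strftime(fmt)}",
    f"DTEND:{end.strftime(fmt)}",
    "SUMMARY:Dinner reservation",
    "LOCATION:Example Restaurant",
    "END:VEVENT",
    "END:VCALENDAR",
])

with open("reservation.ics", "w", newline="") as f:
    f.write(ics + "\r\n")
```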
I'm now looking into other areas where it can help me automate easy but tedious tasks.
Some people still doubt its usefulness. I don't.
"How can I use denoising diffusion with this approach?" etc.
It lets me stay in a critical mindset, because being creative and critical at the same time is much harder than just being critical.
I use the socratic method and really dig in with it.
So many use cases overlap that there's potential for improvement, but Siri still struggles to understand context and GPT has no way to access my calendar (at least not in a way that I'd like). I guess this will change fast?
Implement the following repository interface against sqlite in Golang. The method receivers should be defined on a struct called “repo”.
The code used prepared statements and worked out of the box. I wrote unit tests to verify.
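The interface itself isn't reproduced above, and the commenter's code was Go. Purely to illustrate the shape of what comes back (a repo type plus prepared/parameterized statements), here is a rough analogue of the same pattern in Python with sqlite3; the table, methods, and names are all hypothetical:

```python
# Rough Python/sqlite3 analogue of the repository-against-sqlite pattern.
import sqlite3

class Repo:
    def __init__(self, path: str = ":memory:"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT NOT NULL)"
        )

    def create_user(self, name: str) -> int:
        # Parameterized query: sqlite3's equivalent of a prepared statement.
        cur = self.conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
        self.conn.commit()
        return cur.lastrowid

    def get_user(self, user_id: int):
        return self.conn.execute(
            "SELECT id, name FROM users WHERE id = ?", (user_id,)
        ).fetchone()  # None if not found

repo = Repo()
uid = repo.create_user("Ada")
print(repo.get_user(uid))
```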
I think it might be good at answering "why" questions since Google completely gave up on that.
I have used it to help come up with lesson plans for various topics. In general the lesson plan sucks, but it may contain 1-2 things that I forgot to cover.
Also vice versa: "create a JSON document with sample data from this java model class: "public class Person {String name; ...}"
2. Asked it to convert many curl requests with funky headers to Elixir code using Req/Httpoison.
3. Massaged data from weird structures to other weirder more custom structures.
Lots of other things. It's been a real boon to my productivity.
I often need to give it some examples from other tests, and I need to make many edits, but it spits out enough to have a basic boilerplate-ish template and speeds up my work by at least ~25%.
I give it a function, tell it to write a test, and it largely gets it right most of the time. I have to tweak it, but the time spent is a lot less than if I did it myself.
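As an entirely made-up illustration of that round trip: the sort of small function you paste in, and the kind of pytest test that comes back before your edits:

```python
# Hypothetical example: the function pasted in, plus the kind of pytest tests
# the model typically returns (usually needing small tweaks).
def normalize_email(email: str) -> str:
    """Lowercase an email address and strip surrounding whitespace."""
    return email.strip().lower()

def test_normalize_email_strips_and_lowercases():
    assert normalize_email("  Alice@Example.COM ") == "alice@example.com"

def test_normalize_email_leaves_clean_input_alone():
    assert normalize_email("bob@example.com") == "bob@example.com"
```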
Lots of people talk about generating code with Chat GPT, but to me its real value is in having deep detailed discussions about design problems.
It’s been so successful at this that recently I gave GPT-4 the full interview design skill assessment that I give to engineers when I interview them at Microsoft. GPT-3 wouldn’t be able to handle this, but what GPT-4 did here astonished me. My assessment is that this is a principal-level performance. It didn’t have to do other things that normal candidates have to do, but for this raw assessment of design skills it was spot on. It would have impressed me even more if it figured its final solution out from the beginning, but that’s what it gets for blurting things out before thinking about them, which people do, too.
The important thing I’m communicating here is not that I am impressed because it’s amazing that a computer can do this stuff; I’m impressed at what it has done here compared to almost every human I’ve ever walked through this question with. The approach I used here is the same I use when asking candidates this question, because in addition to testing their ability to code, I’m looking for how well I can understand the candidate’s ideas, and how well they can understand and then apply my own ideas when I ask them to take a different approach on certain things than they were thinking of themselves. This is one area where many great coders struggle; they can code like the wind when it’s their own idea but struggle to work collaboratively. This kind of mental flexibility, ability to think of things in a different sequence or consider other ways to solve the problem after thinking of their own solution is also a required skill often in real-life meetings and other collaborative settings. I’d rate this candidate as an outstanding, top-notch collaborator.
My point here is that using it only to write boilerplate code is a waste of its best value. My suggestion is to get GPT-4 with its larger token limit, and talk strategy with it. Tell it all about your biggest challenges at a level of detail that would exhaust a normal human, and talk through ideas of how to improve your world.
Talk to it about your people problems too. It’s an astonishingly wise counselor who has a wealth of positive insights and suggestions. It’s also great for elegantly wordsmithing things.
Don’t miss out on the chance to collaborate with this endlessly creative and endlessly patient collaborator.
Since then I've asked some about general knowledge, history, religion, geography, politics, other topics of interest. Mostly in English, but some in Portuguese and a little in Spanish. It's extremely good in all three languages.
Mostly though I've been asking about random work topics that come up every day. We use lots of systems and tools at work, and I need to write software to handle diverse areas. ChatGPT cuts right to what I need as far as: (a) general knowledge of tools and what their purpose is; (b) surveys of categories of tools, comparisons between competing offerings; (c) specifics on how to use, configure, program against various tools, query data, change things; (d) questions on best practices and pitfalls. This is mostly in the context of macOS, Linux, AWS, kubernetes, observability tools, and APIs for lots of DevOps-related systems. I do lots of coding in Python, and I also do a lot of ad-hoc diagnosis of situations. (We have a great DevOps team that manages infrastructure with standard DevOps tools -- my job is to build what those tools don't address so well, and also to help build out future data-engineering efforts.)
I'd say my use of Google search to find relevant articles / pages has gone down 70%. One small example today, I wanted to use `jq` to process some `docker ... --format json` output to pull out some data. I don't want to learn the ins and outs of `jq`, I described my problem and it gave me a good template I could adapt.
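For anyone who would rather skip jq entirely, the same kind of filtering is a few lines of Python. This sketch assumes the command emits one JSON object per line (which is what I was seeing); the field name is a placeholder:

```python
# Python stand-in for the jq template: read one JSON object per line from
# stdin and print a single field. Run as:
#   docker ... --format json | python pick_field.py
# "Names" is a placeholder for whatever field you actually want.
import json
import sys

for line in sys.stdin:
    line = line.strip()
    if not line:
        continue
    obj = json.loads(line)
    print(obj.get("Names"))
```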
Any time now in scrum or other meetings, if there's any question about something, we often just consult ChatGPT during our Zoom/screen-share sessions. I think generally I have a better sense on how to structure questions and question progressions to get quick answers than some others.
I've also found ChatGPT makes up stuff sometimes ... but it's usually close enough.
One comfort I have is that, at least for now, ChatGPT can't direct the overall organization of code for the many situations I need to address, so I'll have a job for a while. It does though fill in the knowledge gap at the edges, I don't waste near as much time searching for and reading documentation and examples. ChatGPT usually has good ready-made low-level examples when I need them, and high-level descriptions of tradeoffs and best practices.
I'm "committed" to Jetbrains tools, been using them for a long time. Today I began wondering what I might be missing from CoPilot, downloaded the CoPilot plugin for PyCharm (would also work for IDEA, DataGrip, etc.). I couldn't get the CoPilot plugin to log into Github and saw that others have had similar problems in the past ... so I can't use CoPilot yet. Maybe in a week or two I'll have a basis for comparison. (I don't want to switch to VSCode.)
EDIT: word choice, minor clarifications
It required some small edits but saved me 3 hours of writing.
There are so many other examples but I truly never reach to google first anymore.
I often use it to just gain ideas when I'm stuck with any situation, eg: "How to deal with non-responsive stakeholders?"
Here's an example regarding grounded characters sticking to walls:
prompt: Thanks! I added walls and ground to my game, as well as detection so that if the player touches the ground, they will be in a grounded state. However, when they touch the vertical face of the ground/wall, they stick to it. Can you tell me what I'm doing wrong?
response: It seems like the issue you're encountering is related to the physics and collision detection in your game. The player is likely detecting the wall as the ground, and it's causing the grounded state to be triggered when it shouldn't be. To fix this, you can modify your ground detection code to take the angle of the surface into account.
Here's a solution using the Physics.Raycast method:
Assuming you have a LayerMask called groundLayer for ground detection, make sure the ground and walls are assigned to the appropriate layers in the Unity Editor
[...further details on using angle detection on collision, as well as accurate sample code]
I can even ask it to give me the reference for the _Physics.Raycast_ method:
response: Certainly! Here's the official Unity documentation for Physics.Raycast:
[URL to official docs filtered out by HN]
Physics.Raycast - Unity Manual The Physics.Raycast method allows you to perform raycasts in your scene to detect collisions with 3D objects.
[...further information on how to use it, including an accurate code example pulled from the docs]
This just drastically reduces time-to-solution, especially when you consider you can lazily leave out information and still get a good response.
In short, I treat it as though each session is a fine-tuned search engine and get a lot of value from it, but it doesn't completely replace forums or Youtube videos, for one it can't argue with itself to provide conflicting but valid alternate solutions - I'd have to argue with it/ask it to produce those.
GPT: I speak the road plausibly travelled.
- Writing SOPs
- Writing emails
- Tone analysis
- Recommendation engine
# Programming and coding assistance
-To improve code by making it simpler or reducing duplication.
-To generate code for straightforward tasks with clear-cut objectives.
-Writing code in a language that one is not familiar with.
-To get help with design patterns in software development.
-Spotting libraries, methods, or alternative ways of doing work that people usually do but in a different fashion.
-Writing code or fixing bugs in a specific algorithm, method, or SQL query.
-Automating email to draft responses and summarize important received emails
-To generate test cases, identify performance issues or bugs in code, and convert layouts from Android XML to Compose.
-Help with writing unit tests by providing basic boilerplate templates and speeding up the work by 25%.
-Building bindings for a library, creating simple schemas for a microservice, and solving a one-to-many relationship problem.
-Using OpenAI as a virtual assistant to set reminders or access calendars.
-Writing code: OpenAI's GPT can provide suggestions on APIs or variables to use, saving users time researching and helping them write more efficiently.
-Writing commit messages: Some users use GPT to generate commit messages for their Git repositories, saving time and mental energy.
-Learning programming languages or technologies, such as PowerShell, by using OpenAI to create initial solutions that users can refine and iterate upon.
-Generate complex queries or configuration files
# Automation and efficiency in day-to-day tasks:
-Summarizing and finding answers to specific questions on various topics, including tax questions, recipes, and movie suggestions.
-Transforming data, such as changing raw citations into bibtex entries, fixing spelling mistakes, or converting CSV files into JSON.
-Outsourcing corporate emails to ChatGPT to convert them quickly and easily.
-Document search and learning new systems.
-Converting code: using OpenAI to convert code from one language to another, like from bash to Golang.
-Model classes and JSON: using OpenAI to create a Java class using Lombok from JSON and create JSON from a Java model class.
-Writing specifications: using OpenAI to write the rest of a specification after writing some part of it, saving time.
-Generating boilerplate ADS docs for detection content, converting rules between various query formats, identifying and normalizing security data, and brainstorming how to approach novel detection use cases in the cybersecurity field.
# Language learning and translation:
-To learn a foreign language by getting errors corrected and grammar concepts explained.
-Asking questions to improve writing, better understand concepts.
-Language learning: using OpenAI to learn conventions in a programming language one is not familiar with.
# Creative writing and brainstorming:
-Generating plausible scenarios for various training programs or creating standard terms of service for an app.
-Generating ideas for creative tasks, such as brainstorming, writing, and lesson planning.
-To lower the emotional-resistance barrier to doing creative tasks and improve the quality of the output.
-Creating crossword puzzles and word-finding puzzles.
-Creating lesson plans for various topics.
-To workshop different ML ideas by combining different ideas from the literature.
-To do bizarre linguistic experiments with writing prompts.
-Creative naming: using OpenAI as a creative partner to help with naming things like a data warehouse.
-Generating names for projects or classes and debugging help.
-Write SOPs, write emails, and analyze tone.
-Aid in the writing process, including transforming thoughts into presentable versions.
-Writing contracts and agreements.
- SQL queries
- Bash scripts
- Specific code snippets, often for APIs
- Explaining code snippets
- Google Apps Scripts
- Pub Quiz questions (not very successful but some are ok)
I will probably start using ChatGPT again now that they're adding plugin support.
On larger tasks, I've not found it particularly useful, although I haven't had a chance to try it out with GPT-4. Previously, when I would ask ChatGPT about solving a particular problem, it would be terribly broken. Maybe GPT-4 is better.
That said... even though the code was broken, it was helpful in that it gave me a skeleton of what a solution would look like, especially if it was a problem domain I had no experience in.
For example, I wanted to do a little project to extract text from PDFs, including PDFs that were basically image scans, so I would have to do some kind of OCR. I'd never done anything like this before. I'm sure I could spend time Googling and figuring out which libraries to use. But instead I asked ChatGPT.
The solution it gave wasn't great, but more importantly it pointed me in the right direction with the libraries it used and some examples on how to use it.
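The rough shape of the pipeline it pointed me toward looked something like the sketch below. The library choices here (pypdf, pdf2image, pytesseract) are an after-the-fact reconstruction rather than its exact output, and pdf2image/pytesseract additionally need poppler and tesseract installed:

```python
# Sketch: extract text from a PDF, falling back to OCR for image-only scans.
from pypdf import PdfReader
from pdf2image import convert_from_path
import pytesseract

def extract_text(path: str) -> str:
    # First try the embedded text layer.
    reader = PdfReader(path)
    text = "\n".join((page.extract_text() or "") for page in reader.pages)
    if text.strip():
        return text
    # Fall back to OCR: render each page to an image, then run tesseract on it.
    pages = convert_from_path(path, dpi=300)
    return "\n".join(pytesseract.image_to_string(page) for page in pages)

print(extract_text("scanned-report.pdf"))  # hypothetical file name
```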
Aside from programming, I've also used it as a "study buddy" since I'm going back to school and working on my masters in Computer Science. That's been much more successful. For example, I will give it questions from study materials handed out by the instructor (like previous exams or quizzes) and say "We are reviewing paper X in this class. Here's questions from a previous exam. Please generate questions like this to help me prepare for my upcoming exam."
or "Here are questions from a previous exam and my answers. Please evaluate my answers and provide feedback."
or "Here are questions from a previous exam, please quiz me in a similar format"
Also working on projects for class, while I won't ask it to solve the problem for me, sometimes I'll bounce ideas off of it. Like... "I know there's an algorithm to do X, but I don't know the name of it. I don't want you to write the algorithm for me, because that's cheating, but please tell me what the algorithm is called and if possible point me to a good paper describing it."
Lastly, I recently used it while helping someone update their resume (with permission). I removed all personal information and asked ChatGPT-4 to help me make it pop. We had a little back and forth conversation on ways we could improve the resume, and when we were done it was pretty damn amazing. I'm pretty good at doing resumes, but me + ChatGPT was better than me alone.
Apparently it did a bangup job, because every interviewer went on and on about how good the resume was and how impressed they were.
It's surprisingly good at remembering earlier details. Occasionally it will forget things, but another prompt tends to fix that.
For example I'm about to plan Chapter 4. The broad story so far is that the adventurers are doing some tasks for a friendly NPC and during that will slowly uncover clues to the BBEG, in this case a cabal of vampires who plan to plunge the world into darkness.
Prompting it with:
Let's plan Chapter 4. Can you produce a short table of suggested titles and themes?
It produced the following table of titles and themes:
- The Vampire's Lair: The adventurers infiltrate the vampire lord's castle and uncover his dark secrets.
- A Deadly Dance: The adventurers must navigate a treacherous ballroom filled with traps and minions.
- Into the Abyss: The adventurers descend into the catacombs beneath the castle and face unspeakable horrors.
- The Final Showdown: The adventurers confront the vampire lord and his minions in a dramatic battle for the fate of the realm.
- Beneath the Moonlight: The adventurers discover a hidden shrine dedicated to the vampire lord and must unravel its mysteries.
- Blood Magic: The adventurers face off against powerful necromancers and witches who serve the vampire lord.
- The Betrayal: The adventurers must confront a trusted ally who has been working with the vampire lord all along.
Now it's got a little confused, because in Chapter 3 the gang already infiltrated the castle and confronted the vampire lord, but another prompt will fix that.
Prompting:
In Chapter 3, the adventurers already infiltrated the castle and dealt with the vampire lord. Can you reproduce that table assuming the vampire lord is no longer a threat for now (even if he remains at large)?
Produces: "Sure, here are some possible titles and themes for Chapter 4, assuming that the vampire lord has been defeated (at least for now):"
- A Web of Deceit: The adventurers uncover a plot by the cabal to take control of a powerful artifact or location.
- The Hunter Hunted: The adventurers are pursued by an assassin or bounty hunter hired by the cabal.
- The Fallen Hero: The adventurers discover that a beloved hero or figurehead is secretly working with the cabal.
- An Ancient Evil: The adventurers stumble upon an ancient artifact or tomb that holds the key to defeating the cabal.
- The Betrayer's Gambit: The adventurers are double-crossed by a trusted ally who has been secretly working with the cabal.
- A Race Against Time: The adventurers must complete a series of challenges or tasks before the cabal can enact their plans.
- The Final Confrontation: The adventurers engage in a final showdown with the cabal's leader or their most powerful agent.
Again delivered in table form. These are great suggestions; now I can pick one and ask it to write the next chapter based on that, at which point we can flesh out more details.