I was just reading a thread about how helpful ChatGPT has been for everyone, and I’m wondering if I’m just living in a different world.
My recent projects:
- Extracting WebAssembly and obfuscated JavaScript from a web app, making it compatible with NodeJS, and then refactoring it to get rid of web workers and use async.
- Writing Java bindings for a C library and using it as a structure coordinate locator via the seed (Minecraft); the main pain point was Maven and Gradle.
- Writing an email client with Fyne in Golang to make my email more like a chat app.
Sure, they’re a bit niche, but they still feel like something OpenAI should have training data for.
- If it returns something not quite right, it typically gives me clues about what terms to use to find what I need
- Doing repetitive coding tasks - It used to be much better at these, but now, like others have said, it has gotten 'lazy' and tells you to fill in the rest
- For instance, giving it an API doc and an example method for how to integrate it, and having it write out the rest
- One-off coding tasks that are simplistic and not fun to write - "Take this format of CSV and transform it in X way, then write it out to JSON" (see the sketch after this list)
- Rubberducking - If I've been banging my head against a wall, I'll explain to GPT what I'm doing, and it gives me suggestions as a numbered list. I'll run down them and then say something like "For #1, I got X; I think it might be Z. Can you dig a bit deeper into what you meant?" I'd say it steps me into a solution about 60-70% of the time, and in the worst case it typically gets me going in a different direction
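To give a feel for the CSV-to-JSON kind of one-off, here's a minimal Node sketch of roughly what I'd ask it to produce. The file names, column names, and the grouping transform are all made up for illustration:

```javascript
// One-off transform: read a simple CSV (no quoted fields), reshape the rows,
// and write the result out as JSON. Field names here are hypothetical.
const fs = require('fs');

const csv = fs.readFileSync('input.csv', 'utf8').trim();
const [headerLine, ...rows] = csv.split('\n');
const headers = headerLine.split(',');

// Turn each CSV row into an object keyed by the header names.
const records = rows.map((line) => {
  const values = line.split(',');
  return Object.fromEntries(headers.map((h, i) => [h, values[i]]));
});

// Example transform: drop rows without a numeric "amount" and group by "category".
const grouped = {};
for (const rec of records) {
  const amount = Number(rec.amount);
  if (Number.isNaN(amount)) continue;
  (grouped[rec.category] ??= []).push({ ...rec, amount });
}

fs.writeFileSync('output.json', JSON.stringify(grouped, null, 2));
```

The appeal is that this kind of output is trivially checkable, so I don't mind handing it off.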
My suggestion, if you want to keep going down this exploration, is to treat GPT not as a co-worker but more as an encyclopedic intern. Don't give it complex, open-ended tasks. Don't make it think too much. You have to spoonfeed it what you know and what you need from it. You might be giving it too broad or complex a task for it to be useful.
I've never tested it without, but I wrap code in triple backticks to make it very clear that it is code. I even triple-quote API docs or any other external document I'm giving it.
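For example, a prompt might be shaped roughly like this (the function and the doc excerpt are invented for illustration):

````
Here's the function I'm working on:

```js
function parseRow(line) {
  return line.split(',');
}
```

And the relevant part of the API doc:

"""
parseRow(line): splits a CSV line into fields. Quoted fields containing
commas are not handled.
"""

How would you extend parseRow to handle quoted fields?
````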
I think the people who think GPT is worthless just don't want to learn how to deal with it. Personally, I think that parallels the people who refuse(d) to learn how to use Google and prefer(red) manually looking through books for the answer.
Also, just as a cover-my-ass to anyone reading this: I follow the protocol that if you wouldn't post it to StackOverflow, you shouldn't send it to GPT.
For programming tasks, I use GPT-3.5 and 4 indirectly via Cursor and GitHub Copilot (both the Chat and inline suggestion features). It's hit and miss for this, but useful _enough_ to be worth it as long as I'm prepared to be vigilant and do a lot of handholding. It's more that it gets me a decent starting point and generates decent comments and function signatures that save me some typing.
I don't use GPT a lot, but I have found it helpful when asking longer-form questions. I recently had some observations of the sun that didn't gel with my mental model, and I asked ChatGPT a long-form question; the answer was really helpful. When I prompted for raw numbers, it directed me to several websites and pieces of software.
Because my question involved multiple parameters (my location, time of year, time of day), the Google search wasn't helpful. The ChatGPT one was easy.
I don't think it's a case of "one replaces the other". I think it's a case of another tool in the tool chest. I'll use both, depending on my question.
Incidentally, think how useless Google is for a toddler's "why?", and how well, say, an audio assistant powered by ChatGPT would handle it.
It felt amazing at first, but it keeps giving answers that are almost right but are actually wrong. Overall, it just does not provide a net benefit.
A few examples:
"Do we say take an internship or do an internship?"
"How to flatten an array of arrays in JS"
"Top 15 things to see in Milan, with Google Maps links"
a.) Searching a broad domain that I'm not familiar with. It finds concepts I simply would have missed if I were using Google.
b.) Asking dumb questions I'm too embarrassed to ask a colleague, as it often feels like everyone's familiar with the idea but me.
c.) Understanding usage of math symbols. Often, authors use slightly different symbols in expressions, and that trips me up. Rather than guessing and hoping I'm interpreting symbols correctly, I ask ChatGPT. For this, I often just take a screenshot of the expression(s) I'm curious about and ask ChatGPT to explain them without providing other context. Then, if ChatGPT gets the domain and context correct, I feel confident that its explanation of the terms is reasonably trustworthy.
d.) Similar to 'c', but more deeply: clarifying content in textbooks or online courses, where I can't or don't want to wait for an instructor or peer response. There are so many frustrating occurrences in educational material where an author glosses over a concept as "being obvious" or "leaves it to the reader as an exercise to derive". Really chaps my ass - I bought the material to learn efficiently, not to spend hours pulling my hair out deriving a concept that's been omitted. So, I ask ChatGPT. I recently had an epic 30-minute exchange with ChatGPT-4 on ROC curves. There is actually an unspoken assumption that after plotting measured specificity and sensitivity values (between 0 and 1), you must artificially connect the last point to the coordinate (1,1), even though a small subset of models will not actually measure a value of (1,1). The discussion with ChatGPT helped me tease out the logic and see that the reason this step is omitted is that it's highly unlikely to have a model with no (1,1) ROC value, but I did confirm my suspicion that something was indeed being omitted, if not actually misunderstood, by the educators explaining the concept. Could just be me, but that kind of thing will stick in my craw for years if I let it go unresolved. Bottom line: I think ChatGPT, in this sense, is a good sounding board for how most everyone in the world "thinks" about a concept (especially very niche ones - not that ROC is niche), and you can indeed have a logical argument with it that is useful.
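To make that ROC point concrete, here's a minimal sketch (with made-up numbers, not the actual values from that exchange) of the unspoken step: you plot the measured (1 - specificity, sensitivity) points and then anchor the curve at (0,0) and (1,1), even if no measured threshold actually produced those corners:

```javascript
// A handful of measured (specificity, sensitivity) pairs at different
// thresholds -- hypothetical numbers for illustration only.
const measured = [
  { specificity: 0.95, sensitivity: 0.40 },
  { specificity: 0.85, sensitivity: 0.65 },
  { specificity: 0.70, sensitivity: 0.80 },
  { specificity: 0.50, sensitivity: 0.90 },
];

// An ROC curve plots sensitivity (TPR) against 1 - specificity (FPR).
const rocPoints = measured.map((m) => [1 - m.specificity, m.sensitivity]);

// The measured points alone don't reach the corners, so the curve is
// conventionally anchored at (0,0) and (1,1) -- the "artificial" endpoints
// the course material never called out.
rocPoints.unshift([0, 0]);
rocPoints.push([1, 1]);

console.log(rocPoints);
// roughly [[0,0], [0.05,0.4], [0.15,0.65], [0.3,0.8], [0.5,0.9], [1,1]]
```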
I expect these will only improve in time, but it's helping me right now.
I will say that writing prompts comprehensively, in proper syntax and grammar/punctuation, seems to yield considerably better results. Perhaps that's because a lot of the information archived in literature and white papers is written that way, and ChatGPT is effectively just an echo chamber for archived material. It seems like when people use caveman-like "Google Search English" or chat/text slang, the results are incomplete or off the mark. I speak English, so I'm not sure whether this observation extends to other languages.