I've found ChatGPT amazingly helpful for rewording goals and, more generally, for anything I'm writing in a guidance document.
Yes, it takes some rewriting on my part as well, but overall the net result is much better than just me writing alone.
What has everyone else used it for?
I used it for journaling practice. It didn't reach a quality result as fast as my own tools do, but I can see why others would find it helpful, particularly if they are new to journaling and somewhat directionless in that context.
I also used it to evaluate my written coaching services (I used to offer email-based coaching). The results there weren't very good: at a minimum, it isn't familiar with ongoing qualitative evaluation of a communicative relationship, or with aspects like identifying and working around blind spots. More experiments would probably be needed to really explain the effect, though.
I can see why it's useful for you, given what you wrote.
Overall I'd say it's very similar to my experiences in making art with fractals [1]. If you want to poke the various software packages, interact a bit, and kind of sit back and observe the results...probably so-so outcomes. If you are willing/able to develop & provide subjectively qualified input on the results though, the outcomes will probably be much better & much more useful.
But you should ideally have an outcome standard in mind, even if you are open to whatever the outcome might be.
I do have some privacy concerns and would like to know more about that aspect, as well as about getting a good, above-board dataset (with respect to input sourcing) and using that dataset offline.
Fun world we live in though, hope to see more upgrades & uses too.
1. Shameless! https://www.instagram.com/marcolas/
Instead, I've basically been describing the artist to ChatGPT and telling it to restate or rephrase what I said as if it were a professional bio. The results were not significantly better than what I would casually write on my own, and I noticed a lot of repetition in them. I can't blame it for having limited information to work with, but it is interesting to see it work at least.
There is another post about another language model using an RNN, so I had ChatGPT have a look at it,
just by showing it the three source files:
> The first code file defines the architecture of the model, including the custom operations and classes used to handle different float precision modes and device placement. The second code file sets the hyperparameter settings and loads the pre-trained model's weights. The third code file defines several utility functions and classes to tokenize input text, refine and process text input, and sample the next token in a sequence given the current context.
> The overall goal of the project is to generate text by training the model on a large dataset of text, the model can be fine-tuned on a specific task, such as language translation, text summarization, or text completion. The model can also be used for generating text in a conversational context, such as chatbots or virtual assistants, where the model generates text based on the current context, and the user's previous inputs.
and
> I can't determine if the code achieves the claims of the project, as the model has not been trained and run on any specific dataset and it is not clear what kind of performance results the developers were able to achieve with it. However, the approach used in the code appears to be logically sound. The code implements a standard transformer-based language model architecture, but with a custom implementation of the operations in order to handle different float precision modes and device placement, which is a common technique to optimize the performance of deep learning models. The code also includes some utility functions and classes that handle tokenization, text processing, and sampling of the model's output, which are also common in large-scale language model implementations.
> It would be important to run the model with a large dataset of text, fine-tune it on a specific task and compare its performance with other state of the art models, to measure its quality and scaling.
If you continue down this road, it can teach you the basics of machine learning really fast, and can even get you started with a VS Code dev container for it.
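To make that summary concrete: the "sample the next token given the current context" step ChatGPT describes is usually a temperature/top-k sampling loop. Here is a generic sketch of that idea (my own illustration, not the actual project's code):

```python
import math
import random

def sample_next_token(logits, temperature=1.0, top_k=None):
    """Pick the next token id from a list of logits.

    Generic temperature + top-k sampling sketch; the real
    project's sampler may differ in details.
    """
    if top_k is not None:
        # Keep only the k highest-scoring tokens.
        cutoff = sorted(logits, reverse=True)[top_k - 1]
        logits = [l if l >= cutoff else float("-inf") for l in logits]
    # Temperature scaling followed by a numerically stable softmax.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one token id according to the resulting distribution.
    return random.choices(range(len(probs)), weights=probs)[0]
```

With `top_k=1` this degenerates to greedy decoding (always the argmax); higher temperatures flatten the distribution and make output more varied.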
https://questgiver.org/t/05c0a7d7-5f22-41d2-90aa-f02082fbb8d...
These were entirely generated based on an intro prompt + “Make three quests people could do to help ensure ethical AI development”. It generated a json with objective types, emojis, everything.
We’re planning to use it to get past the “blank page” problem when onboarding organizations.
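For a sense of the shape of that output, a generated quest entry might look something like the following. The field names here are my guess at a plausible schema, not the actual questgiver.org format:

```python
import json

# Hypothetical example of the kind of quest JSON ChatGPT produced;
# the real schema from questgiver.org may differ.
generated = """
[
  {
    "title": "Audit a Model Card",
    "emoji": "🔍",
    "objective_type": "research",
    "description": "Read the model card for a public AI model and note any missing safety information."
  }
]
"""

quests = json.loads(generated)
for quest in quests:
    # Basic sanity check that the expected fields are present.
    assert {"title", "emoji", "objective_type", "description"} <= quest.keys()
    print(quest["emoji"], quest["title"])
```

Asking the model for structured JSON like this, then validating it before use, is what makes the "blank page" workflow practical.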
It's also helped in rewriting wordy text into something easier to understand, but there are better tools for this (e.g. QuillBot).
For actual coding, the core OpenAI Codex model works a lot better, whether used directly or through GitHub Copilot.
Clarifying, explaining, and converting bash scripts to other languages.
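As a small illustration of the kind of bash-to-Python conversion it handles well (my own example, not a transcript):

```python
# Bash:  for f in *.log; do wc -l "$f"; done
# A Python equivalent along the lines of what ChatGPT produces:
from pathlib import Path

def count_lines_in_logs(directory="."):
    """Print the line count of each .log file, like the bash loop above."""
    counts = {}
    for path in sorted(Path(directory).glob("*.log")):
        n = sum(1 for _ in path.open())
        counts[path.name] = n
        print(n, path.name)
    return counts
```

The Python version also returns the counts as a dict, which is the usual bonus of such conversions: the result becomes reusable rather than just printed.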