I find that I can more intuitively ‘co-reason’ with LLMs, subconsciously emulating how they operate, so that I can more effectively steer their attention into the appropriate semantic subspaces to elicit more accurate responses.
Do others here, whether on the spectrum, with ADHD, or similar traits, also experience this? Do you find it easier to ‘pair’ with LLMs compared to neurotypicals?
Might attributes like our systematizing mindset and literal thinking be the enabling differentiators?
Going a step beyond: are we, in the grand timeline of civilization, uniquely evolved to fuse with this new type of intelligence?
The maintenance programmer in me is highly skeptical of things that almost work.
I've seen transcripts where people talk to ChatGPT and it seduces them: they get giddy, it's like watching a "meet cute" in a movie, and you can see somebody having so much fun interacting with it that they don't notice that what it says doesn't really make sense.
That's not me.
I love chatting with chatbots about old sci-fi books by people like Smith, Anderson, Niven and such. I've only met two people in my life who are better at sci-fi chat than a good chatbot.
Lately I have been asking Copilot for help with maintenance work; it does amazingly well at explaining strange but highly repetitive code, such as the stuff the Babel transpiler inserts (stuff that is probably all over the training set). Sometimes it beats searching Google and Stack Overflow.
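For anyone who hasn't read transpiled output, this is roughly the kind of helper Babel injects everywhere (the CommonJS default-import interop), shown here as a small illustrative snippet:

    // Babel's default-import interop helper, repeated across transpiled files.
    function _interopRequireDefault(obj) {
      return obj && obj.__esModule ? obj : { default: obj };
    }

    // Roughly what `import React from "react"` becomes after the CommonJS transform.
    var _react = _interopRequireDefault(require("react"));

It's terse, mechanical, and shows up in huge volumes, which is exactly the kind of pattern an LLM trained on public code explains well.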
I am working on several "second brain" programs that use LLMs for classification, clustering, and other analysis, but I'm not doing any generative stuff right now. I use one of these programs to pick out articles from an RSS feed; everything I post to HN was chosen by that system and then chosen twice more by me. I am using another to look through a collection of 250,000 images, and another copy of that software to look at about 400 notes a friend made in Evernote. I am hoping to merge these together in the future. These projects are very much about building something that works with the unusual way I think.
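One common way to build that kind of LLM-backed picker is embedding similarity. The sketch below is a simplified illustration under that assumption, not the actual system; the names (Item, cosine, rankItems, embed) are made up for it, and embed stands in for whatever model turns text into a vector:

    // Minimal sketch: rank feed items by similarity to previously kept articles.
    type Item = { title: string; summary: string };

    // Cosine similarity between two equal-length vectors.
    function cosine(a: number[], b: number[]): number {
      let dot = 0, na = 0, nb = 0;
      for (let i = 0; i < a.length; i++) {
        dot += a[i] * b[i];
        na += a[i] * a[i];
        nb += b[i] * b[i];
      }
      return dot / (Math.sqrt(na * nb) || 1);
    }

    // Score each item against the centroid of embeddings of articles kept before.
    function rankItems(
      items: Item[],
      keptVectors: number[][],
      embed: (text: string) => number[]
    ): Item[] {
      if (keptVectors.length === 0) return items;
      const dim = keptVectors[0].length;
      const centroid = new Array(dim).fill(0);
      for (const v of keptVectors) {
        for (let i = 0; i < dim; i++) centroid[i] += v[i] / keptVectors.length;
      }
      return items
        .map(item => ({
          item,
          score: cosine(embed(item.title + " " + item.summary), centroid),
        }))
        .sort((a, b) => b.score - a.score)
        .map(x => x.item);
    }

The ranking itself is plain vector math; the only LLM involvement is producing the embeddings, which fits the non-generative use described above.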
Chatbots will always say what you want them to say, in a tone that you like. They will always affirm that you're right.
In contrast, a real person won't always agree with you, take a tone that you like, or constantly affirm that your worldview is correct.
So I would assume your getting along well with chatbots is by design.
One can use AI/ML tools without the conversational component; someday there may be more visual feedback mechanisms, like spreadsheet models or graphs (both node diagrams and charts), not just text.
Or maybe code (latent compute) is another expression: untangling it takes work, but it's more worthwhile than just "reading the answer."
I guess it's like wanting to understand the "physics" of a model rather than just using the model.
On the other hand, I hope to usefully apply "How to Say It: Flashcards for Professional Communication." So there's that.
> Going a step beyond: are we, in the grand timeline of civilization, uniquely evolved to fuse with this new type of intelligence?
I've seen this notion claimed, and sometimes phrased like the above, in ND circles and forums. It would imply that neurodivergent individuals are more evolutionarily relevant than, and hence superior to, neurotypical individuals. That would be elitist. It parallels arguments made by, say, a racist who thinks that, in the grand timeline of civilization, a particular skin color is superior, or a sexist who thinks that men are superior to women and that each sex has to fulfill specific roles.
To argue that something has evolved to develop certain features, there has to be evidence that the population was facing (possibly survival) difficulties in the environment being studied, that said features are observed increasingly in the population over time, and that those adaptations helped and improved the lives of that population. There is no such evidence to support the claim.
Neurodivergence is not new. It has been recorded throughout history in various ways (one example: [0]). ADHD, ASD, and anything else that falls into the ND space have existed for a long time. It's only in recent history, as psychiatry and related practices grew and developed, that neurodivergent conditions were reclassified repeatedly, to the point where these conditions and behaviors are now seen as falling under the single umbrella of neurodivergence.
[0]: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3000907/
EDIT: Few typos.
I do not feel I can "co-reason" better with a cleverer version of Dissociated Press at all, because Dissociated Press isn't reasoning. It's the neurotypicals who believe that an LLM actually thinks, while I'm on to what it's really doing: It's statistically predicting the next thing to say given what has been said before. I have better things to do with my time than to attempt to goad DP into producing some output I like based on rough guesses of how it operates. I'd much rather train and use the neural network between my ears to work on interesting problems. At least I have access to some of its internal state and it doesn't live in Microsoft's cloud.
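For anyone who hasn't met Dissociated Press: it's the old Emacs command that scrambles text with a Markov-chain-style technique, and a toy version makes the "predict the next thing given what came before" point concrete. This is a minimal sketch, not how any real LLM is implemented; buildChain and generate are made-up names:

    // Toy Dissociated-Press-style generator: the only "reasoning" is a lookup
    // of which words tended to follow the current word in the source text.
    function buildChain(text: string): Map<string, string[]> {
      const words = text.split(/\s+/).filter(Boolean);
      const chain = new Map<string, string[]>();
      for (let i = 0; i < words.length - 1; i++) {
        const followers = chain.get(words[i]) ?? [];
        followers.push(words[i + 1]);
        chain.set(words[i], followers);
      }
      return chain;
    }

    // Emit up to `length` words by repeatedly sampling a plausible successor.
    function generate(chain: Map<string, string[]>, start: string, length: number): string {
      const out = [start];
      let current = start;
      for (let i = 0; i < length; i++) {
        const candidates = chain.get(current);
        if (!candidates || candidates.length === 0) break;
        current = candidates[Math.floor(Math.random() * candidates.length)];
        out.push(current);
      }
      return out.join(" ");
    }

An actual LLM swaps the lookup table for billions of learned weights and a much longer context, but the interface is the same: given the text so far, emit a statistically plausible continuation.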
I call it “rubber duck debugging for life on steroids.”