HACKER Q&A
📣 sourcecodeplz

What questions do LLMs fail with?


Basically, what's your question that LLMs just don't "get"?

Are they:

- nonsensical

- meaningless

- jokes

- morality

Please do give some examples if you can.


  👤 meltyness Accepted Answer ✓
Character-level manipulation fails. I suspect this is because GPTs are trained on tokens rather than sequences of characters, which means they're fundamentally dyslexic and can't really spell.

https://imgur.com/if92ZIQ

https://imgur.com/sYXFPRu
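A minimal sketch of why this happens. The token boundaries below are hypothetical (real BPE vocabularies split words differently), but the point stands: the model receives opaque token IDs, so the letters inside a token are never directly visible to it.

```python
# Hypothetical BPE-style split of a word into tokens.
word = "strawberry"
tokens = ["str", "aw", "berry"]

# At the character level, counting a letter is trivial:
r_count = word.count("r")  # 3

# At the token level, the model only sees integer IDs; the letter
# 'r' exists only inside the strings those IDs map back to, which
# the model never operates on directly.
vocab = {tok: i for i, tok in enumerate(tokens)}
ids = [vocab[t] for t in tokens]  # what the model actually receives
```

So a question like "how many r's are in strawberry?" asks the model to reason about structure it was never shown.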


👤 adamquek
LLMs can't handle linguistic nuances like irony, humor, and sarcasm. They also have difficulty with ambiguous statements and multiple negation. They may also have problems with technical jargon (though they work quite well with medical terms, for some reason).

👤 mtmail
It keeps inventing features and command-line flags. A dozen times it has presented me with code (PostgreSQL/PostGIS related) where it turned out a method cannot be called with those parameters.

https://blog.opencagedata.com/post/dont-believe-chatgpt "TL;DR ChatGPT claims we offer an API to turn a mobile phone number into the location of the phone. We do not."
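One defensive habit (a sketch, not specific to PostGIS): before trusting LLM-generated calls, check the real signature programmatically. In Python, `inspect.signature(...).bind(...)` raises `TypeError` for parameters the function doesn't actually accept, which catches hallucinated keyword arguments.

```python
import inspect

def accepts(func, *args, **kwargs):
    """Return True if func can actually be called with these arguments."""
    try:
        inspect.signature(func).bind(*args, **kwargs)
        return True
    except TypeError:
        return False

# round() really takes (number, ndigits); a hallucinated 'mode'
# keyword is rejected without ever calling the function.
accepts(round, 2.5)              # True
accepts(round, 2.5, mode="up")   # False
```

For SQL functions the equivalent check is the database catalog itself (e.g. `\df` in psql), which lists the argument types a function is actually defined for.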


👤 sharemywin
encoding="utf-8" for creating files. It always gets this wrong. I have to write it myself, and when I ask it to make another change, it forgets it.
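For reference, the fix being described (file path here is just for illustration): without an explicit `encoding=`, Python's `open()` falls back to the platform's locale encoding, which is not UTF-8 on many Windows systems, so non-ASCII text can be written or read incorrectly.

```python
import os
import tempfile

# Hypothetical demo file in the temp directory.
path = os.path.join(tempfile.gettempdir(), "utf8_demo.txt")

# Always pass encoding explicitly; the default is platform-dependent.
with open(path, "w", encoding="utf-8") as f:
    f.write("naïve café\n")

with open(path, "r", encoding="utf-8") as f:
    content = f.read()
```

Since Python 3.10 you can also run with `-X warn_default_encoding` to get a warning wherever the default is silently used.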

👤 daniel-s
New ideas