2. Opinionated - not necessarily bad, and can be tuned.
3. Politically biased - that's a bullshit criterion. You want a tool that's reasonably objective - being in the middle between the two current political extremes is different from being objective.
4. Doesn't learn new skills - it does.
5. Controlled by a small group of people - that's why there are other implementations of LLMs.
I see a skill as learning how to do something where that learning transfers to many similar activities. For example, learning how to integrate another system into your application using a REST API is a skill - dealing with errors, paging of responses, parsing JSON, etc. If you come across another API that isn't REST, maybe some message queue carrying JSON, you've learned the skill of how to integrate, and that transfers pretty well (a concrete sketch follows below). If an LLM/GPT is fed text input on the code needed for a client-side REST API in language X, that's it - it won't be able to output the code needed to consume a message queue unless it's fed that particular input. If it's never heard of a message queue, you're going to get paragraphs of text that equate to a blank stare. I think you're right that it doesn't learn skills, because it doesn't have skills, much like any computer doesn't have skills.
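To make that concrete, here's a minimal Python sketch of the transferable part of the skill. The URL, the `page`/`per_page` parameters, and the function names are all invented for illustration; the point is that error handling and JSON parsing carry over between the REST client and the queue consumer, while only the transport changes.

```python
import json
import requests  # assumes the third-party `requests` package is installed

def fetch_all_items(base_url, page_size=100):
    """Consume a hypothetical paged REST API: handle errors, follow pages, parse JSON."""
    items, page = [], 1
    while True:
        resp = requests.get(
            base_url,
            params={"page": page, "per_page": page_size},  # invented paging scheme
            timeout=10,
        )
        resp.raise_for_status()  # error handling: fail loudly on 4xx/5xx
        batch = resp.json()      # parsing: JSON body -> Python objects
        if not batch:
            break                # paging: stop when a page comes back empty
        items.extend(batch)
        page += 1
    return items

def consume_queue_message(raw_body):
    """The same skill applied to a message queue: the transport differs,
    but validating and parsing the JSON payload is what transfers."""
    try:
        return json.loads(raw_body)
    except json.JSONDecodeError:
        return None  # error handling: reject malformed payloads instead of crashing
```

A human who wrote the first function can write the second almost on autopilot; the claim above is that an LLM only produces the second if it has seen that particular pattern in its training data.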
I think the trap for humanity is the fraudulent things that are likely to come to light from what happens during a hype cycle. Some investors are likely to sink their money into projects/companies that don't pan out or are completely useless.
Is it? I mean, it only cost a few million to train, and it's cheap enough to provide free and open access to... That's quite cheap compared to what some software companies will spend building a product, and how long they'll operate at a loss.
I don't really know what you mean by a "trap" though. ChatGPT is a fairly good tool if used right. Given its dubious level of accuracy at the moment, I doubt anyone is using it to form political opinions, but maybe I'm being naive.
The real risk I see is future AIs that are reliable enough to trust but have unknown goals, due both to AI alignment challenges and to secretive training and fine-tuning by corporations.
Although I'd argue this is just another iteration of the problem we already have with researching things using tools like Google and Wikipedia. Google clearly biases certain results, but for the average query it's a reliable resource, which creates a false sense of trust. Similarly, Wikipedia's academic articles are generally of very high quality, which leads some people to wrongly trust its more opinionated articles.
ChatGPT is just one of many tech products which will be used to influence people. A trap would imply we don't already know this.
Seeing those makes me think less of the quality of the HN community :(
All our written wisdom is being devoured by computing. This throughline started a long time ago. We will be the last generation to remember the pre-internet time when access to knowledge was a huge advantage in life.
The past was better for the spiritual types. The lust for data will become a full-blown fact-seeking exhaustion soon enough.
Is ChatGPT a trap? Yes, for people who want to oppose the data brokers. No, for everyone else who turns the other cheek on the great data suckage.
The real trap is the inability to have an original thought, develop it, and profit from it in the marketplace. Instant and ubiquitous corporate/economic espionage enabled by computing has the power to leave the West permanently stagnant.