I built it just for myself. I think it's hilarious. Everyone else I've shown it to has been less impressed!
Edit: This is what it is currently showing: https://ibb.co/T2b4S4M
The modern AI era started in 2017 with the "Attention Is All You Need" paper (https://arxiv.org/abs/1706.03762). ChatGPT is a popular manifestation of something that has been making rapid, significant progress (image recognition, language translation, image generation, etc.) ever since.
This blog post has helped me the most in trying to understand what LLMs are, how they work, and what they might be capable of: "Prompting as searching through a space of vector programs" https://fchollet.substack.com/p/how-i-think-about-llm-prompt...
I wonder if the opposite may be true. With the advent of AI, will there actually be _less_ meaningless noise?
When I was in finance, we would regularly produce 100-page decks for client meetings. Usually only 2 or 3 pages of the deck really mattered. The rest was what I call "proof of work". _Look at how much work we did for you. Isn't it impressive?_ With AI, that kind of proof of work no longer makes any sense, so maybe all those 100-page decks, marketing blog posts, investment memos, and white papers will slim down to only the salient points, and in many cases vanish altogether.
I think this is a game changer for students and people in training.
The HN commentariat is not, of course, a randomly selected, representative group, and the discussion here can be repetitive. But, as a whole, it is much more insightful than any individual article, paper, or blog post.
In general I've just started following the actual researchers on social media to keep track of what they're saying and their perspectives on various issues. Go directly to the source.
> "We need another breakthrough. We can still push on large language models quite a lot, and we will do that," Altman said, noting that the peak of what LLMs can do is still far away.
> But he said that "within reason," pushing hard with language models won't result in AGI.
> "If superintelligence can't discover novel physics, I don't think it's a superintelligence. And teaching it to clone the behavior of humans and human text - I don't think that's going to get there," he said. "And so there's this question which has been debated in the field for a long time: what do we have to do in addition to a language model to make a system that can go discover new physics?"
But now the board coincidentally fired him, as you can read in the current top post on the front page.
0. https://www.thestreet.com/technology/openai-ceo-sam-altman-s...
https://www.newyorker.com/tech/annals-of-technology/chatgpt-...
I think he underrates ChatGPT and LLMs a bit, but it's the best counterpoint to AI hype/doomerism I've read.
1. Prompt engineer - there's been a lot of talk about this, though I believe the role will be broader than people expect: businesses will need people to educate users, manage prompt data stores, and assist with fine-tuning.
2. Content management - as companies adopt AI with their own data, someone will need to manage the content going into the system, including selection, privacy, and security.
3. Content Moderators - people who write/edit content will need to change how content is created and formatted, making it easier to ingest and leading to higher-quality answers.
4. Content Creators - people who create content for the sole purpose of ingestion. This could be within a company, open-source/scientific research, or supporting vertical models.
5. Security Monitors - This is the person-in-the-middle who's watching/monitoring the system for privacy, safety, and security.
There are probably more, though this is what I'm thinking right now.
Nvidia's take on Minecraft (the Voyager agent) remains the most interesting exploration of the capabilities of LLMs. In that research they had an LLM build a skill library of code, which let it earn achievements.
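A rough sketch of how that loop works, as I understand it from the paper; this is not NVIDIA's actual code, and `llm_generate_skill` and `run_in_environment` are hypothetical stand-ins for the LLM call and the Minecraft executor:

```python
from dataclasses import dataclass, field

@dataclass
class SkillLibrary:
    skills: dict[str, str] = field(default_factory=dict)  # description -> code

    def retrieve(self, task: str, k: int = 3) -> list[str]:
        # Voyager retrieves by embedding similarity; this uses crude
        # keyword overlap just to keep the sketch self-contained.
        scored = sorted(
            self.skills.items(),
            key=lambda kv: -len(set(task.split()) & set(kv[0].split())),
        )
        return [code for _, code in scored[:k]]

    def add(self, description: str, code: str) -> None:
        self.skills[description] = code

def llm_generate_skill(task: str, examples: list[str]) -> str:
    """Hypothetical LLM call: drafts code for `task`, conditioned on
    previously learned skills passed in as few-shot examples."""
    return f"# generated code for: {task}\n"

def run_in_environment(code: str) -> bool:
    """Hypothetical executor: runs the skill in the game and reports
    whether the target (e.g. an achievement) was reached."""
    return True

library = SkillLibrary()
for task in ["chop a tree", "craft a wooden pickaxe", "mine stone"]:
    examples = library.retrieve(task)          # reuse prior skills as context
    code = llm_generate_skill(task, examples)  # LLM writes a new skill
    if run_in_environment(code):               # only verified skills are kept
        library.add(task, code)                # the library compounds over time
```

The interesting design choice is that only verified skills get stored, and stored skills are fed back in as context for new tasks, so capabilities compound instead of being regenerated from scratch each time.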
Visual novels are multimedia data with thorough plain-text annotations, and generative AI will greatly accelerate their development.