HACKER Q&A
📣 ChaitanyaSai

What are the most interesting takes on Generative AI you've come across?


We are tens of months into what looks like the AI age. (If you disagree, I'd love to see interesting takes on why not.) It is too early to tell how the landscape will evolve, because the landscape is vast and we do not know which parts are going to get terraformed. I would love to hear about interesting takes/predictions/uses that go beyond the usual breathless Twitter/X listicles. Please do share!


  👤 LeoPanthera Accepted Answer ✓
I have an LCD picture frame that shows a landscape photo generated by Stable Diffusion, with the twist that the weather in the landscape matches the actual current weather outside.

I built it just for myself. I think it's hilarious. Everyone else I've shown it to has been less impressed!

Edit: This is what it is currently showing: https://ibb.co/T2b4S4M
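The commenter doesn't say how it's wired up, but a minimal sketch might look like the following, assuming Open-Meteo's free current-weather endpoint and the Hugging Face diffusers library; the prompt wording, weather-code mapping, and coordinates are made up for illustration and are not the commenter's actual setup:

    # Minimal sketch of a weather-conditioned picture frame (assumptions noted above).
    import requests
    from diffusers import StableDiffusionPipeline

    # Rough mapping from Open-Meteo WMO weather codes to a phrase for the prompt.
    WEATHER_PHRASES = {0: "clear sky", 2: "partly cloudy", 3: "overcast",
                       61: "light rain", 71: "light snow", 95: "thunderstorm"}

    def current_weather_phrase(lat: float, lon: float) -> str:
        # Ask Open-Meteo for current conditions at the frame's location.
        resp = requests.get(
            "https://api.open-meteo.com/v1/forecast",
            params={"latitude": lat, "longitude": lon, "current_weather": True},
            timeout=10,
        )
        code = resp.json()["current_weather"]["weathercode"]
        return WEATHER_PHRASES.get(code, "cloudy")

    def render_frame(lat: float, lon: float, out_path: str = "frame.png") -> None:
        # Fold the live weather into the prompt, then generate one image for the frame.
        prompt = f"a wide landscape photograph, {current_weather_phrase(lat, lon)}, golden hour, 35mm"
        pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
        pipe(prompt).images[0].save(out_path)

    if __name__ == "__main__":
        render_frame(51.5, -0.13)  # example coordinates; run on a timer for a live frame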


👤 xnx
> We are tens of months into what looks like the AI age.

The modern AI era started in 2017 with the "Attention Is All You Need" paper (https://arxiv.org/abs/1706.03762). ChatGPT is the popular face of a line of work (image recognition, language translation, image generation, etc.) that has been making rapid progress ever since.

This blog post has helped me the most in trying to understand what LLMs are, how they work, and what they might be capable of: "Prompting as searching through a space of vector programs" https://fchollet.substack.com/p/how-i-think-about-llm-prompt...


👤 mmckelvy
A frequently raised concern is that we will soon be drowning in AI-generated blog posts, articles, presentations, and emails, and that most of this AI-generated content will be meaningless noise.

I wonder if the opposite may be true. With the advent of AI, will there actually be _less_ meaningless noise?

When I was in finance, we would regularly produce 100 page decks for client meetings. Usually only 2 or 3 pages of the 100 page deck would really matter. The rest was what I call "proof of work". _Look at how much work we did for you. Isn't it impressive_? With AI, that kind of proof of work no longer makes any sense, so maybe all those 100 page decks, marketing blog posts, investment memos, and white papers will slim down to only the salient points and in many cases vanish altogether.


👤 WoodenChair
I think one of the least headlined takes, though a moderate take is perhaps by definition not an interesting one, is simply moderation. This is not AGI (in fact it is nowhere near it), nor is it going to put as many people out of work as the doomers predict, yet it is also more than the "stochastic parrot" the naysayers call it. It's a great new set of productivity tools, with no true existential threat except to a few specific creative job categories (marketing copywriter, summarizer, etc.).

👤 markusde
How's this for interesting: many people in my field (formal methods) seem to be pretty excited about our job prospects. Before, we just used to say that people don't actually know what their code does; now it looks like it might really be true.

👤 RecycledEle
For those of us who have long commutes to and from work, VoiceGPT is amazing. I can talk through a problem I've been trying to solve with a tutor who helps me understand it. Meanwhile, the guy in the truck next to me is road raging and about to run over a slow Volkswagen. I prefer my deliberative talks.

I think this is a game changer for students and people in training.


👤 tkgally
For me, the most valuable "take" on generative AI has been the collective discussion on HN. While many individual articles and comments have been useful on their own, it is the ongoing conversation here that has given me the best view of how AI is starting to impact society and what people think about it.

The HN commentariat is not, of course, a randomly selected, representative group, and the discussion here can be repetitive. But, as a whole, it is much more insightful than any individual article, paper, or blog post.


👤 TheAceOfHearts
Yann LeCun generally advocates for open models and regulation at the product level, which aligns very closely with my own ideological beliefs, so I keep track of everything he shares on social media. There are too many doomers sucking up all the air in online discourse, IMO.

In general I've just started following the actual researchers on social media to keep track of what they're saying or their perspectives on various issues. Go directly to the source.


👤 gremgoth
That symbolic AI (as opposed to machine learning) can also be generative: for example, using model completion algorithms to generate new information during ETL/data warehousing (cf. https://silmarils.tech/ and https://www.categoricaldata.net/).

👤 sva_
Just read this interesting article[0] about Sam Altman:

> "We need another breakthrough. We can still push on large language models quite a lot, and we will do that," Altman said, noting that the peak of what LLMs can do is still far away.

> But he said that "within reason," pushing hard with language models won't result in AGI.

> "If superintelligence can't discover novel physics, I don't think it's a superintelligence. And teaching it to clone the behavior of humans and human text - I don't think that's going to get there," he said. "And so there's this question which has been debated in the field for a long time: what do we have to do in addition to a language model to make a system that can go discover new physics?"

But now the board coincidentally fired him, as you can read in the current top post on the front page.

0. https://www.thestreet.com/technology/openai-ceo-sam-altman-s...


👤 allemagne
Ted Chiang's piece here is great:

https://www.newyorker.com/tech/annals-of-technology/chatgpt-...

I think he underrates ChatGPT and LLMs somewhat, but it's the best counterpoint to AI hype/doomerism I've read.


👤 JoeMayoBot
I'm thinking about the new jobs that are/will be created.

1. Prompt engineers - there's been a lot of talk about this, though I believe the role is broader: businesses will need people to educate users, manage prompt data stores, and assist with fine-tuning.

2. Content managers - as companies adopt AI with their own data, someone will need to manage the content going into the system, including selection, privacy, and security.

3. Content moderators - people who write/edit content will need to change how that content is created and formatted so that it is easier to ingest and leads to higher-quality answers.

4. Content creators - people who create content for the sole purpose of ingestion. This could be within a company, in open-source/scientific research, or in support of vertical models.

5. Security monitors - the person-in-the-middle who watches/monitors the system for privacy, safety, and security.

There are probably more, though this is what I'm thinking right now.


👤 robg
Far from this being an AI century, we still don't understand how brains create and drive intelligence. The more we learn about actual intelligence from brains, the more closely artificial versions will approximate what we've evolved to do. Modern neuroscience is younger than neural networks.

👤 pruthvishetty
https://www.wired.com/story/generative-ai-chatgpt-is-coming-...

We helped our sales teams cut RFP response times from weeks (involving multiple folks) to minutes. They still review all the answers and handle edge cases, but it's been a huge value addition, allowing them to spend more time engaging with customers.

👤 ganzuul
We got here by scaling up compute. To me it feels like we need another 100x or 1000x scale-up to reach the stage where AI can actually begin to think about making actionable plans. To get there we need to address the efficiency problem.

Nvidia's take on Minecraft remains the most interesting exploration of the capabilities of LLMs. In that research, an LLM built up a skill library of code that let the agent earn in-game achievements.
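For concreteness, a rough, hypothetical sketch of that loop (most likely Nvidia's Voyager agent) could look like this; the three helpers are stubs standing in for the LLM calls and the game environment, not the actual research code:

    # Hypothetical Voyager-style skill-library loop; all three helpers are stubs.
    from typing import Dict, List

    skill_library: Dict[str, str] = {}   # skill name -> code that has worked before

    def propose_task(completed: List[str]) -> str:
        """Ask the LLM for the next task, given what has already been achieved."""
        ...

    def write_skill(task: str, library: Dict[str, str]) -> str:
        """Ask the LLM to write code for the task, reusing skills from the library."""
        ...

    def run_in_game(code: str) -> bool:
        """Execute the generated code in the game and report whether it succeeded."""
        ...

    completed: List[str] = []
    for _ in range(100):                      # exploration budget
        task = propose_task(completed)
        code = write_skill(task, skill_library)
        if run_in_game(code):                 # only code that worked becomes a skill
            skill_library[task] = code
            completed.append(task)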


👤 toddmeck
Too few people are investigating how to determine whether or not GenAI provides good answers that actually improve efficiency. Many people give examples where GenAI is fast, but more time and effort is then needed to investigate the "rabbit holes" and misleading solutions. So the net result is not efficiency or cost savings, but failure demand that shifts costs elsewhere.

👤 woctordho
Shameless plug: We're building a dataset of visual novels for AI training https://huggingface.co/Synthia/ChatGalRWKV

Visual novels are multimedia data with thorough plain-text annotations, and generative AI will greatly accelerate their development.


👤 apstls
I've been wondering whether the inevitable explosion of hyper-realistic disinformation and manipulation content, brought on by GenAI drastically reducing the cost and barrier to entry for high-volume, realistic multimedia production, could make the public digital information landscape so obviously polluted and cacophonous that even the most oblivious media consumers begin to lose trust in purely online sources (or at least social media). That steady erosion of confidence would, in effect, solve the problem of online disinformation by destroying trust in the medium altogether.


👤 jonahbenton
Nice try chatgpt