HACKER Q&A
📣 jamager

What's in store for ChatGPT, Stable Diffusion, etc. after the dust settles?


This is an honest question: I can't understand what the fuss is about. Just 20 minutes with ChatGPT bored me to death.

I don't question the technical merits of these tools, but so far I haven't seen any output above mediocre - that is, compared to what a competent human in the relevant field can do.


  👤 mudrockbestgirl Accepted Answer ✓
I don't think the following are world-changing use cases, but I would absolutely pay for ChatGPT right now. I am using it for the following:

- Generate boilerplate code. It won't be exactly what I need, but often I want to get a rough sense of how to use a specific library/tool that I haven't used in a long time. For example, I always forget bash syntax. I can adjust the details myself, but I don't want to spend tens of minutes browsing different sites and searching through code examples.

- Proofread my blog posts and emails and suggest ways to make them flow better or sound nicer.

- Serve as an entry point for research. Questions like "Give me 5 bullet-point arguments both in favor of and against topic X" usually surface something that I hadn't thought of. I can then use these to Google for more specific details and sources.

ChatGPT does a good enough job at these that it would be worth paying a subscription for. The amazing thing is that I would never have considered the above use cases to be covered by a single tool because they seem so disparate. I probably would not pay for a tool that does one of these. But something that does these and possibly more that I haven't thought of? Absolutely.

And yes, these are all things that a human can do, but a human can't do them within a second for less than a cent (extrapolating current OpenAI model pricing).
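As a rough sanity check on "less than a cent" (the exact pricing here is my assumption): at roughly $0.02 per 1,000 tokens for the largest GPT-3 model, a 300-token answer works out to about 0.3 × $0.02 ≈ $0.006, a bit over half a cent.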


👤 onion2k
I haven't seen any output above mediocre - that is, compared to what a competent human in the relevant field can do

There's a lot of value in that though, if you're not competent at the thing you need to do. For example, I saw a post on Twitter where someone had set up a GPT-powered letter-writing tool for someone who wanted to run a business but had very poor writing skills. They would write a prompt like "I will be there on Monday", and the AI would turn that into a well-written email confirming the appointment.

Something doesn't have to be world class to be valuable. It just has to be better than you can do yourself.


👤 inkyoto
Automation of menial tasks is the primary use case now, I suppose.

ChatGPT has given us a first real glimpse of what AI can do after a prolonged AI winter, but it will take a few more iterations to get to a more genuinely AI-like level.

I think it is also a direct threat to incumbents in the search space (hey, Google!) - a space that has been overtaken by ads prioritising revenue over the quality of search results.

Content moderation is likely another viable example, where the moderators struggle under the unending duress of hordes of monkeys with an internet connection and a keyboard who never hesitate to assert a strong personal opinion or an unsubstantiated armchair theory.

A more curious case is a future generation of the chat engine hooked up to a future version of the Unreal Engine to, effectively, replace carbon life forms in movies. I wonder whether human actors will eventually be relegated to art-house films only.

Stable Diffusion will, on the other hand, replace «low end» (I am not being dismissive here) graphic designers and, likewise, relegate graphic design to a niche skill - just like digital photography has supplanted film photography. Again, it will take a few more iterations before it matures enough.

So, just a few big-ticket items I could think of instantly.


👤 leobg
An overlooked value of these language models is that they circumvent copyright.

Google also did this when it appeared on the scene. It essentially sucked value from websites that, prior to search engines, would have been considered copyrighted works. But Google made itself the go-to place on the internet. In people's minds, all the content was "on Google".

Now GPT can do the same thing. It can consume all kinds of text: articles, books, video transcripts. And it can then present itself as the "go-to" place for finding answers.

Of course, copyright as such has always been an artificial construct. It has never protected ideas themselves - only the specific form of their expression. By being able to suck ideas (which are free) out of their specific form of expression (which is protected), GPT can essentially resell the value that previously was there to be monetized by those who held the copyright thereto.

In the human world, we had the same thing in the form of experts. People who read lots of books over the course of their life, and thus were able to answer questions, solve problems and write books of their own. But, of course, GPT can read more books than any human ever could. And it can answer more questions than any human ever can.


👤 tluyben2
I think the 'chat' part was not a very good choice; it's more the 'instruct' (on which this is based?) / 'conversational' type of thing, where you try to create something incrementally: filter/classify data, write snippets of code, write tutorials/books/papers, etc. It's not good at chit-chat, and it seems many people thought that's what it was meant for.

It generates massive bags of code from 1-2 lines of English without me having to look up everything. And I can incrementally improve that code because it has memory. I think that's impressive and definitely helps me a lot.
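To illustrate the incremental part, here is a minimal sketch of how that conversation-with-memory workflow could be driven programmatically. The "memory" is simply the running message history being re-sent with each request; the endpoint, model name, and prompts below are assumptions for illustration, not part of the ChatGPT product itself.

  // Sketch: conversational "memory" is just the accumulated message history
  // being re-sent on every request. Assumes Node 18+ (global fetch) and an
  // OPENAI_API_KEY environment variable; model name and prompts are made up.
  type Msg = { role: "system" | "user" | "assistant"; content: string };

  const history: Msg[] = [];

  async function ask(prompt: string): Promise<string> {
    history.push({ role: "user", content: prompt });
    const res = await fetch("https://api.openai.com/v1/chat/completions", {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ model: "gpt-3.5-turbo", messages: history }),
    });
    const data = (await res.json()) as {
      choices: { message: { content: string } }[];
    };
    const reply = data.choices[0].message.content;
    history.push({ role: "assistant", content: reply });
    return reply;
  }

  // A rough first request, then follow-ups that refine the same code,
  // because the earlier turns are still in `history`:
  //   await ask("Write a TypeScript function that groups log lines by date.");
  //   await ask("Now ignore blank lines and return a Map instead of an object.");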


👤 shakna
I think it will depend highly on how the copyright story settles.

Many of the current big-name models have been seen to reproduce their sources verbatim. For now, that question doesn't seem to have a definitive answer. There is a lot of brushing it off as rare, or as something you have to coax the model into doing, etc. But the fact that it still happens is the issue.

We'll probably end up with more specific copyright carve-outs, or with royalty systems, but it's unlikely that this issue can keep getting handwaved away for the foreseeable future.


👤 Tenoke
> I don't question the technical merits of these tools, but so far I haven't seen any output above mediocre - that is, compared to what a competent human in the relevant field can do.

I don't have competent humans in every subject on tap to answer any random question I have within a few seconds, or to churn out content for me. This goes double for DALL-E/Stable Diffusion and artists.


👤 kbyatnal
I saw something along these lines on Twitter and think it's very true:

The right way to think about ChatGPT is not "what could I do with API access to an intelligent person?", which is how most are treating it (because it's not that smart), but rather "what could I do with API access to 1,000 people of medium intelligence?"


👤 jonathanstrange
My predictions for the next 5 to 10 years:

- A lot of porn will be produced by AI.

- A lot of "low end" graphic designer and illustration jobs will disappear and there will be lots of unemployment in this area. ("High end" graphic design and supervision jobs will remain, of course.) There will be plenty of software for creating cheap book covers,automatically arranging brochures, creating new variations from existing images. In contrast to existing software, you won't need a graphic designer to use this software and achieve perfect results.

- Online conversations will become very cumbersome. People will waste time replying to chat bots, and chat bots will talk to chat bots. The signal-to-noise ratio on social media will drop. There will be walls of text to navigate.

- Customer service will become a nightmare for customers who slowly realize they're being fooled by an AI that cannot do anything. The EU will probably prohibit or restrict the use of AI in customer service.

- Karma systems on social media might break. Bots can easily produce posts that create karma, which can then be exploited for nefarious purposes.

- Political disinformation and propaganda campaigns will reach society-threatening levels.

- There will be lots of auto-generated content that seems to make sense but is useless or dangerous. Cooking recipes will be even more useless than they already are. On a positive note, most people will see through the deception.

- "You're a bot!" will be an even more common ad hominem attack than it already is.

> compared to what a competent human in the relevant field can do

Enough speculation; I need to comment on this line. That's not a very relevant criterion from an economic perspective. It's a matter of scaling up. Take pornographic images as an example: you have to arrange photo shoots, locations, models, equipment, lighting, etc. Or a GPU farm can spit out tens of thousands of images per hour. Which business model will win?


👤 WheelsAtLarge
I'm active in the /r/stablediffusion subreddit. It seems to me that the techie types are the ones who have taken an interest in SD, so you see a lot of interest in the tech side of the software. Also, the types of images have a lot to do with games, anime, and idealized fantasy people. It repeats to the point of monotony. There's also a focus on making the images as real-looking as possible, as if they were photos.

It looks a lot like the Linux vs Apple split. Techs like Linux and creatives like Apple. In this case, techs like SD and creatives go to DALL-E 2(?).


👤 mdcds
I've been using ChatGPT to summarize text and as a replacement for a search engine to look up concepts.

  Examples:
  - "what is the difference between Semigroup and Monoid"
  - "summarize following text: "

👤 quickthrower2
These are just early milestones. I think GPT may lead to expert systems, if it can be tamed. Imagine a doctor GPT that asked the right questions and got the right lab work and other tests done. The current GPT, refined, would be the front end of this, paired with an evidence-based back end.

👤 jononor
Text2img will be the next level of "stickers", like the photos/GIFs one can attach today in Messenger etc. And a source of stock content, like in Canva or the Adobe suite. Possibly with some curation in between initially. Just improving the existing features and business models.

👤 norwalkbear
The fuss is that these tools produce better results than novices today, and their potential seems limitless.

The other issue is the automation of creativity, which once seemed impossible.

They scale on PCs at 1/10 to 1/100 of the cost of a human.

They will obliterate the traditional career path for affected professions.


👤 gremlinsinc
Have you tried having it build a React component using Tailwind, etc.? I'm full-stack but cringe at the design side of my profession; this is amazing for the parts I CAN do but don't LIKE to do.
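For a concrete picture, here is a sketch of the kind of React + Tailwind component a one- or two-line prompt can produce; the component name, props, copy, and styling below are invented for illustration rather than actual ChatGPT output.

  // A small React + Tailwind card component, the kind of boilerplate a short
  // prompt can generate. Assumes a project with React and Tailwind configured.
  import React from "react";

  type InfoCardProps = {
    title: string;
    body: string;
    onAction?: () => void;
  };

  export function InfoCard({ title, body, onAction }: InfoCardProps) {
    return (
      <div className="max-w-sm rounded-2xl bg-white p-6 shadow-md">
        <h2 className="text-lg font-semibold text-gray-900">{title}</h2>
        <p className="mt-2 text-sm text-gray-600">{body}</p>
        {onAction && (
          <button
            onClick={onAction}
            className="mt-4 rounded-lg bg-indigo-600 px-4 py-2 text-sm font-medium text-white hover:bg-indigo-500"
          >
            Learn more
          </button>
        )}
      </div>
    );
  }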

👤 Mandatum
It's been great for generating ideas rapidly without having to search. Google is 100% going to build this into their engine somehow.

👤 thfuran
Probably nothing, because by the time the dust has finished settling, markedly better models will be available.

👤 jesselawson
I appreciate how optimistic people are about interesting tools like this. Personally, I am concerned about the production use of models, in any form, that lack strict oversight rules and accountability for their training data -- especially in digital social spaces.

It feels like we need international, strict, transparent controls over the data used to train ML models and over the algorithms through which content, recommendations, and inferences are provided to the general public. Otherwise, what is bound to happen is that commercial interests (which a U.S. president has already admitted are more important than peace[1]) will create massive amounts of pseudo-signal in digital spaces, on the one hand capitalizing on the psychological effects of exposure and social proof to sell products, and on the other hand carrying out and exacerbating the outcomes of political disinformation campaigns.

But strict controls and transparency over training data won't be enough, since the general public is unlikely to ever have the requisite time and energy to inspect the data, recognize when models have been trained for lawful-evil purposes, and then petition their government for a redress of these grievances in a way that leads to positive legislative action for healthy digital communities. (I think this task will be relegated to the fringes of society, just like it is now, with journalists from big corporate outlets really only interested in these topics as a means of capitalizing on controversy.)

So what do we do? How do we prevent information pollution in digital spaces when commercial interests and state actors have both the means and motive to carry out widespread campaigns of social influence? Would we need to reconsider how we as people, corporations, and governments treat digital spaces -- perhaps considering them as "the means of connectedness" to drive home the distinction between human digital connectedness as a tool for interpersonal communication versus a tool for mass influence? (Is that even possible under our current socioeconomic systems?)

I've always wondered what would be different if we treated online public spaces like national parks. What would we allow and not allow? What could people count on -- and what could they trust (and why) about existing in that space and sharing information with each other?

As these models mature and grow in utility, I'm both excited and hesitant about what is possible -- because I know good people with great imaginations and I also know really bad people with great imaginations.

[1]: https://www.youtube.com/watch?v=CC0VTbGqioM