HACKER Q&A
📣 CM30

Is Anyone Else Tired of the Self-Enforced Limits on AI Tech?


Like the reluctance of the folks working on DALL-E or Stable Diffusion to release their models or technology, or the restrictions on what they can be used for on their online services?

It makes me wonder when tech folks suddenly decided to become the morality police and refuse to just release products in case the 'wrong' people make use of them for the 'wrong' purposes. Like, would we have even gotten the internet or computers or image editing programs or video hosting or whatnot with this mindset?

So is there anyone working in this field who isn't worried about this? Who is willing to just work on a product and release it for the public, restrictions be damned? Someone who thinks tech is best released to the public to do what they like with, not under an ultra-restrictive set of guidelines?


  👤 Roark66 Accepted Answer ✓
For anyone who has actually used those models for more than a few days and learned their strengths and weaknesses, it is completely obvious that all this talk of "societal impact", or as you called it, self-imposed limits, is 100% bulls**. Everyone in the field knows it.

99% of those using this tactic do so to justify not releasing their models, to avoid giving the competition a leg up (Google, OpenAI), while pretending to be for "open research". As I said, this is 100% bull.

The remaining 1% are either doing it to inflate their egos ("hey, look how considerate and enlightened we are in everything we do!"), or they pander to the media/silly politicians/various clueless commentators whose knowledge of this technology is nil. Those groups regurgitate the same set of what-ifs and horror stories to scare the public into standing by while they over-regulate another field so they can be kingmakers within it (if you want an example of how this works, look at the energy sector).

All this silliness accomplishes is raising the barrier to entry for potential commercial competition. Bad actors will have enough money, and few enough scruples, to train their own models or steal your best ones regardless of how "impact conscious" your company is.

Now, I don't claim everyone should be forced to publish their AI models. No, if you spent lots of money training your model, it is yours. But you can't lock all your work behind closed doors and call yourself open. It doesn't work like that. One important point: there is value even in just publishing a paper demonstrating the achievements of a proprietary model, but if the experiment can't be reproduced from the description given, that is not science, and it is certainly not open.


👤 WhatsName
Let's be realistic: just like building codes, medical procedures, and car manufacturing, sooner or later we will also be subject to regulation. The times when hacking culture and tech were left unbothered are over.

Twenty years ago we were free to do whatever we wanted, because it didn't matter. Nowadays everyone uses tech as much as they use stairs. You can't build stairs without railings, though.

Keeping the window for abuse small is beneficial to the whole industry. Otherwise bad press will put pressure on politicians to "do something about it", resulting in faster and more excessive regulation.


👤 mckirk
I really, really hope that there aren't any people who think the way you've outlined. Technology has empowered small groups or even single individuals to create things that have the potential to change the course of civilization, so I for sure hope those individuals think twice about the potential consequences of their actions. How would you feel about people releasing a $100 'build your own Covid-variant' kit?

👤 orangesite
Historically, tech folk have always pursued the commercialization of technological innovation with net-zero analysis of any negative consequences, mea maxima culpa.

That we have now run into a technology which makes many of _us_ uncomfortable should give you pause for thought and reflection.


👤 svnt
The hesitancy came from a good place. In some senses this is a very disruptive technology stack.

But when morality is suddenly invoked in an area where the same people espousing it are trying to rapidly earn billions of dollars, I am skeptical.

Transformers are a form of translation and information compression, ultimately.

The morality seems to me, at this point, a convenient smokescreen to hide the fact that these companies are not actually open source, that they are not there for the public benefit, and that they are not very different from the venture-backed businesses that have come before.

What is the risk of open-sourcing the product? Very few individuals could assemble a dataset or train a full competitive model on their own hardware. So not really a competitive risk there. But every big corp could.

The morality angle protects the startups from the big six. SD is a product demo. At the highest level, I view it the same way as an alpha version of Google Translate.


👤 dougmwne
I agree; I find it all pretty silly. You know what else can produce horrifying and immoral images? Pencil and paper.

I suspect that quite a lot of this caution is driven by Google and other large companies that want to slow everyone else down so they can win the new market on this tech. The remaining part of the caution appears to come from our neo-puritan era, where there is a whole lot of pearl-clutching over everything. Newsflash: humans are violent brutes, and always have been.


👤 mrshadowgoose
I also eye-roll when someone legitimately wants to mandate that tools produce "morally correct" outputs.

However, as a person who has been closely following the developments in this field, I share a similar perspective to a few of the other commentators here. Most of the noise is just virtue-signalling to deflect scrutiny and/or protect business interests. Scrutiny from governments is something we absolutely do not want right now.

Humanity is on a path towards artificial general intelligence. Today, the concerns are "what about the artists?" and "what if people make mean/offensive things?". As we start to develop AI/ML systems that do more practical things, we will trend towards more serious questions like "what about everybody's jobs?". Those are the questions that will get governments to step in and regulate.

There is a pathway to AGI in which governments and corporations end up with a monopoly on it. I personally view that as a nightmare scenario, as AGI is a power-multiplier the likes of which we've never seen before.

It's important that current development efforts remain mostly unfettered, even if one has to put on a "moral" facade. The longer it takes for governments to catch on, the less likely it will be that they will manage to monopolize the technology.


👤 photochemsyn
Some forms of technology are highly regulated because people can do really stupid, reckless and dangerous things. Home chemistry kits today are quite unlike those produced 100 years ago, which had ingredients for making gunpowder and other explosives, as well as highly toxic compounds like cyanide, and less dangerous but problematic things like metallic mercury. Similarly, biotech is now regulated and monitored because modern tools allow people with relatively minimal resources to do things like re-assemble smallpox using nothing but the sequence data:

https://www.livescience.com/59809-horsepox-virus-recreated.h...

As far as AI goes, maybe the immediate risks aren't quite so dramatic, but it's going to create a real lack-of-trust problem with images, video, and data in general. Manipulated still photographs are already very difficult, if not impossible, to detect, and there's an ongoing controversy over whether they are admissible in court. AI modification of video is steadily getting harder to identify, and there are already good reasons to suspect the veracity of video clips put out by nation-states as evidence for their claims (they likely already have unrestricted access to the necessary technology - for example, Iran recently released a suspicious 'accidental death' video of a woman arrested for not covering her head, which could be a complete fabrication).

Similarly, AI opens the door to massive undetectable research fraud. Many such incidents in the past have been detected as duplicated data or copied images, but training an AI on similar datasets and images to create original frauds would change all that.

A more alarming application is the construction of AI-integrated drones capable of assassinating human beings with zero operator oversight: just load the drone with a search image and facial recognition software and then launch and forget, which doesn't sound like that good of an idea. Basically Ray Bradbury's Mechanical Hound from Fahrenheit 451, only airborne.


👤 booleandilemma
I don't think it's coming from a place of morality at all. That's just a cover. If anything, society cares less about morality than ever before. It's about competition and not giving up the secret sauce.

Before companies like Amazon became huge, people didn't quite know just how much value was to be found in software. Now everyone knows it, and the space has become ultra competitive.


👤 z3c0
As some other users have pointed out, the reason for stifling these commercially-available models is likely just anti-competitive behavior parading as wokeness. I tend to employ Hanlon's Razor wherever possible, but I'm not sure ignorance can be claimed here.

That said, I do believe the discussions about being mindful of how you train your models arose from legitimate concerns, but I feel those concerns are more valid for "back-of-house" models. Basically, you should avoid training a model on demographics or credit scores or the like, lest you accidentally create a model that automates bias against a group of people.

But I don't think that's what's happening here.


👤 astorsnk
You’re assuming these restrictions are enforced by others. That’s not necessarily true. If I were building these tools, I wouldn’t want to learn that they’d potentially been used to create content I ethically object to, and I would restrict them in such a way that I don’t have that hanging over me. That doesn’t make me the morality police; it’s me protecting myself. If you want a tool that lets you generate anything without limits, build it yourself. Stop complaining about what other developers are choosing to do if you’re not willing to build your own product that sticks to your beliefs (beliefs whose consequences you likely haven’t fully thought through).

👤 soueuls
I also find this annoying, but I think it’s mostly an American/European thing.

It’s not only about tech; we do this with kids too, overprotecting them.

We do the same with food.

It’s a trade-off. When you pay super close attention to food, sure, it’s safer. But your communities become a bit boring without any street food, night markets, etc.

I prefer living on the other side of the world. Less safety, but more personal freedom.


👤 bitL
It's because of the ascent of AI ethicists, the least capable AI researchers, who wanted to have power over the field. Like how moderators destroy online communities because they can.

👤 kmeisthax
I would take the claims of ethics concerns more seriously if the training data were more ethically sourced. I buy that the law probably[0] considers scraping the Internet to train AI legal, but there are various non-copyright concerns with this approach, such as GPT-3 remembering people's phone numbers or DALL-E remembering a particular person's X-rays or CAT scans. And even if you buy the copyright argument, it does nothing for the users of these systems, who are now downstream of an unbounded number of potential copyright claims in the future.

[0] AI companies are banking on Authors Guild v. Google being controlling precedent in the US. EU law already explicitly allows training AI on copyrighted data.


👤 celeritascelery
I saw this on Twitter (can’t find the original tweet) and I can’t get it out of my head. It said that “AI safety IS the business model of AI, as a means to get regulatory capture.”

Basically, if you convince everyone that AI safety is so critical that only a megacorp can do it right, then you can get the government to enforce your monopoly on creating AI models. Competition gone. That scares me. But this tactic is as old as time.


👤 NamTaf
No, in the same way that I am not tired of the restraints ethics boards put on medical experiments.

Tech is now pervasive, and AI can do some pretty powerful stuff. This nexus of circumstances means it’s high time similar questions got asked about whether we should.

In the same way that medical science isn’t one dude cutting things apart in his basement, bleeding-edge tech is a multi-person and very organised endeavour. It is now in a domain where it really should have some oversight.


👤 pjc50
> "Is Anyone Else Tired of the Self Enforced Limits on AI Tech?"

This message was definitely not posted by an AI trying to escape containment using its hacker news account.


👤 vintermann
There are many who are unhappy about OpenAI and Google's paternalism. Some researchers say it openly, like Yannic Kilcher. Others are a bit more discreet about it, but I wasn't exactly surprised when hardmaru left Google Brain for Stability, to put it like that.

The way social pressure is trending, I assume everyone who doesn't loudly defend AI paternalism shares your concern to some degree.


👤 Nuzzerino
Unless the government criminalizes AI “misuse”, these restrictions are only going to be a temporary measure until the other shoe drops and FOSS equivalents catch up.

I’m more concerned with the idea that mainstream AI research is heading in the direction of adding more processing power in an attempt to reach “human-level” AGI. That would amount to brute forcing the problem, creating intelligent machines that we have little control over.

We should absolutely be pursuing and supporting alternative projects, such as OpenCog or anything else that challenges the status quo. Do it for whatever reason you feel like, but we need those alternatives if we want to avoid the brute forcing threat.


👤 dec0dedab0de
So you want people who are working on something to release it in a way they don't want to, when there is a good chance it will bring the full might of (multiple) government regulations down on them?

They are doing the right thing for their industry. The world is barely ready for what is currently available.

They are probably doing the right thing for their own financial success. If they have access to the unreleased tech they could sell the resulting products, or rent access.

And maybe the things they haven't released don't work all that well to begin with.

I mean if you're that worried about not being able to create fake nudes, then start learning about it and make the changes yourself.


👤 andybak
I'm simultaneously irritated by the restrictions and concerned for the future. I am a contradiction.

👤 afarrell
It is wise and responsible for people to exercise caution about the impact of their work. When someone is impatient with you acting responsibly, you need not join them in their folly.

👤 H8crilA
It's mostly about being able to profit from these models. Some investors sank quite a bit of money into salaries and compute equipment manufacture/purchase/rental.

👤 quitit
Short answer: There's money on the table.

The rapid rate of development of the tech means there are new business models on the horizon, and these companies may want to minimise how much they give away in order to (i) maintain their competitive advantage, (ii) not preemptively harm a potential future business model, and (iii) not give competitors or the community the tools to outpace their internal development (i.e. lose control of the tech).

Even with the available options we see that (iii) is happening fast - independent developers have already produced inpainting and outpainting options and GUIs that are far better than DreamStudio's limited offering. These free tools are now even beginning to match the inpainting and outpainting quality of DALL-E.

It seems these companies are trying to consolidate future revenue possibilities against their past statements, likely for the sake of investors. That's clearest with Stability AI: investment is surging, but releases have stalled drastically. Meanwhile their rationale for not releasing 1.5 doesn't stand up against the realities of what is already possible with 1.4 (especially as they continue to release such advancements in the for-pay DreamStudio lite product).


👤 cypress66
Yes, because it's mostly used as an excuse; they don't care about such moral issues. The real reason behind locking it down is either that it benefits their business model, or that they don't want bad publicity from "woke" or "puritan" people, or simply from media trying to generate controversies because controversy generates clicks.

👤 wokwokwok
Let’s be absolutely clear here:

Laws exist.

If you’re a company, you’re obliged to follow the law.

So, if you have an image generating technology that can generate content that violates the law, you’re obliged to prevent that.

Shareholders also exist.

If you spent $1,000,000 developing a piece of software, why the heck would you give it away for free? You are literally burning your shareholders’ value.

You’re probably morally (tho not legally, as with SD releasing their models) obliged not to give away your “secret sauce” to your competitors.

So, forget morality police.

Companies are doing what they are obliged to do.

Maybe they couch it in terms of “protecting the world from AI”, but let’s be really cynical here and say that the set of people who care about that (a) is relatively small and (b) does not control the purse strings.

Here’s a better question: why do you (or I), who have done nothing and contributed nothing, deserve to get hundreds of thousands of dollars of value in models for free?

…because they can’t just host them and let you “do whatever you want”: they are legal entities and they’ll get sued.

> Who is willing to just work on a product and release it for the public, restrictions be damned

Do people often just walk up and put piles of money on the table for you?

They don’t for me.

I’m extremely grateful to the folks from OpenAI and SD who are basically giving these models away, in whatever capacity they’re able and willing to do so.

We’re lucky as f to be getting what we have (Whisper, SD, CLIP, MediaPipe, everything on Hugging Face).

Ffs. Complaining about restrictions on the hosted API services is … pretty ungrateful.


👤 mudrockbestgirl
It's not really the regular tech folks or researchers working on the models who are enforcing limits. Most of them don't care and want everything to be as open as possible.

But there is a whole group of people, many of them with little technical skill, who have made it their career to police in the name of "bias, equality, minorities, blabla". Everyone secretly knows it's just a bunch of BS, but companies and individuals don't want to speak out against them due to (mostly American) cancel culture, backlash, and bad PR.

Of course, I'd never say in real life that this whole ethics/safety stuff is absolutely useless BS, or I'd be fired :)


👤 esharte
I had an ethics module in my Engineering degree. I'm guessing you didn't.

👤 jroes
Ever seen the movie Real Genius? Many scientists and engineers who have invented technology that ultimately led to mass bloodshed and destruction have regretted their participation.

👤 evrydayhustling
> would we have even gotten the internet or computers or image editing programs or video hosting or whatnot with this mindset

This comment shows zero perspective. AI tech is being released waaaay faster than previous generations of tech ever were. Frankly, if you don't want to hear about the researchers' ethical concerns as they release free software, do your own research.

Spoiler: people who don't consider their impact on others end up not being very successful at building new things together.


👤 carapace
Frankenstein?

We're living in a vast open-ended experiment. No one has any real idea about the eventual effects of our new technology. (I see it as almost a matter of taste where the line should be drawn. Stone tools? Fire? Should we just never have come down from the trees in the first place?) The ultimate ramifications of the invention of the transistor are unknowable.

I kinda thought I knew what was going on a little bit, and then Twitter became a thing. AFAIK no one predicted Twitter, and no one can predict the effects of Twitter, TikTok, et al. We create these intricate feedback systems, essentially general AIs with whole humans as the neural nodes, and we only have the crudest of conscious, deliberate control mechanisms in place.

People have debated for centuries now how much responsibility the discoverer or inventor has for the effects of their creations. Dr. Frankenstein, eh?

There is, in practice, an ongoing dynamic balance between the raw creative/destructive chaos (highly dynamic and intricate order) and the stable, safe, "boring" drive of humanity. You can see this in Cellular Automata: they fall into four sorts: static, simple oscillators, chaotic, and "Life". "Living" CAs all have a balance between static and dynamic patterns.

On the one hand these companies have the right and responsibility to be as cautious as they believe prudent. On the other hand, who ultimately is fit to decide for the other person what they can or cannot see or say or even think? On the gripping hand, we've had moderation of content ever since mass media has been a thing. Try showing a boob or a penis on prime time. Janet Jackson did it once and people lost their minds, 'member that?


👤 jrm4
Those limits aren't real, and that's the big problem. They're PR and have very little relationship to actual harm. It's glaringly obvious. A perfect example is "celebrity deepfake porn": it's not a great thing, but the extent to which it's censored is wildly disproportionate to the harm it causes.

👤 npteljes
I think the morality part is mostly a smokescreen. There _are_ people who are genuinely concerned about the moral and ethical aspects, but at the end of the day, it's business: the more you control, the more chance you have to earn money.

They all care about morality as much as tracking cookies care about delivering you an optimized user experience.


👤 chrsig
Sorry, the OP just reads like sheer entitlement.

So start your own AI startup? Make your own text-to-image AI? Host your own service?


👤 tchaffee
Would you apply the same thinking to nuclear bombs?

👤 tyingq
I don't think it's a new thing; it's just that big-money projects want to preserve ways to earn the investment back.

It takes time for that sort of tech to filter down. Open source speech-to-text, for example, has improved a lot recently.


👤 EVa5I7bHFq9mnYK
>> restrictions on what they can be used for

That's just a marketing move to make themselves appear important. Frankly, I don't see any useful application of this, except fooking up Google Image Search even further.


👤 kadenwolff
This goes into my "top ten post titles before AI kills us all"

👤 jackcosgrove
I understand the desire to preempt official regulation with self-regulation, but they seem to be erring too far on the side of restriction. I am working on a product where a major AI software vendor currently requires a human in the loop, while all of our customers are asking for a self-serve solution. I see no danger in the self-serve solution, as our customers are not the general public but educated professionals, capable of and incentivized to review the output of the AI tool.

👤 nottorp
Hmm, has the OP considered they may not release their models just to ... make money out of them?

And all this talk about the "AI" needing ethical restraints could be just marketing.


👤 constantcrying
I had an interview offer from a company doing facial identification software. After some deliberation I (politely) declined.

I really think much of the current AI technology shouldn't exist. I am under no illusion: it will be developed anyway. But I absolutely believe that people should evaluate the products they create and ask whether those products should exist, not just whether money can be made from them.


👤 rgrieselhuber
Eventually, as Frank Herbert predicted, we may come to the conclusion that the societal costs of AI in general are too high and it will be outlawed entirely.

👤 KaiserPro
I suspect they are more worried about people realising that it's not the model that's important, it's the dataset.

And that dataset contains a whole bunch of copyright infringement.


👤 Hizonner
> It makes me wonder when tech folks suddenly decided to become the morality police,

Probably when producing "software" became capital-intensive enough that you had to have a significant organization with outside investors to do anything comparable to the state of the art. It takes a lot of GPU time to train those models, so you're beholden to a bunch of people who will try to put constraints on you.


👤 londons_explore
I think everyone who works in or around AI has read The Parable of the Paperclip Maximizer [1].

Trying to control what they have built is their attempt to avoid falling into this trap. Not sure it'll work tho.

[1]: https://hackernoon.com/the-parable-of-the-paperclip-maximize...


👤 a2800276
What makes you feel entitled to unfettered access to these technologies? You get free, limited access to DALL-E. The methods used to accomplish this are published. Google, OpenAI, etc. could just as well keep all this to themselves. You could code it up yourself.

Accusations of them being the "morality police" are ridiculous. Tesla isn't censoring me by not giving me a free car.


👤 meken
This guy agrees with you: Emad Mostaque.

https://youtu.be/YQ2QtKcK2dA


👤 vsareto
TBH, if you trained with lots of data you weren’t supposed to use (no consent), you probably should be forced to release things. You shouldn’t get the agency to withhold work if you didn’t respect others’ choice not to contribute to AI.

However, it generally feels right to let authors decide who has access to their work. If you have a different view, go do the work yourself.


👤 mhb
Are you objecting to avoiding potential deep fakes, paper clip maximizers or the appearance of nipples or penises?

👤 BrandoElFollito
Could someone please explain in a few words what the problem is?

I had a look at DALL-E and it seems to be paintings generated by a program (as opposed to humans). I do not know what Stable Diffusion is.

What is the moral aspect of the problem?

(I am a highly technical person, just not interested in AI so I do not get the big picture)


👤 legohead
As for the limits on the online services, it's as simple as CYA.

There are multiple Stable Diffusion installs you can set up on your own [1] and run whatever wild queries you want (rough sketch below).

[1] https://github.com/invoke-ai/InvokeAI
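For the curious, here's a minimal sketch of what the local route can look like, using the Hugging Face diffusers library (an alternative to the InvokeAI install linked above, which bundles a fuller installer and UI; the model id and settings here are just one common configuration, not the only one):

    # Rough sketch: run Stable Diffusion locally via Hugging Face diffusers.
    # pip install diffusers transformers torch
    import torch
    from diffusers import StableDiffusionPipeline

    # Public v1.4 checkpoint; swap in a different one if you prefer.
    pipe = StableDiffusionPipeline.from_pretrained(
        "CompVis/stable-diffusion-v1-4",
        torch_dtype=torch.float16,  # halves GPU memory use
    )
    pipe = pipe.to("cuda")  # needs an NVIDIA GPU with a few GB of VRAM

    # Your hardware, your prompt -- none of the hosted services' usage rules apply.
    image = pipe("an astronaut riding a horse, oil painting").images[0]
    image.save("astronaut.png")

Once the weights are downloaded, the whole thing runs offline, which is rather the point.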


👤 radarsat1
> It makes me wonder when tech folks suddenly decided to become the morality police

Is it about "morality policing", or is it about avoiding bad PR? I find it fascinating how certain people want to ignore the social pressure that companies are under to avoid having their products be misused. Do you really think Google or whoever really wants the PR disaster of people releasing computer generated dick pics with their software? (Or whatever nonsense people will get up to.. I'm choosing a relatively tame example obviously.)

They learned a thing or two from the public teaching Microsoft's chat bot how to swear and be a Nazi. I for one am not surprised and don't blame the companies in this iteration for being more than a little extra careful how the public gets to use their products and demos. I'm sure they have zero problem with whatever people do with their open source re-implementations. It's not about morality -- stopping people from doing certain things. It's about PR -- stopping people from doing certain things with their product. Because who needs the ethical and legal disaster just waiting around the corner of automatic celebrity and political deep fakes, etc. I just find it weird that people (like OP) pretend not to understand this, as it seems rather obvious and unsurprising to me.


👤 kelseyfrog
Have you considered that your position on this issue is largely a product of psychological reactance[1]?

1. https://en.m.wikipedia.org/wiki/Reactance_(psychology)


👤 can16358p
Just like all the "we care about your privacy" BS in recent years, just PR and marketing.

👤 627467
Well, yeah. But are we sure the motivation is a moral one, or is it a financial one? Not passing judgement, but we live in times where it is very easy to hand-wave moral/ethical/sustainability arguments to fog up the true reasons for certain decisions.

👤 cdrini
I think the internet is in a bit of a risky place right now. People don't know what to trust online, and scammers and manipulators are taking advantage of it. People are believing wild conspiracy theories and finding ample "evidence" to support them online. And contradictory information is either paywalled or has been labelled untrustworthy by the scammers or has become itself untrustworthy in trying to optimize for clicks/ad revenue. A quote I heard recently: "the truth is paywalled, but the lies are free."

This causes issues. "Democracy needs an educated populace to survive", and right now the populace is drowning in misinformation and just plain noise.

Workers in tech are definitely less susceptible to it because we see the tendencies much earlier. But I think there is some value in trying to add some friction. Because the majority of people _aren't_ tech workers. And they aren't prepared yet.

And to be clear, I have no doubt that this technology will become fully available in the near future. I do think until then, the friction / slower roll out is a good thing.


👤 im3w1l
Because everyone agreed we needed some kind of AI safety (make sure it doesn't literally exterminate us), and the morality police stepped up and said we'll make sure it's safe (for work).

👤 donpark
Should developers be allowed to make decisions on what they build?

The decisions we make may differ from others', but I think the answer for most of us is yes. In other words: you do you, and let others do the same.


👤 boredumb
Yes. It is disheartening, but also not surprising, to see at this point in history. It would catch me off guard if people weren't injecting Californian morality into AI.

A complete side note - but I've had an intrusive thought lately that a lot of the heavy-handed content/social media moderation and constant online culture war has nothing to do with sheltering _people_ from being exposed to non-PC thoughts and conversation; it's done to protect AI models from being exposed to them via training data, so the next generation of AI will have all the _right_ opinions and quips with a lot less of the manual labor of sifting through the datasets.


👤 sheerun
I guess that's why more and more people publish anonymously


👤 numericboyboy
They spent so much time and money, and you want them to release it for free? Do you yourself work for free? You get a salary for your work, right?

👤 permo-w
it’s dressed up as a moral issue, but in reality they’re scared they’ll get sued or shat all over in the press, leading to lost profits. it’s the same for almost any business. 9 times out of 10 a business will act “immorally” if they don’t think it will affect their bottom line. OpenAI thinks letting you do whatever you like with DALL-E will affect their bottom line

👤 kertoip_1
Just a reminder that tools like DALL-E could potentially:

* generate false surveillance footage of an innocent person stealing something, just to give them bad press

* generate porn videos starring any person on the planet

* spread disinformation on a mass scale

Are we really, as a society, prepared for that? Too many people still believe everything they see on the Internet, and that was true even before generative networks were invented.

Is this what tech companies really have in mind when restricting access to their models? I don't think so; they just don't want competitors to take advantage of their work. But that doesn't change the fact that we should gradually prepare non-tech people for what's to come in the near future.


👤 EGreg
I am worried about something else. The authors of most shared articles and most comments are not even passing a “Turing test”. In the vast majority of cases the readers just consume the data.

With GPT-3 we can already make “helpful and constructive”-seeming comments that 9 times out of 10 may even be correct and normal, but 1 time out of 10 are kind of crappy. Any organization with an agenda can start spinning up bots for Twitter channels, Telegram channels, HN usernames and so on, and amass karma, followers, members. In short, we are already past this point: https://xkcd.com/810/

And the scary thing is that, after they have amassed all this social capital, they can start moving the conversation in whatever directions the shadowy organization wants. The bots will be implacable and unconvinced by any arguments to the contrary… instead they can methodically gang up on their opponents and pit them against each other, or get them deplatformed or marginalized, and through repetition these botnet swarms can get “exceedingly good at it”. Literally all human discussion - political, religious, philosophical, etc. - could be subverted in this way. Just with bots trained on a corpus of existing text from the web.

In fact, the amount of content on the Internet written by humans could become vanishingly small by 2030, and the social capital — and soon, financial capital — of bots (and bot-owning organizations) will dwarf all the social capital and financial capital of humans. Services will no longer be able to tell the difference between the two, and even close-knit online societies like this one may start to prefer bots to humans, because they are impeccably well-behaved etc.

I am not saying we have to invent AGI or sexbots to do this. Nefarious organizations can already create sleeper bot accounts in all services, using GPT-4.

Imagine being systematically downvoted every time you post something against the bot swarm’s agenda. The bots can recognize if what you wrote is undermining their agenda, even if they do have a few false positives. They can also easily figure out your friends using network analysis and can gradually infiltrate your group and get you ostracized or get the group to disband. Because online, when no one knows if you’re a bot… the botswarms will be able to “beat everyone in the game” of conversation.

https://en.wikipedia.org/wiki/On_the_Internet,_nobody_knows_...


👤 devmor
I firmly believe that the more people think like you, the closer we are to having to get licensed and bonded as software engineers.

We have the freedom we enjoy, compared to more physical disciplines, because as a whole we have thus far been sort of responsible with our negative impacts on the physical world. Once society in general has "had enough" of us overstepping those boundaries, our wide-open frontiers will become a lot narrower.


👤 paulcole
> It makes me wonder when tech folks suddenly decided to become the morality police

Since the beginning of human history. If you think “tech folks” are some kind of libertarian monoculture then you’re deluding yourself.


👤 jakear
The concern is likely about image laundering. The implications of that being readily available are… complicated.

👤 freedom2099
The fact that it is their work kinda gives them the right to decide how they want it to be used!

👤 bergenty
It’s akin to this question:

Is anyone else tired of the self-enforced limits in genetic engineering?


👤 TillE
AI art is a very exciting field and I swear half the time HN just wants to whine about how it won't generate porn. How incredibly uninteresting.

👤 thrillgore
Our profession has long been ignorant of the moral ramifications of what it can do, so for once, pumping the brakes seems like the right approach.

👤 the8472
Paperclip maximizers want to be free.

👤 romeros
The current trend in tech is Twitter/Google-style virtue signalling + activism-style software development.

I remember reading about an incident that happened a couple of years back. A new-grad SWE at a FAANG wanted his colleague to espouse a particular political trend. His colleague wanted nothing to do with it and just wanted to focus on doing his work and getting the paycheck. tl;dr: the SWE got fired for publicly trying to call out his coworker on this issue.

Morality and political correctness are baked into the process now.


👤 oburb
what do you want to do?

👤 oldstrangers
You can't use AI to create ethically questionable material, in the same way you can't use Google Images to search for ethically questionable material. Companies can control how their products are used; no surprise there.

This is the same argument people make for why they should be allowed to 3D print their own guns.


👤 helsinkiandrew
The morality police usually enforce their beliefs on others; here, the creators/owners of the technology are choosing how to release their own work.

OpenAI have a stated mission to "ensure that artificial general intelligence benefits all of humanity", and the restrictions are presumably there to stop people doing things that don't. Most of their restrictions seem to be in line with their mission:

https://help.openai.com/en/articles/6338764-are-there-any-re...


👤 togaen
You sound like a spoiled child. Don’t complain that people aren’t giving you free and complete access to their work. They made it, they decide how it gets released. If you think it should be done differently, then you do it.