It makes me wonder when tech folks suddenly decided to become the morality police, and refuse to just release products in case the 'wrong' people make use of them for the 'wrong' purposes. Like, would we have even gotten the internet or computers or image editing programs or video hosting or what not with this mindset?
So is there anyone working in this field who isn't worried about this? Who is willing to just work on a product and release it for the public, restrictions be damned? Someone who thinks tech is best released to the public to do what they like with, not under an ultra-restrictive set of guidelines?
99% of those using this tactic use it to justify not releasing their models, to avoid giving the competition a leg up (Google, OpenAI), while pretending they are for "open research". As I said, this is 100% bull.
The remaining 1% are either doing this to inflate their egos ("hey, look how considerate and enlightened we are in everything we do!"), or they pander to the media/silly politicians/various clueless commentators whose knowledge of this technology is nil. They regurgitate the same set of "what ifs" and horror stories to scare the public into standing by while they attempt to over-regulate another field so they can be kingmakers within it (if you want an example of how this works, look at the energy sector).
All this silliness accomplishes is to raise a barrier to entry for potential commercial competition. Bad actors will have enough money, and few enough scruples, to train their own models or to steal your best ones regardless of how "impact conscious" your company is.
Now, I don't claim everyone should be forced to publish their AI models. No, if you spent lots of money training your model, it is yours. But you can't lock all your work behind closed doors and call yourself open. It doesn't work like that. One important point is that there is value even in just publishing a paper demonstrating some achievements of a proprietary model, but if the experiment can't be reproduced from the description given, that is not science, and it certainly is not open.
Twenty years ago we were free to do whatever we wanted, because it didn't matter. Nowadays everyone uses tech as much as they use stairs. You can't build stairs without railings, though.
Keeping the window for abuse small is beneficial to the whole industry. Otherwise bad press will put pressure on politicians to "do something about it" resulting in faster and more excessive regulations.
That we have now run into a technology which makes many of _us_ uncomfortable should give you pause for thought and reflection.
But when morality suddenly is reinforced in an area where the same people espousing it are trying to rapidly earn billions of dollars, I am skeptical.
Transformers are a form of translation and information compression, ultimately.
The morality seems to me at this point a convenient smokescreen to hide the fact that these companies are not actually open source, that they are not there for the public benefit, and that they are not very different to venture-backed businesses that have come before.
What is the risk of open-sourcing the product? Very few individuals could assemble a dataset or train a full competitive model on their own hardware. So not really a competitive risk there. But every big corp could.
The morality angle protects the startups from the big six. SD is a product demo. I view it the same way at the highest level as an alpha version of Google translate.
I suspect that quite a lot of this caution is driven by Google and other large companies which want to slow everyone else down so they can win the new market on this tech. The remaining part of the caution appears to come from our neo-puritan era where there is a whole lot of pearl clutching over everything. Newsflash, humans are violent brutes, always have been.
However, as a person who has been closely following the developments in this field, I share a similar perspective to a few of the other commentators here. Most of the noise is just virtue-signalling to deflect scrutiny and/or protect business interests. Scrutiny from governments is something we absolutely do not want right now.
Humanity is on a path towards artificial general intelligence. Today, the concerns are "what about the artists" and "what if people make mean/offensive things"? As we start to develop AI/ML systems that do more practical things we will start to trend towards more serious questions like "what about everybody's job?". These are the things that will get governments to step in and regulate.
There is a pathway to AGI in which governments and corporations end up with a monopoly on it. I personally view that as a nightmare scenario, as AGI is a power-multiplier the likes of which we've never seen before.
It's important that current development efforts remain mostly unfettered, even if one has to put on a "moral" facade. The longer it takes for governments to catch on, the less likely it will be that they will manage to monopolize the technology.
https://www.livescience.com/59809-horsepox-virus-recreated.h...
As far as AI goes, maybe the immediate risks aren't quite so dramatic, but it's going to create a real lack-of-trust problem with images, video and data in general. Manipulated still photographs are already very difficult if not impossible to detect, and there's an ongoing controversy over whether they are admissible in court. AI modification of video is steadily getting harder to identify, and there are already good reasons to suspect the veracity of video clips put out by nation-states as evidence for their claims (they likely already have unrestricted access to the necessary technology; for example, Iran recently released a suspicious 'accidental death' video of a woman arrested for not covering her head, which could be a complete fabrication).
Similarly, AI opens the door to massive undetectable research fraud. Many such incidents in the past have been detected as duplicated data or copied images, but training an AI on similar datasets and images to create original frauds would change all that.
A more alarming application is the construction of AI-integrated drones capable of assassinating human beings with zero operator oversight: just load the drone with a search image and facial recognition software, then launch and forget. That doesn't sound like that good of an idea. Basically Ray Bradbury's Mechanical Hound from Fahrenheit 451, only airborne.
Before companies like Amazon became huge, people didn't quite know just how much value was to be found in software. Now everyone knows it, and the space has become ultra competitive.
That said, I do believe the discussions about being mindful about how you train your models arose from legitimate concerns, but I feel those concerns are more valid for "back-of-house" models. Basically, you should avoid training a model on demographics or credit scores or the like, lest you accidentally create a model that automates a bias against a group of people.
But I don't think that's what's happening here.
It’s not only about tech; we do this with kids too, overprotecting them.
We do the same with food.
It’s a trade-off. When you pay close attention to food safety, sure, it’s safer. But your communities become a bit boring without any street food, night markets, etc.
I prefer living on the other side of the world. Less safety, but more personal freedom.
[0] AI companies are banking on Authors Guild v. Google being controlling precedent in the US. EU law already explicitly allows training AI on copyrighted data.
Basically if you convince everyone that AI safety is so critical and only megacorp can do it right, then you can get the government to enforce your monopoly on creating AI models. Competition gone. That scares me. But this tactic is old as time.
Tech is now pervasive and AI has the power to do some pretty powerful stuff. This nexus of circumstance means it’s high time similar questions get asked about whether we should.
In the same way that medical science isn’t one dude cutting apart things in his basement, bleeding-edge tech is a multi-person and very organised endeavour. It is now in the domain where it really should have some oversight.
This message was definitely not posted by an AI trying to escape containment using its hacker news account.
The way social pressure is trending, I'm assuming everyone who doesn't loudly defend AI paternalism shares your concern to some degree.
I’m more concerned with the idea that mainstream AI research is heading in the direction of adding more processing power in an attempt to reach “human-level” AGI. That would amount to brute forcing the problem, creating intelligent machines that we have little control over.
We should absolutely be pursuing and supporting alternative projects, such as OpenCog or anything else that challenges the status quo. Do it for whatever reason you feel like, but we need those alternatives if we want to avoid the brute forcing threat.
They are doing the right thing for their industry. The world is barely ready for what is currently available.
They are probably doing the right thing for their own financial success. If they have access to the unreleased tech they could sell the resulting products, or rent access.
And maybe the things they haven't released don't work all that well to begin with.
I mean if you're that worried about not being able to create fake nudes, then start learning about it and make the changes yourself.
The rapid rate of development of this tech means there are new business models on the horizon, and these companies may want to minimise how much they give away in order to (i) maintain their competitive advantage, (ii) not preemptively harm a potential future business model, and (iii) not give competitors or the community the tools to out-pace their internal development (i.e. lose control of the tech).
Even with the available options we see that (iii) is happening fast: independent developers have already produced in- and outpainting options and GUIs that are far better than DreamStudio's limited offering. These free tools are now even beginning to match the in- and outpainting quality of DALL-E.
It seems these companies are trying to consolidate future revenue possibilities against their past statements, likely for the sake of investors. That's most clear with Stability AI: investment is surging, but releases have stalled drastically. Meanwhile, their rationale for not releasing 1.5 doesn't stand up against the realities of what is already possible with 1.4. (Especially as they continue to release such advancements in the for-pay DreamStudio lite product.)
Laws exist.
If you’re a company, you’re obliged to follow the law.
So, if you have an image generating technology that can generate content that violates the law, you’re obliged to prevent that.
Shareholders also exist.
If you spent 1,000,000 developing a piece of software, why the heck would you give it away for free? You are literally burning your shareholder value.
You’re probably morally (tho not legally, as with SD releasing their models) obliged not to give away your “secret sauce” to your competitors.
So, forget morality police.
Companies are doing what they are obliged to do.
Maybe they couch it in terms of “protecting the world from AI”, but let’s be really cynical here and say that the people who care about that are a) relatively few and b) do not control the purse strings.
Here’s a better question: why do you (or I) who have done nothing, and contributed nothing, deserve to get hundreds of thousands of dollars of value in models for free?
…because they can’t just host them and let you “do whatever you want”; they are legal entities and they’ll get sued.
> Who is willing to just work on a product and release it for the public, restrictions be damned
Do people often just walk up and put piles of money on the table for you?
They don’t for me.
I’m extremely grateful to the folks from OpenAI and SD who are basically giving these models away, in whatever capacity they’re able and willing to do so.
We’re lucky as f to be getting what we have (Whisper, SD, CLIP, MediaPipe, everything on Hugging Face).
Ffs. Complaining about restrictions on the hosted API services is … pretty ungrateful.
But there is a whole group of people, many of them with little technical skill, who have made it their career to police in the name of "bias, equality, minorities, blabla". Everyone secretly knows it's just a bunch of BS, but companies and individuals don't want to speak out against them due to (mostly American) cancel culture, backlash, and bad PR.
Of course I'd never say in real life that this whole Ethics/safety stuff is absolutely useless BS or I'd be fired :)
This comment shows zero perspective. AI tech is being released waaaay faster than previous generations ever were. Frankly, if you don't want to hear about the researchers' ethical concerns as they release free software, do your own research.
Spoiler: people who don't consider their impact on others end up not being very successful at building new things together.
We're living in a vast open-ended experiment. No one has any real idea about the eventual effects of our new technology. (I see it as almost a matter of taste where the line should be drawn. Stone tools? Fire? Should we just never have come down from the trees in the first place?) The ultimate ramifications of the invention of the transistor are unknowable.
I kinda thought I knew what was going on a little bit, and then Twitter became a thing. AFAIK no one predicted Twitter, and no one can predict the effects of Twitter, TikTok, et al. We create these intricate feedback systems, essentially general AIs with whole humans as the neural nodes, and we only have the crudest of conscious deliberate control mechanisms in place.
People have debated for centuries now how much responsibility the discoverer or inventor has for the effects of their creations. Dr. Frankenstein, eh?
There is, in practice, an ongoing dynamic balance between the raw creative/destructive chaos (highly dynamic and intricate order) and the stable, safe, "boring" drive of humanity. You can see this in Cellular Automata: they fall into four sorts: static, simple oscillators, chaotic, and "Life". "Living" CAs all have a balance between static and dynamic patterns.
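(As a toy illustration of those CA classes, here is a minimal sketch of an elementary one-dimensional cellular automaton in Python; the rule numbers follow Wolfram's convention, and the width and step counts are arbitrary choices.)

    import random

    def step(cells, rule):
        # Apply one update of an elementary CA rule to a row of 0/1 cells (edges wrap).
        n = len(cells)
        out = []
        for i in range(n):
            left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
            idx = (left << 2) | (centre << 1) | right  # neighbourhood as a 3-bit number
            out.append((rule >> idx) & 1)              # look up that bit of the rule
        return out

    def run(rule, width=64, steps=20):
        cells = [random.randint(0, 1) for _ in range(width)]
        for _ in range(steps):
            print("".join("#" if c else "." for c in cells))
            cells = step(cells, rule)

    run(30)  # rule 30 is chaotic; try 110 for the complex "Life"-like class, 250 for static stripes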
On the one hand these companies have the right and responsibility to be as cautious as they believe prudent. On the other hand, who ultimately is fit to decide for the other person what they can or cannot see or say or even think? On the gripping hand, we've had moderation of content ever since mass media has been a thing. Try showing a boob or a penis on prime time. Janet Jackson did it once and people lost their minds, 'member that?
They all care about moral things about as much as tracking cookies care about delivering you an optimized user experience.
So start your own AI startup? Make your own text to image AI? Host your own service?
It takes time for that sort of tech to filter down. Open source speech-to-text, for example, has improved a lot recently.
That's just a marketing move to make themselves appear important. Frankly, I don't see any useful application of this, except fooking up Google Image Search even further.
And all this talk about the "AI" needing ethical restraints could be just marketing.
I really think much of the current AI technology shouldn't exist. I am under no illusion, it will be developed anyway, but I absolutely believe that people should evaluate the products they create and whether they actually think that they should exist, not just whether money can be generated through them.
And that dataset contains a whole bunch of copyright infringement.
Probably when producing "software" became capital-intensive enough that you had to have a significant organization with outside investors to do anything comparable to the state of the art. It takes a lot of GPU time to train those models, so you're beholden to a bunch of people who will try to put constraints on you.
Trying to control what they have built is their attempt to avoid falling into this trap. Not sure it'll work tho.
[1]: https://hackernoon.com/the-parable-of-the-paperclip-maximize...
Accusations of them being the "morality police" are ridiculous. Tesla isn't censoring me by not giving me a free car.
However generally it feels right to let the authors decide who has access to their work. If you have a different view, go do the work yourself.
I had a look at dall-e and it seems to be paintings generated by a program (as opposed to humans). I do not know what stable diffusion is.
What is the moral aspect of the problem?
(I am a highly technical person, just not interested in AI so I do not get the big picture)
There are multiple stable diffusion installs you can do on your own [1] and run whatever wild queries you want.
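(For anyone curious what "run it yourself" looks like in practice, here is a minimal sketch using the Hugging Face diffusers library; the model id, prompt, and hardware assumptions are illustrative, and exact arguments vary between library versions.)

    # Minimal local Stable Diffusion sketch (assumes a CUDA GPU and that you have
    # accepted the model license on Hugging Face; details vary by diffusers version).
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "CompVis/stable-diffusion-v1-4",
        torch_dtype=torch.float16,
    )
    pipe = pipe.to("cuda")

    # Generate an image from a text prompt and save it.
    image = pipe("a watercolor painting of a lighthouse at dawn").images[0]
    image.save("lighthouse.png")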
Is it about "morality policing", or is it about avoiding bad PR? I find it fascinating how certain people want to ignore the social pressure that companies are under to avoid having their products be misused. Do you really think Google or whoever really wants the PR disaster of people releasing computer generated dick pics with their software? (Or whatever nonsense people will get up to.. I'm choosing a relatively tame example obviously.)
They learned a thing or two from the public teaching Microsoft's chat bot how to swear and be a Nazi. I for one am not surprised and don't blame the companies in this iteration for being more than a little extra careful how the public gets to use their products and demos. I'm sure they have zero problem with whatever people do with their open source re-implementations. It's not about morality -- stopping people from doing certain things. It's about PR -- stopping people from doing certain things with their product. Because who needs the ethical and legal disaster just waiting around the corner of automatic celebrity and political deep fakes, etc. I just find it weird that people (like OP) pretend not to understand this, as it seems rather obvious and unsurprising to me.
This causes issues. "Democracy needs an educated populace to survive", and right now the populace is drowning in misinformation and just plain noise.
Workers in tech are definitely less susceptible to it because we see the tendencies much earlier. But I think there is some value in trying to add some friction. Because the majority of people _aren't_ tech workers. And they aren't prepared yet.
And to be clear, I have no doubt that this technology will become fully available in the near future. I do think until then, the friction / slower roll out is a good thing.
Decisions we make may differ from others but I think the answer for most of us is yes. In other words, you do you and let others do theirs.
A complete side note, but I've had an invasive thought lately: a lot of the heavy-handed content/social media moderation and the constant online culture war has nothing to do with sheltering _people_ from being exposed to non-PC thoughts and conversation. It's about protecting AI models from being exposed to it via training data, so the next generation of AI will have all the _right_ opinions and quips with a lot less of the manual labor of sifting through the datasets.
* generate false surveillance footage of an innocent person stealing something, just to give them bad press
* generate porn videos starring any person on the planet
* spread disinformation on a mass scale
Are we really, as a society, prepared for that? Too many people still believe everything they see on the Internet, even before the invention of generative networks.
Is that what tech companies really have in mind when restricting access to their models? I don't think so; they just don't want competitors to take advantage of their work. But that doesn't change the fact that we should gradually prepare non-tech people for what's to come in the near future.
With GPT-3 we can already make “helpful and constructive” seeming comments that 9 out of 10 times may even be correct and normal, but 1 out of 10 times are kind of crappy. Any organization with an agenda can start spinning up bots for Twitter channels, Telegram channels, HN usernames and so on, and amass karma, followers, members. In short, we are already past this point: https://xkcd.com/810/
And the scary thing is that, after they have amassed all this social capital, they can start moving the conversation in whatever direction the shadowy organization wants. The bots will be implacable and unconvinced by any arguments to the contrary… instead they can methodically gang up on their opponents and pit them against each other or get them deplatformed or marginalized, and through repetition these botnet swarms can get “exceedingly good at it”. Literally all human discussion (political, religious, philosophical, etc.) could be subverted in this way. Just with bots trained on a corpus of existing text on the web.
In fact, the amount of content on the Internet written by humans could become vanishingly small by 2030, and the social capital — and soon, financial capital — of bots (and bot-owning organizations) will dwarf all the social capital and financial capital of humans. Services will no longer be able to tell the difference between the two, and even close-knit online societies like this one may start to prefer bots to humans, because they are impeccably well-behaved etc.
I am not saying we have to invent AGI or sexbots to do this. Nefarious organizations can already create sleeper bot accounts in all services, using GPT-4.
Imagine being systematically downvoted every time you post something against the bot swarm’s agenda. The bots can recognize if what you wrote is undermining their agenda, even if they do have a few false positives. They can also easily figure out your friends using network analysis and can gradually infiltrate your group and get you ostracized or get the group to disband. Because online, when no one knows if you’re a bot… the botswarms will be able to “beat everyone in the game” of conversation.
https://en.wikipedia.org/wiki/On_the_Internet,_nobody_knows_...
We have the freedom we enjoy compared to more physical disciplines because we have, as a whole, thus far been somewhat responsible with our negative impacts on the physical world. Once society in general has "had enough" of us overstepping those boundaries, our wide open frontiers will become a lot narrower.
Since the beginning of human history. If you think “tech folks” are some kind of libertarian monoculture then you’re deluding yourself.
Is anyone else tired of the self enforced limits in genetic engineering?
I remember reading about an incident that happened a couple of years back. A new-grad SWE at a FAANG wanted his colleague to espouse a particular political trend. His colleague wanted nothing to do with it and just wanted to focus on doing his work and getting his paycheck. tl;dr: that SWE got fired for publicly trying to call out his coworker over the issue.
Morality and political correctness is baked into the process now.
This is the same argument that people make regarding why they should be allowed to 3d print their own guns.
OpenAI have a stated mission to "ensure that artificial general intelligence benefits all of humanity", and the restrictions are presumably there to stop people doing things that don't. Most of their restrictions seem to be in line with their mission:
https://help.openai.com/en/articles/6338764-are-there-any-re...