I don't write open source software with a restrictive license, but I understand why some people do. I respect that, and I probably use a lot of such software without realizing it. I can understand why they'd be bothered by the theft of their work, and I can see downstream effects, such as them abandoning that work, that would eventually affect me.
The legal and ethical aspects are fascinating, but I have no interest in that debate.
I care much more about my job security as a software engineer and systems architect, and on that front Copilot poses no challenge.
Sure, in theory it could put a lot of programmers out of a job. In practice, it's like the driver-assist features in cars: the emergency-brake alert goes off for cars waiting to turn in the oncoming lane, the blind-spot warnings trigger on cars that are ahead of you, the lane-departure warnings are usually wrong, and so on.
I'm not worried, and I prefer to avoid the uncertainty even if it means outsourcing a bit less effort to the magic AI pixies.
I'm bewildered that nobody else has brought this up.
When Copilot guesses the next method name or comment I was going to type, it's doing that by saying "this would be the most boring, average string of tokens to come next, so here you go," and it's fascinating how often that's right -- how often I'm wrong about how surprising the next line was. It's like how terrible humans are at generating unique passwords, except for everything I type. Copilot doesn't help by knowing things I don't, because it doesn't know anything, but it does help by guessing what I was obviously going to do next without me having to call out to memory.
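To make that concrete: "guess the most boring next token" is just greedy decoding over a language model. Here's a minimal sketch, assuming the Hugging Face transformers library, with GPT-2 standing in as the model (my choice for illustration; no idea what Copilot actually runs):

    # Greedy next-token prediction: at each step, append the single
    # most likely ("least surprising") token and repeat.
    # Assumes: pip install torch transformers. GPT-2 is a stand-in model;
    # the prompt is a made-up example.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tokenizer("def get_user_by_", return_tensors="pt").input_ids
    for _ in range(8):
        with torch.no_grad():
            logits = model(ids).logits       # scores for every vocab token
        next_id = logits[0, -1].argmax()     # the most "average" choice
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)
    print(tokenizer.decode(ids[0]))

That argmax is the whole trick: no knowledge, just whatever continuation humanity-on-average would have typed next.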
Once I have access to that average-of-humanity information for a while, I start to want it for the rest of my life too. OK, fine, that's the next method name I was going to write. [tab, autocomplete]. OK, fine, that's how I was going to close out my email. [tab, autocomplete]. Well, huh, I wonder if it knew what I was going to type next on the command line? [yes, probably]. I wonder if it knew which things I was going to buy in the grocery store? [yes, probably]. It starts to feel limiting to not have access to what the average next step in the sequence would be.
And then it turns out that average-of-humanity models have all kinds of potential impacts on political power and labor and property law and so on, so all of that is pretty interesting too. But for me it starts with just poking at the model and going, oh, hey, it's ... everyone, how are you all doing?
On the other hand, I find myself torn between "information wants to be free and this is one more nail in the coffin of our odd and ahistorical concepts of 'intellectual property' and 'plagiarism'" and "oh great, another way for giant corporations to reap all the benefits from work done by individuals and smaller businesses".
I don't think I'll ever use the thing, and I have ~0 power over the societal implications, so overall - yeah, can't get that exercised over it either.
These systems are a mechanism that can regurgitate (digest, remix, emit) without attribution all of the world's open code and all of the world's art.
With these systems, you're giving everyone the ability to plagiarize everything, effortlessly and unknowingly. No skill, no effort, no time required. No awareness of the sources of the derivative work.
My work is now your work. Everyone and his 10-year-old brother can "write" my code (and derivatives), without ever knowing I wrote it, without ever knowing I existed. Everyone can use my hard work, regurgitated anonymously, stripped of all credit, stripped of all attribution, stripped of all identity and ancestry and citation.
It's a new kind of use not known (or imagined?) when the copyright laws were written.
Training must be opt in, not opt out.
Every artist, every creative individual, must EXPLICITLY OPT IN to having their hard work regurgitated anonymously by Copilot or DALL-E or whatever.
If you want to donate your code or your painting or your music so it can easily be "written" or "painted", in whole or in part, by everyone else, without attribution, then go ahead and opt in.
But if an author or artist does not EXPLICITLY OPT IN, you can't use their creative work to train these systems.
All these code-washing and art-washing systems that absorb, mix, and regurgitate the hard work of creative people must be strictly opt in.
If you don't care about this, that's naivete, or a lack of foresight, or apathy while these companies pillage the commons. Not something to be proud of.
Microsoft and OpenAI (and others) are robbing us and you should care.
I was excited when it first came out, but I'm just over it now.
Maybe it's good for some things, but I wasn't impressed. My job's safe for a little while at least.
The way it felt to me, it samples all kinds of other people's code (probably from GitHub; who owns the copyrights there?) and pastes it in. Except, what's the quality of that code? I'm by no means a top developer, but the recommendations were always trash and not what I wanted.
I don't actually see myself using it, and all my source is out there (for the most part, anyway) as MIT-licensed code.