Today was my first full day of coding with Copilot. I know a lot of you folks have had the fun of playing with it for a long time.
I have a particular way of writing code after 25 years, certain idiosyncrasies. Just as a cheap example: I write MySQL queries in PHP by defining the whole string first as a variable named $someQ = "SELECT * FROM table WHERE id=:id;" and then, rather than binding the variables one by one, I prefer to prepare the query as $someZ = $conn->prepare($someQ); and then run $someZ->execute(array(":id"=>$id)).
This isn't a usual way of doing things. Copilot tried to bind the vars the first time I did it. The second time, it set up the array. The third time, when I wrote $insQ as the name of an insertion string and $insZ as the query that would execute it, Copilot wrote $updQ and $updZ by itself as the names of the update string and update query. For the record, I checked, and I've never named a variable $updQ or $updZ... I usually use $uQ and $uZ. It made that up. It stopped trying to bind vars and inserted them into the execute statement. It guessed contextually, almost 100% of the time, which vars I wanted to bind. This was beyond magic; obviously it read a huge amount of unrelated code in my stack to come to those conclusions from this one small file. But no one besides me uses this naming convention.
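For concreteness, here's a minimal sketch of the pattern described above (the users table and its columns are invented for illustration; the variable names are the real ones from that session):

    $insQ = "INSERT INTO users (name) VALUES (:name);";
    $insZ = $conn->prepare($insQ);
    $insZ->execute(array(":name" => $name));

    // What Copilot volunteered next, naming convention and all:
    $updQ = "UPDATE users SET name=:name WHERE id=:id;";
    $updZ = $conn->prepare($updQ);
    $updZ->execute(array(":name" => $name, ":id" => $id));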
It was magic; scary magic.
I spent my evening shooting pool and drinking with bar friends; one's an ex-Marine who works for the government and another's a guy who manages ops and software for a small company. They were lamenting that it's impossible in this country for anyone to make enough to afford an apartment. This never seemed hard to me. I just wrote code and got paid a lot for being good at it. Or I made graphics that could probably now be done by DALL-E.
It's been my opinion for a while that consumers and users of software were redundant. Maybe now we're redundant, too. But in that case, who is talking to whom? What human born today has the experience of struggling to learn something, and what would they accomplish by learning it, if a giant cloud filter could already translate their thoughts into art or code that would take them years of experience to comprehend, let alone write for themselves?
Are we the last generation to learn skills? And what happens when everyone's skill set is just telling a cloud filter what to make for them? Will that be okay...?
I feel like this is fundamentally different from the shift from Assembly to C to BASIC to Java or the fact that coders now don't think in terms of the metal. This is not "clip art" taking over the illustration space.
I feel like I've woken up halfway through a wholesale replacement of all creative industries with robots, at a point where all consumers (or measurements of consumers) have been fully replaced by robots. Having been in a niche for so long, I ignored it and thought that it would never be a real threat. I'm admitting now that it is.
Where do we go from here? How do we avoid a dead creative class spamming dead code and dead art to an already-dead consumer internet?
Worth noting that people have had these sorts of worries for over 200 years; see, e.g., the Luddites[2] and the Luddite fallacy[3]. While new forms of technology do make certain forms of employment obsolete, they also introduce many new ones. Who would have thought 20 years ago that people would make a living as "social media influencers", or by writing and maintaining software like Copilot?
[0] https://en.wikipedia.org/wiki/GPT-3
[1] https://en.wikipedia.org/wiki/DALL-E
[2] https://en.wikipedia.org/wiki/Luddite
[3] https://en.wikipedia.org/wiki/Technological_unemployment#The...
One of the criteria I have for a professional is that they understand how their work fits into the wider context and can juggle multiple vantage points. Most developers now are amateurs who just stitch together code from npm, Stack Overflow, and random GitHub repos to create unmaintainable messes. Their work might be made redundant; or, sadder still, quality may become even more irrelevant, and all that will matter is how well developers can sell themselves.
Whatever might happen, solid fundamentals and knowing how to best abstract the real world with minimal complexity will still be useful in the next 10-20 years.
And when you find me an AI that can do this, I'll happily concede the goddamn planet to them.
Creative jobs will be the only ones that will remain one day, but the meaning of "creative" could be redefined or expanded in the coming decades.
It was best at writing comments on a function that had already been written.
There were one or two times that it guessed correctly at code I wanted.
We're not dead/redundant yet, IMHO.
It would actually be nice not to need skills that are only tangentially or coincidentally relevant to the task. Our skills are often a thing in themselves: you may know how to write SQL, but what you really want is to run (or to help run) a business. Is writing SQL a skill? Yes. Is it relevant to what you really want? Barely. The same goes for washing brushes, setting up routers, and cutting vegetables.
We drown in self-invented complexity and are scared of the prospect of living a true life. Those who were sleeping cogs will wake up. Those who want to do SQL will do it anyway, as a hobby. But what will they do to make a living? Haha, why would we, the bigcorps and elites, even care? You'll do something, maybe. We just have to make sure you can't riot.
In that case, there is an information bottleneck. If you, as a single entity, have both an idea and the skills needed to translate it, then the internal communication in your brain is very fast. But when we, as creators, have an idea and describe it to an AI (or even to other humans), there is some very lossy compression going on.
Creative jobs in 10 years will probably shift to focus more on communicating those ideas well to "execution units" and back, so the skill to learn would be building an internal model of an AI's "mindset" well enough to predict its outcomes.
Because the tools have become better, I have already had to unlearn a few skills. For instance, 20 years ago we had to write things like ( x >> 4 ) instead of ( x / 16 ), or use tricky fixed-point math instead of floats. Both were performance optimizations: compilers were less than brilliant (modern ones reliably optimize integer divisions into these bitwise tricks), and CPUs were much slower (modern processors are faster at multiplying or dividing floats than integers; 20 years ago it was the opposite, by a huge margin).
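For anyone who never had to do this by hand, here's a rough sketch of both tricks (written in PHP just to match the rest of the thread; back then it would have been C):

    // Shift right by 4 == divide by 16 (for non-negative integers).
    // We wrote the shift by hand because compilers wouldn't do it for us.
    $x = 240;
    echo $x >> 4, "\n";         // 15
    echo intdiv($x, 16), "\n";  // 15 -- a modern compiler emits the shift here anyway

    // Fixed-point: fractional math on plain integers, with 8 fractional bits.
    $a = (int)(3.25 * 256);  // 832 stands for 3.25
    $b = (int)(1.5 * 256);   // 384 stands for 1.5
    $p = ($a * $b) >> 8;     // multiply, then renormalize back to 8 fractional bits
    echo $p / 256, "\n";     // 4.875 -- converted back only for display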
It's not just compilers and hardware; other software too. Debuggers, profilers, operating systems, and third-party libraries are becoming strictly better and more capable over time.
However, I would not say the job became easier.
The software being developed became way more complicated overall: more features, and more complicated features. For instance, a 2D GUI is now expected to include animated transitions, inertial scrolling, anti-aliased fonts with hinting, support for these modern colored hieroglyphs (emoji), and vector graphics instead of bitmaps for DPI scaling.
The hardware and environment are more complicated too. All CPUs have SIMD now, and with modern GPUs I'm often programming two different chips in parallel instead of just one, orchestrating the interaction between them. Modern networking and cloud stuff is remarkably tricky to use. Many products are expected to work well for a global audience, not just in US English, and i18n/L10n/g11n are generally hard. Due to the omnipresent internet, software is now expected to be secure; modern memory-safe runtimes like C# help substantially, but they aren't a silver bullet, and there are still lots of things to keep in mind.
I would expect the trend to continue in the future. Copilot helps with writing particular functions, but that's just a small part of making good software.
But it doesn't make any sense to say "AI can make art, why would we need humans to make art anymore?" Making art is part of being human, and it doesn't matter if an AI can make 65,535 amazing, beautiful paintings or musical compositions or stories: it's not making the ones that I have inside my head and want to create.
And if a human can pick up an AI and say "I want an oil painting that looks like X, with Y style, and Z details" and have it produce that, then at the very worst, AI has become another tool for artists. I guarantee you that no AI will be able to take a description like that and produce exactly what an artist wants every time; thus, at the very worst, people who want to use AI as a tool in creating art will still want to use the AI's product as a base, and tweak it to more perfectly fit their vision. (And if, for instance, the artist is disabled and unable to hold a brush, then perhaps they will use the AI as a tool to make those tweaks, as well—thus opening up the ability to paint for people who have been unable to.)
But longer term, if your job is taking written requirements from someone via email or Slack, generating something, and then throwing what you did back over the fence to whoever asked for it, then you are in trouble.
If your job is a dialogue with the end user, where things aren't written down because they aren't clear or are unknown, then it's going to be safe for quite a while longer.
So it naturally converges to a world where humans don't do any hard work anymore.
I know, this needs some huge global sociological shifts as well (looking at you, greed), and more than 10 years, but in the end this progress and its end state are not threatening at all. And those shifts will have to happen sooner or later, since it's far from only software developers' jobs that are threatened by automation.
The 'problem' with many developers is that a lot of us actually enjoy what we do, making it seem less like hard work.
But look at it this way: in a world where most jobs are automated and a global system of machinery supplies earth's demand for food, you'll have plenty of time to do the things you enjoy, like learning how technology works or writing code without GitHub Copilot.
Using your SQL statement as an example: writing a query might be easy for a computer, but designing the table to hold the information in a way that's queryable is not. That's where the creativity lies. That's going to be very hard for a computer, or even for a non-technical person using a computer, to replace.
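To make that concrete, a hypothetical sketch (table, columns, and variable names are invented; the style follows the parent comment's): the creative decisions live in the schema, and once it's right the query is the easy part.

    // Designing the table is where the judgment calls are: prices as integer
    // cents rather than floats, and indexes chosen for the questions we'll ask.
    $ddl = "CREATE TABLE order_items (
                id          INT AUTO_INCREMENT PRIMARY KEY,
                order_id    INT NOT NULL,
                sku         VARCHAR(32) NOT NULL,
                qty         INT NOT NULL,
                price_cents INT NOT NULL,
                INDEX idx_order (order_id),
                INDEX idx_sku (sku)
            );";
    $conn->exec($ddl);

    // With the data shaped right, the query writes itself:
    $someQ = "SELECT sku, SUM(qty) AS total FROM order_items WHERE order_id=:oid GROUP BY sku;";
    $someZ = $conn->prepare($someQ);
    $someZ->execute(array(":oid" => $orderId));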
The more interesting question will be the impact of globalization on salaries and competition. Logically, either the efficiency and value provided by high-cost-of-living areas continue to increase (and dramatically), or lower-cost-of-living areas accelerate. We're already seeing this in many parts of the world.
We should assume that salaries (for tech) in North America will decrease over time, or at least increase at a slower rate than those in the rest of the world. Similarly, salaries elsewhere will increase over time. While slow, this is the ongoing truism of economic migration.
That's because you are starting with something simple to test it out. As you try to use it on bigger systems, its usefulness goes down.
In fact, it is barely useful when coding in a framework like Django.
OTOH, it is quite useful for repetitive tasks like writing a downloader function or an SQL script.
When using external libraries, though, the inventiveness of Copilot can get really tricky, because it can generate lines of code for the library API that sound legit but are completely unrelated to the actual API. Copilot also often adds such innocuous but wrong lines of code when autogenerating multiple lines. So it is always good to take Copilot code with a huge grain of salt.
Overall though, it is quite useful for one-off scripts. So I just treat it like an enthusiastic but highly unreliable assistant.
The caveat, of course, is that we need to, collectively (and not just "collectively as programmers/techies", but "collectively with all the others whose jobs may be made unnecessary by automation") fight to make sure that we do get those benefits, and they aren't taken by the people who own the companies that made the automation.
Remember, you and I have more in common with the infamous and widely-vilified homeless people on the streets of San Francisco than we do with people like Elon Musk and Jeff Bezos. Even if you're making $150k, $200k...even $1M/yr, you are closer in income to the poorest of the poor than you are to someone making even $1B/yr, let alone the tens of billions that those modern robber barons make. We need to start acting like it, and recognizing that we are better together, joined to improve humanity as a whole, than we are separately, stepping on each other in a desperate scramble to increase our personal slice of the pie.
Both futures will exist simultaneously.
This community gets to decide which ratio of those two futures will comprise daily life. Because we are the ones who will build it.
AI tools that assist, collaborate with, and work alongside a human worker make that worker more than they are. This is utopia.
AI tools that replace humans and make them redundant are dystopian.
Build the former, not the latter. We can’t stop the future from unfolding, but we can carefully and thoughtfully nudge it. We do that by choosing our work carefully.
Does AI consume? Most HN readers live in a consumer society, driven by the wants of people with money. If this disappears, what's going to pay for the toys and the ads?
Can AI innovate? I thought "AI" today is more of a pattern-matching approach than the fifth-generation inference engines of the '80s and '90s. If so, how can it expand beyond the underlying trends in its training set?
I don't think we will soon reach a point where all worthwhile software has already been written. It will be much more like building construction, where you tear down and rebuild after some decades to get a modernized version.
While these are great at aiding a competent user, they are not tools for original, new creation. Someone will point me to a written article, a drawn image, and so on, and claim originality, but it's all derivative. There isn't real experimentation; there isn't something as shocking and mind-bending as when jazz came into being, nothing as new as the stream-of-consciousness writing of Tristram Shandy, nothing as gut-wrenching or disturbing as Zdzisław Beksiński's work.
We are, however, living in an age where lack of imagination is rampant. We are boring and unimaginative, and we can't seem to come up with anything new. Everything we do is similar to what "AI" does: we create derivations, nothing new or original. I must agree here with some of the things Jaron Lanier highlights in his books, whereby we haven't had anything as good in a long time. That's also why my examples are from quite a few years ago.
So I'm not worried. It's not intelligence, and as far as preparing for it goes: expand your imagination. Expand your boundaries and get more at ease with being uncomfortable. These are tools, and it's up to your imagination to push the envelope and create new things with their help.
---
LM - language model. Taken from a paper I wrote about NLP:
Gary Marcus acknowledges that the system manages to achieve impressive results, providing fluent answers to previously unseen questions. It sticks to topics well and achieves surprisingly accurate behaviour (Marcus, G. 2020, para. 31). Despite this, Marcus criticizes a very important aspect of the system: despite its vast store of information, it cannot extract meaning from the sentences it presents in answer to the user's questions.
GPT-2 will provide different answers of varying relevance to the same question. The answers it provides could indeed be conceived as potential continuations of the sentences presented, but sometimes the meaning is completely lost, as the LM does not hold information about concepts and objects.
If GPT-2 is viewed as an example of what is possible when we have access to a very large database, modern neural networks can come up with general rules even when there is no guidance or supervision. In this respect, the achievement is quite extraordinary, as GPT-2 can provide answers that are meaningful with little to no background information. The simple fact that this is possible tabula rasa is quite impressive.
However, based on the examples provided by Marcus in his article, it is clear that all we're looking at is an extremely elaborate version of ELIZA. The answers show that there is no substance to the information replicated by the LM: it simply doesn't understand the information it provides the user with. The phrase that made me side with the points brought forth by Marcus in his article was the quote attributed to Ilya Sutskever, the co-founder of OpenAI: "If a machine like GPT-2 could have enough data and computing power to perfectly predict the next word, that would be the equivalent of understanding." (Marcus, G. 2020, para. 76)
I find the statement to be, if not incorrect, then wildly superficial, as word prediction does not equate to understanding. One of the examples provided in the article highlights this even further:
    Prompt: A is bigger than B. B is bigger than C. Therefore A is bigger than _
    GPT-2: B.

    Prompt: A is bigger than B. B is bigger than C. Therefore A is bigger than _
    GPT-2: which can also become a huge hit.
The two above are question-and-answer pairs taken from Marcus, G. 2020, 'GPT-2 and the Nature of Intelligence'.
Like most of them do today.
You all will be "gig workers".