HACKER Q&A
📣 noduerme

Will creative people have jobs in 10 years?


I've been a skeptic of AI replacing top-level art directors, designers, illustrators, SWEs, etc. for a long time, because I think it can copy well but can't create or interpret.

Today was my first full day of coding with Copilot. I know a lot of you folks have had the fun of playing with it for a long time.

I have a particular way of writing code after 25 years, certain idiosyncrasies. Just as a cheap example, I write MySQL queries in PHP by defining the whole string first as a variable, $someQ = "SELECT * FROM table WHERE id=:id;", and then, rather than binding the variables one by one, I prefer to compile the query as $someZ = $conn->prepare($someQ); and then run $someZ->execute(array(":id"=>$id)).
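
For anyone who hasn't used PDO this way, a minimal sketch of the pattern (assuming $conn is a PDO connection; the table and column names are just placeholders, and the final fetch is my addition to round out the example):

    $someQ = "SELECT * FROM table WHERE id=:id;";
    $someZ = $conn->prepare($someQ);         // compile the statement first
    $someZ->execute(array(":id" => $id));    // supply the values at execute time
    $row = $someZ->fetch(PDO::FETCH_ASSOC);  // read the result as usual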

This isn't a usual way of doing things. Copilot tried to bind the vars the first time I did it. The second time, it set up the array. The third time, when I wrote $insQ as the name of an insert string and $insZ as the query that would execute it, Copilot wrote $updQ and $updZ by itself as the names of the update string and update query. For the record, I checked, and I've never named a variable $updQ or $updZ... I usually use $uQ and $uZ. It made that up. It stopped trying to bind vars and inserted them into the execute statement. It guessed which vars I wanted to bind from context almost 100% of the time. This was beyond magic; obviously it read a huge amount of unrelated code in my stack to come to those conclusions from this one small file. But no one besides me uses this naming convention.
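
To make the naming concrete, the exchange went roughly like this; the SQL, table and column names are hypothetical stand-ins, not the actual completion:

    // What I typed:
    $insQ = "INSERT INTO items (name, price) VALUES (:name, :price);";
    $insZ = $conn->prepare($insQ);
    $insZ->execute(array(":name" => $name, ":price" => $price));

    // The shape of what Copilot volunteered next, names and all:
    $updQ = "UPDATE items SET price=:price WHERE id=:id;";
    $updZ = $conn->prepare($updQ);
    $updZ->execute(array(":price" => $price, ":id" => $id));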

It was magic; scary magic.

I spent my evening shooting pool and drinking with bar friends; one's an ex-marine who works for the government and another's a guy who manages ops and software for a small company. They're lamenting that it's impossible in this country for anyone to make enough to afford an apartment. This never seemed hard to me. I just wrote code and got paid a lot for being good at it. Or I made graphics that could probably now be done by DALL-E.

It's been my opinion for a while that consumers and users of software were redundant. Maybe now we're redundant, too. But in that case, who is talking to whom? What human born today has the experience of struggling to learn something, and what would they accomplish by learning it, if a giant cloud filter could already translate their thoughts into art or code that would take them years of experience to comprehend, let alone write for themselves?

Are we the last generation to learn skills? And what happens when everyone's skill set is just telling a cloud filter what to make for them? Will that be okay...?

I feel like this is fundamentally different from the shift from Assembly to C to BASIC to Java or the fact that coders now don't think in terms of the metal. This is not "clip art" taking over the illustration space.

I feel like I've woken up halfway through a wholesale replacement of all creative industries with robots, at a point where all consumers (or measurements of consumers) have been fully replaced by robots. Having been in a niche for so long, I ignored it and thought that it would never be a real threat. I'm admitting now that it is.

Where do we go from here? How do we avoid a dead creative class spamming dead code and dead art to an already-dead consumer internet?


  👤 m-i-l Accepted Answer ✓
All the AI algorithms at the moment need to be trained on human input, e.g. Copilot is trained largely on human-written code, GPT-3's training data is about 60% Common Crawl[0], DALL-E uses 400 million images scraped from the internet[1], etc. I suspect weird things like feedback loops will start to happen if you train generative AI algorithms on the output of generative AI algorithms rather than on human input.

Worth noting that people have had these sorts of worries for over 200 years - see e.g. the Luddites[2] and the Luddite fallacy[3]. While new forms of technology do make certain forms of employment obsolete, they also introduce many new ones, e.g. who would have thought 20 years ago that there would be people making a living as "social media influencers", or writing and maintaining software like Copilot?

[0] https://en.wikipedia.org/wiki/GPT-3

[1] https://en.wikipedia.org/wiki/DALL-E

[2] https://en.wikipedia.org/wiki/Luddite

[3] https://en.wikipedia.org/wiki/Technological_unemployment#The...


👤 mihaic
There is one important thing Copilot doesn't do yet, and that's to say: "I can implement that, but is there a chance you wanted to solve a different problem here, which might be Y?"

One of the criteria I have for a professional is that they understand how their work fits into the wider context and can juggle multiple vantage points. Most developers now are amateurs who just stitch together code from npm, Stack Overflow and random GitHub repos to create unmaintainable messes. Their work might be made redundant, or, sadder still, quality might become even more irrelevant and all that matters will be how well developers can sell themselves.

Whatever might happen, solid fundamentals and knowing how to best abstract the real world with minimal complexity will still be useful in the next 10-20 years.


👤 jzellis
I've been a writer and musician my entire life, since I was a kid. I'm also a professional coder and I have a decent understanding of how things like neural net modeling work, and the intermediate concepts of both strong and weak AI.

And when you find me an AI that can do this, I'll happily concede the goddamn planet to them.

https://youtu.be/N8uk9Ql8J-A


👤 Copenjin
The creative part of programming is not writing code; it's defining and simplifying the business logic of what you need to do, and troubleshooting issues. Both require real reasoning, huge domain knowledge, gut feeling, an investigative approach, etc...

Creative jobs will be the only ones that will remain one day, but the meaning of "creative" could be redefined or expanded in the coming decades.


👤 idontwantthis
I'm so confused when people say copilot is magical, or even particularly helpful. I used it for a few days, and it would offer interfaces that sounded right, but didn't exist, or it would try to implement an entire function for me that just trailed off into nonsense.

It was best at writing comments on a function that had already been written.

There were one or two times that it guessed correctly at code I wanted.


👤 DamonHD
My broad view is that each time another "magic" tech comes along, e.g. macro assemblers, high-level compilers, high-level compilers with proof-based whole-system optimisation, ... etc ... the best human contribution just moves up the abstraction level a bit. And given the maxim that (in this case) one can write about the same number of debugged lines of code at each level, the result is more interesting and productive over time.

We're not dead/redundant yet, IMHO.


👤 orangepurple
We are already at the point where, if you want to implement something socially impactful, you need buy-in and investment from billionaires, because the relative costs of opportunity and living, and the level of taxation, are absurd. We're heading full speed into a new form of feudalism. If you want a good laugh, read about the Roman tax system and see how people had it in the old days.

👤 wruza
"Are we the last generation to learn skills?"

It would actually be nice not to have to have skills that are only tangentially or coincidentally relevant to the task. Our skills are often a thing in themselves: e.g. you may know how to write SQL, but what you really want is to run (or to help run) a business. Is writing SQL a skill? Yes. Is it relevant to what you really want? Barely. The same goes for washing brushes, setting up routers, cutting vegetables.

We drown in complexity of our own invention and are scared of the prospect of living a true life. Those who were sleeping cogs will wake up. Those who want to do SQL will do it anyway, as a hobby. But what will they do to make a living? Haha, why would we bigcorps and elites even care? You'll do something, maybe. We just have to make sure you can't riot.


👤 Kazik24
I think it's a matter of perspective. Creativity as the ability to create something new will always be relevant. At their core, creative ideas are just ideas, and something needs to translate them into the real world. Right now we learn skills to do this ourselves. In the future, AI could do it for us.

In that case, there is an information bottleneck. If you, as a single entity, have both an idea and the skills needed to translate it, the internal communication in your brain is very fast. But when a creator has an idea and describes it to an AI (or even to other humans), there is some very lossy compression going on.

Creative jobs in 10 years will probably shift to focus more on being able to communicate those ideas well to "execution units" and back, so the skill to learn would be mapping some internal "mindset" of an AI to predict outcomes.


👤 makerofthings
I have a theory. I see intelligence as existing on a scale. For any task, you need to be at or beyond a certain point on that scale. At some point on that scale lies the ability to question what you’re doing and maybe not want to do it. I think for things like coding or self-driving cars, you can get about 95% of the way without crossing that line, but to get human-good you need to move far enough along that you have sentience (sapience?), at which point you have the issue that your system might not want to write code all day for free. I don’t think you can have a system that is as good a software engineer as a human or as good a driver, without getting something alive enough to maybe have other ambitions.

👤 burntoutfire
How will Copilot understand the requirements that need to be implemented? What you described sounds like better autocomplete rather than a system that can one day replace us.

👤 Beltiras
Even if Copilot could manage the things you fear it can (it can't), you just move the bar one step further. Now you need people who can describe software in terms Copilot (or DALL-E or GPT-3 or whichever new software we make next) can build something useful from.

👤 2000UltraDeluxe
People asked the same about graphic designers when the first graphical word processors and clip-art CD-ROMs hit the shelves. Some things will change. Others won't. Both will result in new opportunities.

👤 Const-me
I’m a software engineer.

Due to the tools becoming better, I have already had to unlearn a few skills. For instance, 20 years ago we had to write things like ( x >> 4 ) instead of ( x / 16 ), or use tricky fixed-point math instead of floats. Both were performance optimizations: compilers were less than brilliant (modern ones reliably optimize integer divisions into these bitwise tricks), and CPUs were much slower (modern processors are faster at multiplying or dividing floats than integers; 20 years ago it was the opposite, by a huge margin).

It's not just compilers and hardware; other software too. Debuggers, profilers, operating systems and third-party libraries are becoming strictly better and more capable over time.

However, I would not say the job became easier.

The software being developed became way more complicated overall: more features, and more complicated features. For instance, a 2D GUI is now expected to include animated transitions, inertial scrolling, anti-aliased fonts with hinting and support for these modern colored hieroglyphs, and vector graphics instead of bitmaps for DPI scaling.

The hardware and environment are more complicated too. All CPUs have SIMD now, and with modern GPUs I'm often programming two different chips in parallel instead of just one, orchestrating the interaction between them. Modern networking and cloud stuff is remarkably tricky to use. Many products are expected to work well for a global audience, not just in US English, and i18n/L10n/g11n are generally hard. Due to the omnipresent internet, software is now expected to be secure; modern memory-safe runtimes like C# help substantially, but they aren't a silver bullet, and there are still lots of things to keep in mind.

I would expect the trend to continue in the future. Copilot helps with writing particular functions, but that's just a small part of making good software.


👤 Ekaros
There are still a lot of things to define, and a lot of work to verify the results. That work won't get much simpler or go away. We are nowhere near the point where you could just tell an AI to do something and get a ready, working product out.

👤 danaris
When AI/automation can do basic jobs—things like planting and harvesting food, building houses, checking you out at the supermarket—there's reason to believe that people may no longer be needed to do those jobs. This even extends to most business-logic type computer programming.

But it doesn't make any sense to say "AI can make art, why would we need humans to make art anymore?" Making art is part of being human, and it doesn't matter if an AI can make 65,535 amazing, beautiful paintings or musical compositions or stories: it's not making the ones that I have inside my head and want to create.

And if a human can pick up an AI and say "I want an oil painting that looks like X, with Y style, and Z details" and have it produce that, then at the very worst, AI has become another tool for artists. I guarantee you that no AI will be able to take a description like that and produce exactly what an artist wants every time; thus, at the very worst, people who want to use AI as a tool in creating art will still want to use the AI's product as a base, and tweak it to more perfectly fit their vision. (And if, for instance, the artist is disabled and unable to hold a brush, then perhaps they will use the AI as a tool to make those tweaks, as well—thus opening up the ability to paint for people who have been unable to.)


👤 nl
10 years is radically too short a timeframe for a change like this.

But longer term, if your job is taking written requirements from someone via email or Slack, generating something, and then throwing what you did back over the fence to whoever asked for it, then you are in trouble.

If your job is a dialogue with the end user, where things aren't written down because they aren't clear or are unknown, then it's going to be safe for quite a while longer.


👤 lagrange77
Just my two (oversimplified) cents: Technology has always been created to free humans from hard work. This has evolved from early primitive tools, through mechanisation, industrialisation and automation. Yes, I know, nowadays profit is the main driver of technological advancement, but this is rooted in the monetary cost of human work.

So it naturally converges to a world where humans don't do any hard work anymore.

I know this needs some huge global sociological shifts (looking at you, greed) as well, and more than 10 years, but in the end this progress and its end state are not threatening at all. And those shifts will have to happen sooner or later, since it's far from only software developers' jobs that are threatened by automation.

The 'problem' with many developers is that a lot of us actually enjoy what we do, which makes it seem less like hard work.

But look at it this way: in a world where most jobs are automated and a global system of machinery supplies earth's demand for food, you'll have plenty of time to do the things you enjoy, like learning how technology works or writing code without GitHub Copilot.


👤 onion2k
Copilot may well be able to write the code but it can't decide what code needs to be written.

Using your SQL statement as an example, writing a query might be easy for a computer, but designing the table to hold the information in a way that's queryable is not. That's where the creativity lies. That's going to be very hard for a computer, or even a non-technical person using a computer, to replace.


👤 mbgerring
It’s more likely that individual creatives will be capable of much more than they are now and art will become more ambitious as a result.

👤 cik
A similar sentiment was expressed when Eli Whitney introduced the cotton gin. It was massively disruptive, and work shifted from individuals at home to (global) collectives and factories where clothes were made. There are countless other examples, but that is by far my favourite.

The more interesting question will be the impact of globalization on salaries and competition. Logically, either the efficiency and value provided by high cost-of-living areas continue to increase (and dramatically), or lower cost-of-living areas accelerate. We're already seeing this in many parts of the world.

We should assume that salaries (for tech) in North America decrease over time, or at least increase at a slower rate than those in the rest of the world. Similarly, salaries elsewhere will increase over time. While slow, this is the ongoing truism of economic migration.


👤 sharmi
This is just the initial impression and it sure does blow you away.

That's because you are starting with something simple to test it out. As you try to use it on bigger systems, the usefulness goes down.

In fact, it is barely useful when coding in a framework like Django.

OTOH, it is quite useful for repetitive tasks like writing a downloader function or an SQL script.

When using external libraries, though, the inventiveness of Copilot can get really tricky, because it can generate lines of code for a library API that sound legit but are completely unrelated to the actual API. Copilot also often adds such innocuous-looking but wrong lines when autogenerating multiple lines of code. So it is always good to take Copilot code with a huge grain of salt.

Overall though, it is quite useful for one-off scripts. So I just treat it like an enthusiastic but highly unreliable assistant.


👤 danaris
We programmers are kind of used to being on the top of the heap. As a category, we've received extremely high compensation for a while now, and it's natural for those in such a position to feel some fear that automation will mean they're no longer able to demand a disproportionate amount of resources be allocated to them. But what we really should be doing is celebrating that we, too, may be able to join the ranks of those whom automation has liberated from having to work at something every day just to be allowed to continue to exist on this planet.

The caveat, of course, is that we need to, collectively (and not just "collectively as programmers/techies", but "collectively with all the others whose jobs may be made unnecessary by automation") fight to make sure that we do get those benefits, and they aren't taken by the people who own the companies that made the automation.

Remember, you and I have more in common with the infamous and widely-vilified homeless people on the streets of San Francisco than we do with people like Elon Musk and Jeff Bezos. Even if you're making $150k, $200k...even $1M/yr, you are closer in income to the poorest of the poor than you are to someone making even $1B/yr, let alone the tens of billions that those modern robber barons make. We need to start acting like it, and recognizing that we are better together, joined to improve humanity as a whole, than we are separately, stepping on each other in a desperate scramble to increase our personal slice of the pie.


👤 more_corn
The AI apocalypse is coming. We will replace jobs, make people redundant and automate many things. The AI utopia is possible too, though, where AI makes you fabulously better at your job, prevents errors, and makes you more productive and more clever.

Both futures will exist simultaneously.

This community gets to decide which ratio of those two futures will make up daily life. Because we are the ones who will build it.

AI tools that assist, collaborate with and work alongside a human worker make that worker more than they are. This is utopia.

AI tools that replace humans and make them redundant are dystopian.

Build the former, not the latter. We can’t stop the future from unfolding, but we can carefully and thoughtfully nudge it. We do that by choosing our work carefully.


👤 jleyank
A couple of thoughts: does AI debug? While coding takes time, I spend way more time debugging and trying to suss out code to fix it. While this can be a typo, it's more often a logic or domain error that has to be found.

Does AI consume? Most HN readers live in a consumer society, driven by the wants of people-with-money. If this disappears, what's going to pay for the toys and the ads?

Can AI innovate? I thought "AI" today is more of a pattern-matching approach than the 5th-generation/inference engines of the '80s-'90s? If so, how can it expand beyond the underlying trends in its training set?


👤 frnkng
My guess would be that we will have many more software developers than we have now. This technology reduces the cost of development, so more development will happen. Maybe much, much more. Everyone who is now underserved due to high prices will have custom software developed in-house. Why? Because it gets much cheaper, in terms of man-hours, to write software.

I don't think we will soon reach a point where all worthwhile software has already been written. It will be much more like building construction, where you tear down and rebuild after some decades to get a modernised version.


👤 davidhunter
Move up another level of abstraction. You don’t need to learn how to make a quill to write a book.

👤 anon2020dot00
“The very least you can do in your life is figure out what you hope for. And the most you can do is live inside that hope. Not admire it from a distance but live right in it, under its roof.” – Barbara Kingsolver

👤 jason0597
No I don't think so. People still pay a pretty penny to watch live music, and the human and emotional interaction with an artist just can't be matched.

👤 pSYoniK
I think there is a big problem in how people view AI. It is neither artificial nor intelligence; it is a powerful "structure", or tool, for pattern discovery. A good read might be a critique of recent advances within natural language processing. The Guardian even published an article "written by AI", but all of these things hide the fact that there isn't understanding associated with these structures. Just as a car goes from A to B faster than you would, that doesn't mean it holds any understanding of WHY you would travel, or any understanding of HOW it should choose the path, and so on.

While these are great at aiding a competent user, they are not tools for original, new creation. Someone will point me to the written article, the drawn image and so on and claim originality, but it's all derivative. There isn't real experimentation; there isn't anything as shocking and mind-bending as when jazz came into being, nothing as new as the stream-of-consciousness writing of Tristram Shandy, nothing as gut-wrenching or disturbing as Zdzisław Beksiński's work.

We are, however, living in an age where lack of imagination is rampant. We are boring and unimaginative, and we can't seem to come up with anything new. Everything we do is similar to what "AI" does: we create derivations, nothing new or original. I must agree here with some of the things Jaron Lanier highlights in his books, namely that we haven't had anything as good in a long time. That is also why my examples are from quite a few years ago.

So I'm not worried. It's not intelligence, and as far as preparing for it goes: expand your imagination. Expand your boundaries and feel more at ease being uncomfortable. These are tools, and it's up to your imagination to push the envelope and create new things with their help.

---

LM - language model. Taken from a paper I wrote about NLP:

Gary Marcus acknowledges that the system manages to achieve impressive results by providing fluent answers to previously unseen questions. It sticks to topics well and achieves surprisingly accurate behaviour (Marcus, G. 2020, para. 31). Despite this, Marcus criticizes a very important aspect of the system: despite its vast database of information, it cannot extract meaning from the sentences it presents in answer to the questions the user asks.

GPT-2 will provide different answers of varying relevance to the same question. The answers it provides could indeed be conceived as potential continuations of the sentences presented, but sometimes the meaning is completely lost as the LM does not hold information on concepts and objects.

If GPT-2 is viewed as an example of what is possible when we have access to a very large database, modern neural networks can come up with general rules even when there is no guidance or supervision. In this respect, the achievement is quite extraordinary, as GPT-2 can provide answers that are meaningful with little to no background information. The simple fact that this is possible tabula rasa is quite impressive.

However, based on the examples provided by Marcus in his article, it is clear that all we're looking at is an extremely elaborate version of ELIZA. The answers show that there is no substance to the information that is replicated by the LM; it simply doesn't understand the information it provides the user with. The phrase that made me side with the points brought forth by Marcus in his article was the quote attributed to Ilya Sutskever, co-founder of OpenAI: "If a machine like GPT-2 could have enough data and computing power to perfectly predict the next word, that would be the equivalent of understanding." (Marcus, G. 2020, para. 76)

I find the statement to be, if not incorrect, then wildly superficial, as word prediction does not equate to understanding. One of the examples provided in the article highlights this even further:

Q: A is bigger than B. B is bigger than C. Therefore A is bigger than _
A: B

Q: A is bigger than B. B is bigger than C. Therefore A is bigger than _
A: which can also become a huge hit

The two pairs above are question and answer pairs taken from Marcus, G. 2020, 'GPT-2 and the Nature of Intelligence'.


👤 Vladimof
Ideally, in 10 years, no-one will have "jobs".

👤 brudgers
Creative people can always get day jobs.

Like most of them do today.


👤 daedlanth
Don't do it; don't put up with it.

👤 ushakov
in 10 years there will be no jobs

y'all will be “gig workers”


👤 emptyfile
Nonsense.