Is there really only joy in programming stuff if you think what you're doing is valuable to others?
I thought it was the opposite: because I don't need to be reliable with a side project, I can take the time to do things in a slow or impractical way, as tidily as I like, or however I want to. -- It's a chance to play with whatever.
I'm finding it quite fun to play around with non-LLM stuff.
I think it's also quite fun to play around with LLM related stuff, and to see what it can do, and what it can't do.
The other thing is that LLMs are not an end but a means to an end. For instance, I wanted to help my daughter learn typing. I forked a typing tutor, ripped out the boring dictionary-based challenge generator in the backend, and wired it up to a LLaMA model and Stable Diffusion so it generates Minecraft stories with illustrations, using an LLM and an SD LoRA I created for her to practice with (there’s a lot of Minecraft fan fic out there, and she loves Minecraft books). It was a lot of fun, she learned typing, and I learned enough about LLMs and image generation to stop fearing them as tools and realize they’re pieces in a more powerful toolbox to be harnessed toward our own ends. This, btw, is why open source AI is critical.
Projects with good UX will always be needed, start hacking!
I think they are going to be a really useful tool alongside many other really useful tools which already exist. I think that they will have an impact on many areas where correctness is not very important. I don’t think they will make everything else obsolete, and this line of thinking seems like some kind of weird utopian thought pattern bug in humans.
So if you want to help your anxiety, just become extremely skeptical of LLMs! Practice rolling your eyes, etc.
However, I don't think that is a reason to not build - if anything it is a reason to build faster and just believe that you will be able to adapt to whatever happens in the future. And on a personal note, a part of me envies people who aren't actively working on projects using LLMs -- you guys are the ones who are more likely to stumble across unique problems to solve with AI that most of us just didn't see.
LLMs make me question whether the goal I am moving 'to' will even be mine by the time I get there.
Nobody likes working on a sandcastle and discovering Steve next door built the exact same sandcastle as you, halfway through your build.
It has nothing to do with money or praise. It remains to be seen whether LLMs can build ALL sandcastles before I even finish mine... that is the anxiety.
The upside of LLMs is that they should make software black boxes much easier to make.
LLMs burst into more people's consciousness last year, but so did Stable Diffusion, so I have been finding neural networks in general more interesting. Once I get a better grasp of backpropagation etc. I'll see if I want to move on to learning about transformers and LLMs, or latent diffusion models, or both, or maybe some other type of neural network.
I just got a binary counter working... it's that basic right now.
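For anyone curious what that kind of from-scratch exercise looks like, here is a minimal sketch (my own illustration, not the commenter's actual code): a two-bit "binary counter" learned by a tiny MLP with hand-written backpropagation, mapping each 2-bit number to that number plus one (mod 4).

```python
# Toy "binary counter": a tiny MLP learns n -> (n + 1) mod 4 on 2-bit
# numbers, trained with backpropagation written out by hand.
import numpy as np

rng = np.random.default_rng(0)

# Inputs: all 2-bit numbers (high bit, low bit); targets: incremented mod 4.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0, 1], [1, 0], [1, 1], [0, 0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A hidden layer is needed: the high output bit is XOR of the input bits,
# which no single linear layer can represent.
W1 = rng.normal(size=(2, 8)); B1 = np.zeros(8)
W2 = rng.normal(size=(8, 2)); B2 = np.zeros(2)

lr = 2.0
for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + B1)
    out = sigmoid(h @ W2 + B2)
    # Backward pass: mean-squared-error loss, chain rule by hand.
    d_out = (out - Y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; B2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   B1 -= lr * d_h.sum(axis=0)

loss = float(((out - Y) ** 2).mean())
print("final loss:", loss)
print("predictions:", out.round().astype(int).tolist())
```

Nothing here beyond numpy matrix products, which is the point of the exercise: once the gradients are written out by hand, "backpropagation" stops being a magic word.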
Well for me, all of my side projects are "worthless". Fun, but worthless. I guess that makes me worry less, although burnout usually gets to me before AI anxiety gets a chance.
So what if a machine can write a novel? I don't even want to read it.
Human intention is what gives meaning and value to my life.
The things that I want to build need me to make them. Why would I wait for a machine to do it later without my imperfect human perspective?
Dive into more exotic languages. Maybe LLMs will make simple Python scripting obsolete, but they will be awful at writing, say, OpenCL/Vulkan kernels, engine-specific scripts, SIMD intrinsics, Triton code, and the like for some time.
I worked on Tanimoto similarity today. It was cool: https://www.featurebase.com/blog/tanimoto-similarity-in-feat...
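For readers unfamiliar with it, Tanimoto similarity of two feature sets is just the size of their intersection over the size of their union (the same measure is often called the Jaccard index). A minimal sketch, unrelated to the FeatureBase implementation linked above:

```python
def tanimoto(a: set, b: set) -> float:
    """Tanimoto (Jaccard) similarity of two feature sets:
    |intersection| / |union|, ranging from 0.0 (disjoint) to 1.0 (equal)."""
    if not a and not b:
        return 1.0  # convention: two empty sets are identical
    return len(a & b) / len(a | b)

# Example: the sets share 2 of 4 distinct features.
print(tanimoto({1, 2, 3}, {2, 3, 4}))  # 0.5
```

In practice the sets are usually bitmaps, so the intersection and union reduce to AND/OR plus a popcount, which is what makes the metric cheap at scale.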
https://tildes.net/~comp/16ab/what_programming_technical_pro...
I'm going to have to do a real write up on it thanks to the increasing interest. Also I am making things like
https://mastodon.social/@UP8/110542853660777197
and am thinking now about how to make 3D worlds around them. To get ready for Apple Vision development I went on eBay and got myself a HoloLens 1! I've also got people trying to get me to start another project I was talking about six months ago. So I'd better quit playing Elden Ring and get to work.
LLMs aren't useful. They don't solve problems. They don't substitute for real software. They don't threaten our jobs. They are absolutely irrelevant to my life. Why do you see them as relevant to yours?