I have shipped a few projects; I always review AI-suggested code, do daily coding practice without AI, watch YouTube videos, etc., but I still don't know if I'm striking the right balance or whether I can really call myself a programmer.
I often see people say that the solution is to just fully learn to code without AI (i.e., go "cold turkey"), which may be the best, but I wonder if the optimal path is somewhere in between, given that AI is clearly changing the game here in terms of what it means to be a programmer.
I'm curious how you have all handled this balancing act in the past few years. More concretely, what strategies do you use to both be efficient and able to ship / move quickly while ensuring you are also taking the time to really process and understand and learn what you are doing?
But here is my advice. Learning by doing with AI seems akin to copying source from one location (i.e. view source, Stack Overflow).
My tips:
- Understand all of the code in a commit before committing it (per feature/bug).
- Learn by asking AI for other ways or patterns to accomplish the same thing it suggests.
- Ask Claude Code to explain the code until you understand it.
- If code looks complex, ask if it can be simplified. Then ask why the simple solution is better.
- Tell AI that you’d like to use OOP, functional programming, etc.
One way to measure if you’re learning is to pay attention to how often you accept AI’s first suggestion versus how many times you steer it in a different direction.
It’s really endless if your mindset is to build AND learn. I don’t think you need to worry about it based on the fact you’re here asking this question.
I recognised that my weaknesses are more in understanding the mathematical fundamentals of computation, so now I’m mostly studying maths rather than coding, currently linear algebra and probability theory. Coding is the easy part, I’d say. Hopefully I get to concentrate on the study of my sworn enemy, algorithms, at some point.
I’d like to be able to do low-level and graphics/sound programming some day. Or maybe use that knowledge for some other cool stuff, if we are all replaced by robots anyway.
Versus someone or something giving you the correct answer, or even several, and you picking one. You are given what works. But you don't know why, and you don't know what doesn't work.
Learning from AI coding probably is somewhere between traditional coding and just reading about coding. I'm not sure which one it's closer to though.
However, it may not be necessary to learn to that depth now that AI coding is here. I don't really know.
That said, learning the fundamental topics can also limit your thinking if they feel difficult. It's an interesting question how to keep the naïve creativity of the beginner; that creativity can really help when building with AI, because your thinking is less constrained by how things used to be done.
Probably 10 years from now, it will be a flex if someone is building or doing stuff without using AI, just like using a manual screwdriver instead of an impact driver now, or actually going to the library to research a topic instead of googling it.
I appreciate this will be a deeply controversial statement here. As someone who's been coding for 25+ years and has some part of my identity in my ability to code, this hurts and wounds me, but it is sadly true. The skills I've built and honed have little value in this new market. This must be how musicians felt when radio, records, etc. came about. My craft has been commoditized, and it turns out nobody cared about the craft. They are happy listening to canned music in restaurants. Musicians are now like zoo animals where people pay an entry fee to see them for the novelty value. I exaggerate to illustrate the shift, but part of me fears this might be more analogous than I dare to understand.
Code is about providing value to a business, not about the lines of code themselves. Code is a means to an end.
If you want to understand coding for your own intellectual and hobbyist pursuit then please do. Generations of autistic-leaning people have found satisfaction doing so - but don't do it thinking it will remain a rewarding career.
Side note, I'm assuming you find joy in programming. If you don't, there's better ways to spend your time.
Before the AI era, I didn’t know much bash, but I was a reasonably OK programmer besides that, I think. I found that by getting AI to write me a lot of bash scripts, following along, and then making edits myself when I needed small things changed, I ended up with the ability to write bash now, and I actually came to appreciate it as a language, whereas before I thought it was confusing. YMMV.
Like anything with enough dedication you can achieve what you want.
I think this is the correct answer. Also, we technically never stop learning. There's always some new coding trick that eluded us until AI spits it out.
My 2 cents: Switch to chat mode from agent mode and have better chats about approaches to code. I'm constantly challenging AI to explain its code, including talking about pros and cons of this or that method, and even the history of why certain new features were brought to javascript for example. It's also fun to query the AI about performance optimisation, presuming we all want the least amount of cycles used for the given procedure.
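To illustrate the kind of chat that's worth having: asking why a feature was added to JavaScript often comes back to a pattern like the one below. This is a hypothetical example of my own, comparing the pre-ES2020 defensive-access idiom with optional chaining:

```javascript
// Two ways to safely read a deeply nested property.

// Pre-ES2020 style: guard every level by hand.
function getCityOld(user) {
  return user && user.address && user.address.city
    ? user.address.city
    : undefined;
}

// ES2020 optional chaining: short-circuits to undefined on null/undefined.
function getCityNew(user) {
  return user?.address?.city;
}

console.log(getCityNew({ address: { city: "Oslo" } })); // "Oslo"
console.log(getCityNew({}));                            // undefined
```

Asking the AI why `?.` was introduced, and what it costs you (e.g. silently swallowing a missing object you expected to exist), is exactly the kind of pros-and-cons chat that teaches more than accepting generated code.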
You do React + Redux or any other framework and feel like a lot of decisions have been made for you, without grasping the actual reasoning behind those decisions.
Here is what the best learners I have encountered do, and what I have been trying to implement for a year:
Learn the platform you develop for on a side project. If you develop for the web, and more on the programming side: learn JS and HTML for the web. You will encounter state management, animations, events, DOM manipulation, etc. Solve them without libraries first.
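As a sketch of what "without libraries" can look like for state management, here's a minimal subscribe/notify store in plain JS. The names (`createStore`, `setState`) are my own invention, echoing but not taken from any framework:

```javascript
// A tiny state store: hold state, notify subscribers on every change.
function createStore(initialState) {
  let state = initialState;
  const listeners = new Set();
  return {
    getState: () => state,
    setState(patch) {
      state = { ...state, ...patch };    // shallow merge of the update
      listeners.forEach((fn) => fn(state));
    },
    subscribe(fn) {
      listeners.add(fn);
      return () => listeners.delete(fn); // returns an unsubscribe handle
    },
  };
}

// Usage: a subscriber could update a DOM node here instead of logging.
const store = createStore({ count: 0 });
store.subscribe((s) => console.log("count is now", s.count));
store.setState({ count: 1 });
```

Building even this much by hand makes the reasoning behind Redux-style libraries (single source of truth, notify-on-change) much easier to grasp.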
Because of that, I learned Lisp so I could do metaprogramming, because doing it manually would take multiple human lives to create what I want before I die, even (or especially) controlling a small group of people.
We use Claude code and personally I love it. It is like the electric sawmill instead of humans cutting manually, sweating and being exhausted after half an hour of work.
After decades programming I know how to tell the AI what I want, and how to control/log/assert/test/atomize everything so it behaves.
You can use AI to teach you programming; the problem is that you need to tell it what you want, and if you are not experienced, you don't know what that is.
So do small projects and let the AI do 80% of the work, then spend the remaining 20% finishing by hand. Usually LLMs are amazing at approaching valid solutions but really bad at making something perfect. They fix one thing and destroy something else. You do that part manually.
But "learning programming" in abstract is like "learning to drill", why do you want to drill? What do you want to drill? Where do you want to drill?
You need to make it specific into specific projects, with specific timelines. "I want to play the piano" is abstract. "I want to play this version of Feather theme in 3 months" is specific.
In my view, AI tools are a sort of super-advanced interactive documentation. You can learn factual information (excluding hallucinations) by either asking or looking at the generated code and its explanations. But in the same way documentation alone was not a sufficient learning tool before, AI is not now.
What AI cannot give you, and what I suggest you learn through other resources:
- algorithmic proficiency, i.e. how to decompose your problems into smaller parts and compose a solution. You don’t necessarily need a full algorithms course (even though you can find good ones online for free), but familiarising yourself with at least some classical non-trivial algorithms (e.g. sorting or graph-related ones) is mind-changing.
- high-level design and architecture, i.e. how to design abstractions and use them to obtain a maintainable codebase when size grows. Here the best way is to look at the code of established codebases in your preferred programming language. A good writer is an avid reader. A good programmer reads a lot of other people’s code.
- how programming languages work, i.e. the different paradigms and way of thinking about programming. This lets you avoid fixing on a single one and lets you pick the right tool for each task. I suggest learning both strongly-typed and dynamic languages, to get the feeling of their pros and cons.
That’s an incomplete list, off the top of my head.
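To make the first point concrete, merge sort is a classic example of decompose-and-compose: sort each half, then merge the sorted halves. A minimal sketch:

```javascript
// Merge sort: split the problem, recursively solve halves, compose via merge.
function mergeSort(arr) {
  if (arr.length <= 1) return arr;            // base case: already sorted
  const mid = Math.floor(arr.length / 2);
  return merge(mergeSort(arr.slice(0, mid)),  // sort left half
               mergeSort(arr.slice(mid)));    // sort right half
}

// Merge two sorted arrays into one sorted array.
function merge(left, right) {
  const out = [];
  let i = 0, j = 0;
  while (i < left.length && j < right.length) {
    out.push(left[i] <= right[j] ? left[i++] : right[j++]);
  }
  return out.concat(left.slice(i), right.slice(j)); // append leftovers
}

console.log(mergeSort([5, 2, 8, 1, 9, 3])); // [1, 2, 3, 5, 8, 9]
```

Working through why the merge step restores order, and why the whole thing runs in O(n log n), is exactly the kind of mind-changing exercise meant above.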
You can still use AI as a tool in learning these things, but good old books and online resources (like Coursera) worked really well for decades and are not obsolete at all.
And the last thing is the most important: curiosity about how things work and about how to make them better!
I try to write whatever code, let’s say a function, by hand. It probably won’t work, so I have the LLM Socratically ask me questions about why it’s not working, then I try to fix it by hand, and I keep repeating this until the function works and does what I want.
If there is some concept that's more difficult to understand, I write a small article to try to explain it, and have the LLM guide me through my trial and error of writing, until the paragraph or essay (or whatever amount of writing is needed to explain what I’m trying to learn) is complete.
I bust my ass getting software written by hand using The Book and the API reference. Then I paste it into an LLM and ask it to review it. I steal the bits I like. The struggle is where we learn, after all.
I also bounce ideas off LLMs. I tell it of a few approaches I was considering and ask it to compare and contrast.
And I ask it to teach me about concepts. I tell it what my conception is, and ask it to help me better understand it. I had a big back and forth about Rust's autoderef this morning. Very informative.
I very, very rarely ask it to code things outright, preferring to have it send me to the API docs. Then I ask it more questions if I'm confused.
When learning, I use LLMs a lot. I just try to do it to maximize my knowledge gain instead of maximizing output.
I'm of the belief that LLMs are multipliers of skill. If your base skill is zero, well, the product isn't great. But if you possess skill level 100, then you can really cook.
Put more bluntly, a person with excellent base coding skills and great LLM skills will always significantly outperform someone with low base coding skills and great LLM skills.
If I were writing code for a living, I'd have it generate code for me like crazy. But I'd direct it architecturally and I'd use my skills to verify correctness. But when learning something, I think it's better to use it differently.
IMHO. :)
I don't see why, even with AI, you won't need a solid understanding of the parts of computing that programming is built on top of.
Even if you're prompting, you need to know what to prompt for. How are you going to ask it to make something faster if you don't know it can be faster, or avoid wasting time trying to make something faster that can't be?
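A toy illustration of that point (example mine, not from the thread): both functions below detect duplicates, but you only know to ask for the second one if you know a Set turns the O(n²) pairwise scan into O(n):

```javascript
// Quadratic: compares every pair of elements.
function hasDuplicateSlow(items) {
  for (let i = 0; i < items.length; i++) {
    for (let j = i + 1; j < items.length; j++) {
      if (items[i] === items[j]) return true;
    }
  }
  return false;
}

// Linear: a Set remembers what we've already seen.
function hasDuplicateFast(items) {
  const seen = new Set();
  for (const item of items) {
    if (seen.has(item)) return true;
    seen.add(item);
  }
  return false;
}
```

Conversely, knowing that comparison-based sorting can't beat O(n log n) tells you when to stop prompting for a "faster" sort.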
Go through something like cs classes from MIT and do the work.
My 2 cents: read the actual docs; these days docs are exceptional. Rustlang offers a full-fledged book as part of its docs. Back when Go was launched, its docs were inadequate, so I started to write a short GitHub-based "book" for newbies, and it did well (judging by the GitHub stars).
Learn without AI, be an expert. And then use AI to write the code.
Using AI to learn is honestly delusional. You don't learn when AI writes the code for you. Also, for a new language it'll take some time to get used to the syntax — hence writing by hand until you become an expert.
The goal of writing software for your job is to write it within that sprint.
But for a hobby, at least, you can take your time and learn.
Although I'd recommend getting into depth, without AI, with whatever tools you are going to use at your job, because who knows, maybe your next company won't allow you to use AI!
A career in software development 30+ years later, and I'm back learning from day one again, because LLMs are profoundly changing how we do this.
Example: two years ago, I built a website as an MVP to test a hypothesis about our customers. It took me 6 weeks, didn't look good, but worked and we used it to discover stuff about our customers. This week I've vibe-coded a much better version of that MVP in an afternoon. This is revolutionary for the industry.
The state of the art on LLM coding is changing fast and by orders of magnitude. They still get things wrong and screw up, but a lot less than they did a year ago. I fully expect that in a couple of years [0] writing code by hand will be completely archaic.
So, what does this mean for people learning to code?
Firstly, that hand-rolling code will become artisanal, a hobby. Hand-coding a program will become like hand-carving a spoon; the point is not to produce the best spoon in the most efficient manner, but to create Art.
Secondly, that commercial coding as a career will revolve around collecting business requirements, translating them into prompts, and orchestrating LLM code engines. This will be a cross between "Product Manager", "Project Manager", and "Solution Architect" in current role definitions.
Thirdly, that for at least the next few years, understanding how code actually works and how to read it will be an advantage in that commercial career space. And then it'll become a disadvantage: unnecessary and potentially distracting. Soft social skills will be the primary factor in career success for this profession in the future.
The industry has been through similar changes before. Most obviously, the invention of compilers. Pre-compiler, programmers wrote machine code, and had to manage every single part of the operation of the computer themselves. Need a value from memory for an operation? You had to know where it was stored, clear a register to receive it, fetch it, and work out where to store the result. Post-compiler, the compiler managed all of that, and we were able to move to high-level languages where the actual operation of the computer was a couple of abstraction layers below where we're thinking. We no longer need to know the actual physical memory address of every value in our program. Or even manage memory allocation at all. The compiler does that.
And yes, there was a generation of programmers who hated this, and considered it to be "not real programming". They said the compilers would write worse, less efficient, programs. And for years they were right.
So, to answer your question:
> AI is clearly changing the game here in terms of what it means to be a programmer.
> More concretely, what strategies do you use to both be efficient and able to ship / move quickly while ensuring you are also taking the time to really process and understand and learn what you are doing?
Embrace the change. Learn to manage an LLM, and never touch the code. Just like you're not writing machine code - you're writing a high-level language and the compiler writes the machine code - the future is not going to be writing code yourself.
Good luck with it :)
[0] There are lots of questions around the finances and sustainability of the entire LLM industry. I'm assuming that nothing bad happens and the current momentum is maintained for those couple of years. That may not be the case.
Now, step back and do some serious introspection. First, look around. Odds are you are surrounded by imposters, irrespective of AI. Identify who the imposters are and isolate yourself from them. Secondly, solve hard problems. Confidence comes from learning. If you cannot perform without AI then your learning is low.
At the end of the day you provide solutions to problems. If you cannot do that almost instantly in your mind, forming a vision, you are less valuable than someone who can.
A carpenter uses tools to shape wood into furniture. Each tool in the toolbox has different uses, but some are more efficient than others. For example, a table saw lets the carpenter cut more quickly and accurately than a hand saw. Nobody would say "that's not a real carpenter, he cheats by using a table saw".
A carpenter can also have an assistant (and I'm specifically not talking about an apprentice) who can help with certain tasks. The assistant might be trained by someone else and know how to perform complex tasks. When the carpenter builds something with the assistant's help, is that considered a team effort? Does the carpenter need to take responsibility for the assistant's mistakes, or the trainer? Who gets credit for the work?
I don't have answers for these questions, but I think the parallel to software is straightforward: we have a new tool (assistant) that's available, and we're trying to use it effectively. Perhaps it's going to replace some of our older tools, and that's a good thing! Some of us will be lazy and offload everything to it, and that's bad.
I do think that learning the fundamentals is as necessary as ever, and AI is a great tool for that as well.
(Disclaimer: I've been programming for about 15 years, and haven't integrated AI into my workflow yet.)
But feel free to call yourself a programmer, I'm not going to gatekeep it :)
You can learn both of these quickly to a deep level with only 2 books. For the calculus of computation, read "Structure and Interpretation of Computer Programs" (SICP) by Abelson and Sussman. It is available for free.[0] If you understand all of this book, you understand all of the fundamentals of computer science. Every program you write from now on will be understandable to you, with enough persistence. But most importantly, you will be able to think in computer programs by second nature, and communicate in this language. And when you talk to AIs in this language, they become exceedingly precise and powerful, because you lose the ambiguity in how you are conceptualizing the program.
For the engineering of computation, read "The Elements of Computing Systems" by Nisan and Schocken.[1] An abridged version of this book is available for free in the form of the Nand2Tetris course. In this course you will start with an eminently simple digital construct and use it to build, step by step, a full working computer that can run Tetris. You could even write a Lisp from SICP on this computer, and pretty easily too, as you'll see in SICP itself! Once you have completed both books, you'll have met in the middle between abstract computer science and concrete computer science: coding.
Just as in your math class you can see that one side of a right triangle is always longer than the others, but cannot understand how or why, explain it, or work with it until you comprehend some simple theorems and functions, so too you cannot truly compose computer programs until you can speak the language of computer science. It used to be that you could make a career by copying code you saw online, patching bits and pieces together to create basically working code. But that era is over. AI reads and writes and searches millions of times faster than you. But still only humans are capable of new compositions. And in order to create these new compositions, you have to be able to speak a mutual language that you and the AI can understand. That language is computer science, and it hasn't changed since time began, and it won't change in 10, 100, or 1,000 years from now, when AI is capable of doing anything and everything better than we can. So if you want to stop struggling and start creating new and exciting things with computers, read these 2 books!
[0] https://mitp-content-server.mit.edu/books/content/sectbyfn/b...
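For a taste of the SICP style of "thinking in programs" (the book uses Scheme; this sketch translates its `sum` procedure from section 1.3 into JS): you abstract a pattern of computation into a higher-order procedure, then compose specific programs from it:

```javascript
// SICP's "sum" abstraction: accumulate term(k) for k = a, next(a), ..., up to b.
function sum(term, a, next, b) {
  return a > b ? 0 : term(a) + sum(term, next(a), next, b);
}

const identity = (x) => x;
const inc = (x) => x + 1;
const cube = (x) => x * x * x;

// Specific sums fall out as one-liners built from the abstraction.
const sumIntegers = (a, b) => sum(identity, a, inc, b);
const sumCubes = (a, b) => sum(cube, a, inc, b);

console.log(sumIntegers(1, 10)); // 55
console.log(sumCubes(1, 3));     // 36
```

Once this way of composing procedures becomes second nature, you can describe programs to an AI at the level of abstractions rather than lines of code.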
The AI tools are incredibly helpful (and people who say otherwise are disingenuous), but if you don't already roughly know how you want to implement something and you let AI take the wheel, you aren't going to learn anything. From a learning standpoint, I feel the best approach is to plan and write your code without using AI at all, then maybe use it as a critic to give feedback on what you've done.
If someone were to ask me how to learn to code today, I don't know what I would say. My gut is telling me to say go into farming or marketing, versus going ALL-IN. Down the rabbit hole you go.
>I often see people say that the solution is to just fully learn to code without AI (i.e., go "cold turkey"), which may be the best, but I wonder if the optimal path is somewhere in between given that AI is clearly changing the game here in terms of what it means to be a programmer.
You need to know the fundamentals. You probably still need to learn to code manually. Thonny is the best beginner IDE.
You will want to switch over to AI pretty quick though. As a beginner, you probably want GUI, so Antigravity, Codex, or Claude Code.
>I have shipped a few projects; I always review AI-suggested code, do daily coding practice without AI, watch YouTube videos, etc., but still don't know if I'm striking the right balance or whether I can really call myself a programmer.
There is no license; you can call yourself that whenever you please. Here's a relevant essay, though: https://paulgraham.com/identity.html
>I'm curious how you have all handled this balancing act in the past few years. More concretely, what strategies do you use to both be efficient and able to ship / move quickly while ensuring you are also taking the time to really process and understand and learn what you are doing?
Once you really get your AI coding skill up, know your model, and have a rock-solid architecture/design, you tend to be more concerned with the model accidentally deleting 1000 lines.
But then openclaw bursts onto the scene, and we don't need to know how to code anymore? Like, I give it a task, ask if it has a skill, it says no. It goes to Gemini CLI and creates its own skill. It's now giving me updates on changes. I didn't do anything.