The more specific the better
In order to become great at that, you have to make enough mistakes that affect other people to understand what helps them and what hinders them. You have to realize that your code isn't just implementation; it's human-oriented communication, and tools are part of the puzzle.
The more your work hinders others, and the more you are forced to deal with the consequences, the more you will learn what actually helps them.
You can also learn some of this by using and suffering from other people's bad work, as long as you don't allow it to make you complacent.
Finally, you need to have humility. Nobody who thinks they're great is going to be truly great. You need to recognize how you suck and how to suck less. "Take the log out of your own eye so that you can see clearly to take the speck of dust out of your brother's eye."
As others have said, as you gain more experience, your greatest impact will shift from your individual contribution to your ability to grow and guide teams of people.
To be able to do that effectively, you've probably had a variety of experience. You've seen things done well, and been involved in enough mistakes to understand where the dragons lie. You've been responsible for greenfield delivery and for modeling multiple business domains; you've worked on codebases that resemble modular monoliths as well as microservices, and you can confidently explain the potential benefits and pitfalls of each.

You've read a lot and kept up with changes in the field. You read other people's code, especially well-written open source projects, and you probably contribute to some open source too. You actively solicit feedback on your own code and don't take criticism personally. You've seen testing done well, and testing done badly. You've studied different programming paradigms and languages and become experienced enough in each to make good decisions about where to use them (and where not to). You've worked with many different frameworks over tens of years and can see the differences, but also the commonalities, between them. You've learned to decouple your core code as much as possible from things that are subject to change on a whim.

Your curiosity over your career means you have picked up adjacent knowledge; business domain knowledge is particularly valuable, but so are things like an understanding of the full stack - networking, security, databases, infrastructure. You're as comfortable digging through a Wireshark capture or tracing through system code as you are facilitating business/tech workshops to discover bounded contexts.
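That decoupling point can be sketched in a few lines. This is a hypothetical example (the names `PaymentGateway`, `checkout`, and `AlwaysApproves` are invented for illustration, not from any specific framework): the core logic owns a small interface, and vendor or framework code adapts to it.

```python
# Hypothetical sketch: decoupling core logic from a volatile dependency
# (here, an imaginary payment provider) behind an interface the core owns.
from abc import ABC, abstractmethod


class PaymentGateway(ABC):
    """Port owned by the core; vendor SDKs get wrapped to fit this."""

    @abstractmethod
    def charge(self, cents: int) -> bool: ...


def checkout(total_cents: int, gateway: PaymentGateway) -> str:
    # Core business rule: knows nothing about any particular vendor.
    if total_cents <= 0:
        return "nothing to charge"
    return "paid" if gateway.charge(total_cents) else "declined"


class AlwaysApproves(PaymentGateway):
    """Trivial adapter; swapping vendors means a new adapter, not a new checkout()."""

    def charge(self, cents: int) -> bool:
        return True
```

When the vendor changes its API on a whim, only the adapter changes; `checkout` and its tests stay put.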
Eventually you become the go-to person, the one everyone wants to bounce ideas off. The no-nonsense person who confidently knows how to get stuff done. And then you start growing and empowering others and realise that's where the true 10x multiplier comes from.
That's been my experience anyway. YMMV.
As I've worked with more senior engineers who have done this over and over again, I've noticed that the raw programming part isn't any different. I still know all the same things. But they draw their most valuable ideas from experience. Their foresight to lean towards a specific implementation, because they've seen where the alternative falls short in the past, ends up being a massive time and money saver.
Dealing with tricky problems has provided the largest growth for me. I worked at a scrappy startup where I wrote a ton of CRUD endpoints and got to do something interesting here and there. You don't learn much writing the same code over and over once the basics are in place. I've found that I've learned the most from problems that are intimidating at first.
It's honestly comforting to know I can write similar code to more experienced engineers but I'm lacking their foresight and experience.
I've discovered that the usual buckets used to categorize tests (unit, integration, etc.) can mislead people into writing tests that make refactors painful or impossible. If you test each "unit" in isolation, you limit your ability to change how the units interact (even if the user-facing behavior isn't changing at all). You end up needing to change or rewrite the tests for each "unit".
Instead of units, I think about supported interfaces and tricky dependencies. Ideally, I'd test using the user-facing interface. That way, the test only changes if the user-facing (supported) behavior changes. Then I swap out any tricky dependencies (mainly slow/nondeterministic operations) for fakes/stubs/mocks. In the end, the tests look like "integration" tests, but they fit into your CI pipeline in the same place unit tests would.
This testing strategy makes me much faster and more effective. I can start writing tests before I've worked out how I'll organize the components. And if I change my mind halfway through, I don't need to rewrite any tests. If done right, those tests can survive years of refactoring with little maintenance.
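A minimal sketch of this strategy, with invented names (`GreetingService`, `Clock`, `FakeClock` are mine, purely for illustration): the test exercises only the supported interface, and the one tricky (nondeterministic) dependency is swapped for a deterministic fake.

```python
import datetime


class Clock:
    """Real dependency: nondeterministic, so tests will swap it out."""

    def now_hour(self) -> int:
        return datetime.datetime.now().hour


class GreetingService:
    """The supported, user-facing interface. Tests target only this."""

    def __init__(self, clock: Clock):
        self._clock = clock  # tricky dependency injected so tests can replace it

    def greet(self, name: str) -> str:
        # Internal helpers/modules behind this method can be refactored
        # freely; tests only pin down this observable behavior.
        period = "morning" if self._clock.now_hour() < 12 else "afternoon"
        return f"Good {period}, {name}!"


class FakeClock(Clock):
    """Deterministic stand-in for the nondeterministic dependency."""

    def __init__(self, hour: int):
        self._hour = hour

    def now_hour(self) -> int:
        return self._hour
```

A test like `GreetingService(FakeClock(9)).greet("Ada")` asserts on user-facing output only, so reorganizing the internals of `greet` never forces a test rewrite.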
There's more to good testing and being a great programmer than this, of course. But this is the lesson that has had the biggest impact on my "greatness" at programming.
What does this mean?
Don’t just do: write, share, give others credit, coach, etc.
You learn go for that.
Sorry.
I think a great way to improve your codebase is to write unit tests and integration tests, and to use languages with richer type systems than, say, JavaScript or PHP.
If the language's type system is expressive enough, the type checker effectively acts as a satisfiability (SAT) solver over your constraints, and a degree of assumed safety/security comes with that.
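One common way richer types pay off is "parse, don't validate": construct a type that can only hold valid data, so downstream code never re-checks it. A rough Python illustration follows (names like `NonEmptyName` and `parse_name` are mine; in a statically typed language the compiler would reject misuse before the program even runs, which is the original point):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class NonEmptyName:
    """Carries the invariant: this value was validated at construction."""

    value: str


def parse_name(raw: str) -> NonEmptyName:
    # Validation happens once, at the boundary; everything downstream
    # can assume the invariant instead of re-checking strings everywhere.
    stripped = raw.strip()
    if not stripped:
        raise ValueError("name must be non-empty")
    return NonEmptyName(stripped)


def greet(name: NonEmptyName) -> str:
    # No defensive checks needed: the type carries the guarantee.
    return f"Hello, {name.value}"
```

In a language like TypeScript or Haskell, passing a plain string where `NonEmptyName` is expected is a compile error; Python only approximates this, which is exactly why the commenter prefers languages with more types.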
Also, do capture-the-flag events online (picoCTF is a great start, for example); they teach you a lot about naive assumptions and programming mistakes, the reasons _why_ you should never use C++ for anything critical, and why memory ownership always fails when humans don't agree on conventions.
Boundless humility, boundless honesty with yourself and your peers, boundless curiosity, and of course a passion to find programming nirvana.