I find that I'll take a first glance at the code, notice the inclusion of some more advanced concept or technique, and instinctively assume the code is the product of a more seasoned developer. Then, quite often, I'll come across something in the code that doesn't make sense. My initial reaction is to assume the author is using some new technique or approach I'm not familiar with. I'll dig and search and turn myself inside out trying to figure out what I'm missing... only to eventually conclude that I haven't actually missed anything.
No, the reason the code didn't make sense to me is that the code just... doesn't make sense. It was not the product of some seasoned developer, but of a junior developer who has simply made a series of run-of-the-mill junior-developer mistakes. In the past I would have been able to pick these sorts of mistakes out from a mile away. Now I find I first have to see past the patina of pseudo-sophisticated LLM-generated code before I can start the review process.
I have used some of those tools myself, and whenever I've asked an AI tool for help with code, I have received junk again and again: code that looks plausible but doesn't compile, uses APIs or libraries that don't exist, and so on. In the end, it just wasted my time.
It feels like more of a fight with the AI and less time spent thinking about the bigger picture of the changes being made (e.g. the systems involved, business considerations, documentation, etc.).
AI will create the world’s largest code mess, while also running the entire world, and everything will be slightly broken, like in the movie Brazil.
Just reject it and ask for an explanation, OR pair with the coder and have them explain the code.
If the reviewer can’t understand it without an explanation, the rest of the team can’t understand it either, unless they git blame and then ask the coder, assuming they're still at the company!
The “dump 1,000 lines of previously unseen and undiscussed code on a reviewer” method is an antipattern.
Either the reviewer should be heavily involved throughout, or the work should be broken into smaller, well-explained chunks with design documentation to point to.
This saves everyone's time in the long run.
First, let me blame microservices. I heard your eyes roll, but just wait a minute.
Second, there are all these new frameworks that sit on top of a language and practically make it a new language. Libraries like pydantic and sqlalchemy, and anything else that parses, checks, and moves data around, are their own kind of magic once you get past a trivial data model. You have to learn how the whole modeling system works, then try to apply it to what's actually going on. In the end, I'm generally not sure the code is any better, and it's certainly less readable than a plain function that parses your data (see the sketch below). The frameworks are more complex, though, and have all sorts of features: features you need to learn whenever someone new brings one into your code.
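To make the contrast concrete, here is a minimal sketch, assuming pydantic v2; the User model and its fields are hypothetical stand-ins, and the plain function below performs the same checks without the framework:

```python
# Minimal sketch of the two styles described above.
# Assumes pydantic v2; the User model and its fields are hypothetical.
from pydantic import BaseModel, ValidationError


# Framework style: the validation logic lives inside pydantic's machinery.
class User(BaseModel):
    name: str
    age: int


try:
    User(name="Ada", age="not a number")  # pydantic rejects the bad age
except ValidationError as exc:
    print(exc)


# Plain-function style: the same checks, spelled out where a reviewer can see them.
def parse_user(data: dict) -> dict:
    if not isinstance(data.get("name"), str):
        raise ValueError("name must be a string")
    if not isinstance(data.get("age"), int):
        raise ValueError("age must be an integer")
    return {"name": data["name"], "age": data["age"]}
```

The pydantic version is shorter, but the actual checking is hidden inside the framework; the plain function is more verbose, yet a reviewer can see exactly what it accepts without knowing pydantic at all.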
But AIs can easily generate boilerplate code to handle things that even the author doesn't understand. This is not good.
So now you've got a "simple" microservice that uses a bunch of libraries and things layered on top of a language. For Python, that's pydantic and mypy and numpy, all sorts of things that are very good in themselves, but that make it hard to understand what's going on in a code review unless you're also familiar with those frameworks.
To go back to blaming microservices: now that everyone wants a bunch of little containers running around, even a simple ecosystem can use all sorts of languages, frameworks, build systems, and so on. You have to learn all of them for every service.
To contrast this with how it used to be, in my opinion: one product had one ecosystem. One product would be more monolithic, with fewer moving pieces. Once you learned those few pieces, you could easily review code.
While many codebases may be smaller because they lean on off-the-shelf libraries, that smallness hides problems. People often don't understand how the frameworks work deep down, or where the problems may lurk, especially around performance and scaling.
But this small code doesn't make nearly as much sense as even a larger amount of plain-language code where everything is in front of you to review.
In the end, you can over-engineer things in a very hidden way.
Most young devs have completely NO knowledge of how a system works, no idea how to deploy, and so on. They are "born on someone else's APIs", meaning some cloud vendor's, and they don't even want to know. The results are big, fragile monsters so buggy that no one looks at the logs, at least not until something breaks.
I'm not really -good- at spotting it, but when it gets bad enough that I call something out as LLM-generated, the explanation for the false positives has typically been 'someone not up to the task'.
Most experienced developers know that unnecessary complexity is the absolute worst enemy. You literally cannot overstate the harms of unnecessary complexity... It is absolutely and inherently harmful. But to the junior or mid-level developer, complexity is a sign of intelligence, as is the ability to churn out thousands of lines of code per day.
On my own projects, I never allow this kind of complexity, but when you're working for a company, they don't like it if you point out that there is a complexity problem. They'll think that maybe you're just not smart enough and are jealous of, or trying to demoralize, the 'genius junior dev' who is churning out 2k lines per day! A truly Kafkaesque situation.
I honestly didn't know what to do in my last job. I was doing a lot of PR reviews, but I just let the 'most productive' junior dev continue adding complexity, because that was what the founder wanted me to do. Every time I tried to talk about reducing complexity, I would get brushed off, so I just stopped trying.
It's quite a ridiculous situation, actually. All the code I write is high quality and highly maintainable; everyone is able to easily add features and make changes to it. But when I work on other people's ugly, over-engineered code, it's a struggle.
So from the outside, it looks like I'm slow when working with other people's code, while other developers look fast and adaptable because they can easily work with mine... So basically, I'm the one who looks like a low performer.
The winning strategy is clearly to write over-engineered code, then socially engineer the situation so that you only end up working on your own code or other people's high-quality, maintainable code (if there is such a thing at your company, because people who produce such code tend to get laid off)... while at the same time ensnaring your colleagues into working on your complex code so that they end up looking unproductive relative to you... because a huge amount of code plus visible features is how directors decide on promotions and layoffs... It's always about picking the low-hanging fruit, sprinkling sugar on top, and then personally delivering it to the boss on a silver platter: easy and visible.
Much of software engineering nowadays is social engineering: ensuring that you are only assigned to decent-quality, maintainable, and highly visible projects, always hitchhiking on top of the work produced by good developers and dumping your own low-quality output on others to entrap them. Sigh... Then, after some time, these big companies end up with ridiculously low productivity expectations... which is great for social-scheming low performers who are used to this game of racing to the bottom.
Also, people like me who can see what's going on are never promoted to positions where we have the final say on such things. It feels like the entire tech economy is just a massive bullshit-jobs factory at this stage: all about pretending to be highly productive while in fact being counterproductive.
Pro-tip: if your normal writing includes typos and sloppy formatting, then be careful pasting a block of three bullet points where each point is a title-cased bold string, followed by exactly two sentences, with perfect grammar and no typos.