The thing is that LLMs are not moral subjects; they don't feel bad the way you do, or the way a dog or a horse does when it lets somebody down. I worked for a company developing prototypical foundation models circa 2018, and one of the reasons that I didn't invent ChatGPT is that I wouldn't have given a system credit for making lucky guesses.
Every day, the AI boosters have a slot machine to sell you, and you've fallen for it.
"Garbage In, Garbage Out"
"You get out of it what you put into it."
It isn't returning code, because it doesn't know what "code" is. It's returning language, essentially "code-shaped text." It works as well as it does only because the model is trained on examples of existing working code supplied by humans, so whatever it returns is likely to be mostly correct, at least for common cases where a high-probability match exists.
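To make the "code-shaped text" point concrete, here is a minimal sketch of what generation actually is at each step: sampling the next token from a learned probability distribution. The vocabulary and probabilities below are invented purely for illustration (no real model's weights); the point is that nothing in this loop knows whether the output compiles.

```python
import random

# Toy next-token table: maps the last token to candidate continuations
# with probabilities. A real model estimates these numbers with billions
# of learned parameters; these values are hand-picked for illustration.
NEXT_TOKEN = {
    "def":    [("add(", 0.6), ("main(", 0.3), ("foo(", 0.1)],
    "add(":   [("a, b):", 0.8), ("x):", 0.2)],
    "a, b):": [("return a + b", 0.7), ("return a - b", 0.3)],
}

def generate(context: str, max_steps: int = 3) -> str:
    """Emit tokens by repeatedly sampling from the distribution.

    The loop has no concept of syntax or correctness; it just picks a
    likely continuation. That usually yields working code because working
    code dominated the training data. But "return a - b" is sampled 30%
    of the time here, and nothing in the loop flags it as a bug.
    """
    out = [context]
    for _ in range(max_steps):
        candidates = NEXT_TOKEN.get(out[-1])
        if candidates is None:
            break
        tokens, probs = zip(*candidates)
        out.append(random.choices(tokens, weights=probs, k=1)[0])
    return " ".join(out)

print(generate("def"))  # e.g. "def add( a, b): return a + b"
```

High-probability continuations coincide with correct code only insofar as the training corpus was correct; the sampler itself is indifferent.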
You can prove this by asking any of the super-smart LLMs that have access to 'current' data (search, whatever) which US President has bombed the most countries in their two terms. They will claim it was Obama, or that they cannot determine the answer because it's "complicated." The truth is, the USG and its technocrats instruct and train these bots to lie in support of the state agenda.
These bots will even claim the true answer is misinformation after you force them to the factually correct one. Just like the Wizard of Oz, it's just a sad little man pulling the strings of a terrifying facade.