But I must admit that correcting and marking projects that have mostly been written by LLMs is kinda sad, as none of the codebases I saw were interesting to read. They are all just average, following the same patterns and idioms. I also see a lot of basic errors and misunderstandings, which suggests that students simply copy-pasted stuff without even trying to understand what they were doing, or generating.
So what's the solution? Since these are take-home assessments, the only way I can think of to get students more involved is to make the projects a lot harder. With all the boilerplate code now handled by ChatGPT and co, they will have to focus on the core of the project, the parts that can't easily be done by LLMs: projects requiring multiple source files, or longer OS-specific code with more constraints.
Grades are not there to evaluate the assessment per se: they are there to monitor learning and progress. When assessments are completed without any learning behind them, aren't the grades useless?
What do you think? Am I looking at this the wrong way?