Education has always been full of busywork anyway. Homework is for people who don't understand the material and need more practice, imo; it should be optional and worth 0-5% of the grade.
Have midterms, finals, and projects count for more; the tests would presumably be monitored by proctors anyway, right?
I went to a mediocre college because in high school I would get Fs on homework while scoring 95-100 on midterms and finals, getting 5s on all my AP tests, and coming close to a perfect SAT. That's artificial gatekeeping: a GPA built on busywork means that even if I understood the material and aced the tests, the best I could get was something like a B.
And I did "badly" in college too, failing homework while scoring 95-100 on exams. If you're going to impose busywork on me, I'm not doing it if I don't feel it's necessary for me, and I shouldn't be punished for that. Neither should these students, if they're genuine about learning. I'm there to learn by choice.
Education needs to evolve. It keeps some people down for no reason.
Other than that: students who don't understand the material outperforming those who do may seem unfair, but in the end your course probably exists to teach the programming fundamentals that other courses expect as prerequisites, and the exercises serve to prepare students for the harder problems they'll encounter in those courses.
If a student decides they don't need that practice and ends up stumped later when GPT can't help them anymore, that's their own bad decision to deal with. (On the other hand, if that never happens maybe the Python course wasn't all that necessary for them to begin with.)
So I recommend optimizing mostly for the students who do want to learn something; put enough effort into anti-cheating measures to keep honest people honest, but don't worry too much about cheaters beyond that.
For OP, we've also been using Zoom-based proctoring options for students/courses that can't use the CBTF for some reason. I'd be happy to follow up with you if you want more details about how we make this work at scale.
If the solution isn't short (more than ~200 lines), ChatGPT and Copilot can help but can't solve the problem completely. The more complex the project, the harder it is to generate out of whole cloth with an AI, and even someone using an AI to help will still need to understand the concepts well enough to get good results.
Another good option would be to ask for pseudocode, drawings, or flowcharts instead of Python. I don't have access to check whether Copilot can produce pseudocode, but I don't think the mechanical task of literally writing code is the best way to test this sort of thing anymore; or perhaps it's just a bad target/metric.
Switching the course to some obscure graphical programming language might be another good option as well.
Have them commit in a way that shows their understanding at each point as they incrementally improve or solve the problem.
Have them split into groups and submit MRs to each other.
Give them the ability to tag each other.
Have them upload their MRs to a system with CI and check that they pass tests, lint, style, etc.
If a student's commits are broadly similar to the rest of the class, they likely followed the same process; if their commits look very different, check their understanding and/or for cheating.
Obviously, this only makes sense if your students are CS majors. If they are not, you can always teach a light version of this in the spirit of HtDP or https://dcic-world.org.
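To make the commit-comparison idea concrete, here's a rough sketch of how you might flag histories that diverge sharply from the class norm. Everything in it is hypothetical: it assumes each submission has already been reduced to a list of per-commit diff sizes (e.g. lines touched, extracted with something like `git log --numstat`), and the similarity metric and threshold are made up for illustration, not taken from any existing tool.

```python
import statistics

def similarity(a, b):
    """Crude similarity between two commit-size sequences, in [0, 1].

    Pads the shorter sequence with zeros, then compares the sequences
    position by position; identical sequences score 1.0.
    """
    n = max(len(a), len(b))
    if n == 0:
        return 1.0
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    total = sum(1 - abs(x - y) / max(x, y, 1) for x, y in zip(a, b))
    return total / n

def class_profile(profiles):
    """Build a 'typical' history from per-position medians.

    Medians rather than means, so one student who dumped 300 lines in a
    single commit doesn't skew the baseline for everyone else.
    """
    mid = int(statistics.median(len(p) for p in profiles.values()))
    return [statistics.median(p[i] for p in profiles.values() if i < len(p))
            for i in range(mid)]

def flag_outliers(profiles, threshold=0.5):
    """Return student IDs whose history looks unlike the class median.

    `profiles` maps student ID -> list of per-commit diff sizes.
    """
    typical = class_profile(profiles)
    return [sid for sid, p in profiles.items()
            if similarity(p, typical) < threshold]

# A student with many small incremental commits resembles the class;
# a single giant "it works now" commit stands out for a manual look.
flagged = flag_outliers({
    "alice": [10, 20, 15],
    "bob":   [12, 18, 14],
    "eve":   [300],
})
```

This only surfaces candidates for a conversation about their process; a different commit shape by itself obviously isn't proof of anything.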
Just raise the bar. Give them a lot of code and ask them to hack a feature into it. Chain together multiple difficult programming questions to the point that they have to use GPT to get the solution.