How do teachers reply to such questions?
Are teachers starting to change what they teach?
ChatGPT will often make up an answer which is absolutely incorrect. It will even insist that it’s correct.
It appears that the only responsible way to use ChatGPT is for questions where you already know the answer or can quickly validate the correctness somehow.
People have been having this same discussion about math for many years. A copy of Mathematica can answer just about every question on every math test you will ever take. So what? We didn't stop having to learn math in order to understand anything that's based on it. Mindlessly quoting answers from ChatGPT will not teach you new ways of thinking.
As for the knowledge itself, the key thing to understand is that knowledge inside your brain is more useful in more circumstances than knowledge outside your brain. ChatGPT won't notice connections between unrelated subjects for you. It won't notify you in advance when a question ought to be asked.
I've largely given up on trying to "catch cheaters." At this point, either a student wants to really learn the information or they don't.
What are you gonna do? Cheat your way to a programming degree and then... bomb at some programming job because you don't know how to actually code or think?
I try to set students up for success, with clear and obvious requirements (e.g. test suites for project grading, like the sketch below), and then appeal to their sense of reciprocity: I'm trying to be cool to you, please be cool with me and don't cheat. If you feel like you need to cheat to pass, come talk to me so we can address the material instead.
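To make that concrete, here's a minimal sketch of what a published grading test suite could look like, assuming a Python course graded with pytest. The assignment, the word_count function, and solution.py are all hypothetical names for illustration, not anything from the comment above:

    # grading_tests.py -- hypothetical suite handed out with the assignment,
    # so students can run it themselves before submitting.
    # Assumes the assignment asks for word_count(text) in a student-written
    # solution.py; run with `pytest grading_tests.py`.

    from solution import word_count

    def test_empty_string():
        # No input, no words.
        assert word_count("") == 0

    def test_single_word():
        assert word_count("hello") == 1

    def test_multiple_spaces():
        # Runs of whitespace shouldn't produce empty "words".
        assert word_count("foo   bar\tbaz") == 3

    def test_leading_and_trailing_whitespace():
        assert word_count("  hello world  ") == 2

Handing something like this out with the assignment makes the requirements unambiguous: if the suite passes, the student knows exactly how the project will be graded.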
Regurgitating was never a great way to measure understanding anyway. In an ideal world, I'd have the time to sit down with each student for a 10-minute talk and get a sense of how well they understand the material, like the old English system.
In the long term, assuming AI takes over a significant share of knowledge work, curriculums will need to adapt, just as they adapted to computers and calculators.
Critical, evidence-based thinking, the ability to properly apply understanding to solve a problem and to validate or disprove a proposed solution, is something a language model struggles with, since it can't apply a reasoning process to the facts, circumstances, and context at hand.
GPT output can be very believable and confident, but I would expect a good student to be able to question and interrogate a position and establish whether it is accurate or not, based on an evaluation of the available evidence.
Education moves slowly as a sector, and I doubt many have changed anything in the last month or two, but the pandemic and the shift to remote or "open book" exams have (for good courses and educators) resulted in more of a focus on explanations, justification, application of knowledge to new areas, and critical evaluation. If you could look it up in a book and repeat it in an open-book exam, chances are a language model poses the same threat.
A simple answer would be to have students take handwritten exams in class, with no electronics allowed. This is not always possible (e.g. online classes). Homework has always been subject to a number of methods that relieve the student of the burden of actually learning the material (e.g. parents helping, copying from other students, searching online, etc.).
Online classes are a tougher problem to solve. Maybe the answer for that is requiring that they be paired with proctored exams.
Also, ChatGPT in particular is a terrible "source of truth". It gives plausible sounding answers, but they may not be factually correct.
I just asked ChatGPT to write a press release for a new scientific discovery, one that could not actually happen. It happily did so, making up a name for a new element along with a fake quote from a made-up researcher at a made-up National Laboratory. You could do the same for historical events, current institutions, anything.
It dismays me to see such poor takes everywhere about how education will falter, and to see people asking questions like this; it's almost comical.
Perhaps people should start to learn that education is about learning how to learn and applying that to new concepts.
As most (all?) researchers will tell you, asking good questions is often the hard part.
A good curriculum, sadly rarely found, will teach you how to think and how to ask good questions. ChatGPT doesn't really impact this, imho.
Because understanding and looking it up are not the same. We've been able to look up things since we've had libraries and experts.
On one hand, you may end up doing something really stupid or dangerous just because you don't know what the answer means or what its implications are, and you blindly do what the statistical model tells you.
On the other hand, learning gives you "HD vision" for the world.
Anything new you learn lets you see much more detail in the things around you. And you don't really know what you're missing; you can't know, because until you learn it, you're blind.
Moreover, while ChatGPT is really good, it can make mistakes, and if the mistake isn't an obvious one, you need a good general understanding of the subject you're asking about to spot it.
We should also consider that ChatGPT won't be free forever, so it might be a bad idea to change teaching in a way that assumes these kinds of tools will always be accessible to students, or available wherever they end up working.
I don’t really understand the uptick of “Rate how much the idea of a confident-sounding chat bot has shaken your world view to the core on a scale from 1-10” posts. It’s a language model, not a New Internet.
In most cases, homework and basic questions exist to strengthen your knowledge. If you let the AI do this for you, are you able to have a conversation about said topic?
Changing what teachers teach will not solve the problem. Students are going to use this bot now; the cat is out of the bag.
Seems like the wrong question.