Any real-world experiences from students or teachers? Is there any research on the impact of AI on learning outcomes?
So I start with basic questions in ChatGPT. I know it’s lying to me, but it uses words and phrases that seem interesting. I can iterate on that, and in a short time I have a phrase I can actually search for in Google to find authoritative content.
I wouldn’t quite call it a game changer yet, as all it’s done is give me back what I once had. But it is a bonus in that it can do an OK job synthesizing new examples from content that already had lots of disparate examples on the web. It can also give some clues, when you find conflicting information, as to why the information is conflicting. It’s making stuff up a lot, but there are always good clues in the output.
Day to day I write Lean, and I use moogle.ai to find theorems. It's... fine as a first pass. The website constantly gets confused about similar-looking theorems, it can't pattern match, and it can't really introspect typeclasses (which can be hiding the theorems I want). However, it can usually help me go from a vague description of what I want to some relevant doc pages, so credit where it's due for that.
- I regularly read technical texts, some related to math, some related to programming/software engineering
- I ask GPT-4 (now Claude 3.5 Sonnet) to quiz me on the nuances of topics X, Y, Z
My prompt template looks like so:
"I'd like to test my understanding of {topic}. I'd like you to quiz me one question at a time. Ask nuanced questions. Answers need to be multiple choice that I can pick from. Avoid a congratulatory tone in responses. If I pick the right choice for a question, move on to the next one without asking. Provide detailed explanations in case I answer something incorrectly."
I've found this surprisingly effective at pointing out gaps in my understanding. YMMV.
Sometimes I made mistakes, sometimes GPT4 made mistakes.
Once GPT4 wouldn't agree with something the textbook said. I said "it's in the textbook", it said "then the textbook must be wrong", "no, you are wrong", "I'm sorry, but that statement is not generally true", "here's the proof"--only after I gave GPT4 the proof did it finally accept that the textbook was correct. It was also able to detect subtle mistakes in the proof, and could not be persuaded by a faulty proof. [0]
I think the biggest help was just participating in conversations about math, anytime I needed. It made me more engaged, more focused on what the textbook was saying and whether or not the textbook was matching what GPT4 and I had discussed.
You know the saying, "the easiest way to get an answer online is to post the wrong answer and start an argument", something like that. Well, that's similar to what GPT4 was doing for me, it would have an engaged discussion, maybe an argument, maybe leave me wondering about something, and that was very motivating when reading the textbook.
The textbook still played a central role in my learning. (GPT4 did catch a mistake in the textbook once though.)
[0] Here's a previous comment of mine about learning linear algebra from GPT4: https://news.ycombinator.com/item?id=36244561
However, I usually only use it to ask dumb, simple questions. When it comes to anything more complex or obscure, it often falls flat on its face and either hallucinates or misunderstands the question. Then I'll do an old-fashioned web search and find a clear-cut answer on Stack Overflow.
My experience has been AI is very unreliable right now and you simply can't trust what it tells you. So I only use it in very limited ways.
Also, lately I've been taking photos of stuff like coffee grinders and making it guess what it is. It's surprisingly accurate, and you can use it to explore the thought process behind why someone might pick a particular set.
Then Google got worse and I started to resent having to refine my query multiple times and sorting through junk results.
Now I ask ChatGPT and get a straightforward answer. I am aware that it's sometimes wrong, but as an average of many shallow introductions, it's excellent.
I also spend too long clarifying what I mean.
For example, I wanted a Rust program to detach into the background, and ChatGPT (with my stupid prompting) kept suggesting I just run `std::process::Command::new("program")`, but I wanted a single executable to detach! Eventually, once I struck the right chord, it suggested the `daemonize` crate. But that was only after I'd already found it by conventional search.
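For anyone hitting the same wall: a minimal sketch of what using the `daemonize` crate looks like. The crate must be added as a dependency, and the paths below are illustrative, not anything from my actual program.

```rust
// Cargo.toml dependency (version is illustrative): daemonize = "0.5"
use daemonize::Daemonize;

fn main() {
    // Fork the current executable into the background,
    // rather than spawning a separate process with Command::new.
    let daemon = Daemonize::new()
        .pid_file("/tmp/myapp.pid")   // illustrative path
        .working_directory("/tmp");   // illustrative path

    match daemon.start() {
        Ok(()) => {
            // We are now detached from the terminal; do the real work here.
        }
        Err(e) => eprintln!("failed to daemonize: {e}"),
    }
}
```

Note this is Unix-only; the crate works by double-forking under the hood, which is exactly the "single executable detaches itself" behavior I was trying to describe to ChatGPT.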
I sometimes use the Kagi !fgpt bang if I know that what I'm searching for has a good average answer. It'll give that answer and skip the blinking ads, cookie pop-ups, and newsletter pop-ups, effectively scrolling on my behalf.
I'm looking forward to having an offline AI assistant that'll search and accumulate, rather than hallucinate answers from a bunch of stolen code snippets, akin to "copy-pasting from StackOverflow, but with hallucinations."
Then, after the answer, I ask follow-up questions. I also try to check the answers against other sources, e.g. docs or Wikipedia in order to spot hallucinations.
Having been a student both before and after ChatGPT, I can say the workload of students has remained the same, but some of it can obviously be done with AI, and most students do this, or have at least tried it.
I believe that, as a result, the efficacy boost has been offset by the lower amount of time people spend studying, just as with most study technologies through history. For every student who uses AI to learn, there are two or three who prefer to use it to cheat. But it works brilliantly for every type of student for basic explanations, rundowns of historical authors or positions, etc. Still, this is pretty much just Wikipedia content rearranged to your learning level. It's helpful, but not augmenting.
As a side project I am currently building a drone myself on a really tight budget. While I am pretty good on the coding side, my understanding of electronics is basically non-existent. So when I ask basic questions it's quite helpful; as soon as I give it specifications ("Will this brushless motor and this ESC work with a 4S LiPo?") it breaks down completely. So it's helpful, but far from perfect.
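The voltage side of that compatibility question is mostly spec-sheet arithmetic, which can be sketched in a few lines. All of the ratings below are made-up example values, not real parts from my build:

```rust
// Rough motor/ESC/battery voltage-compatibility check.
// All ratings here are hypothetical spec-sheet values for illustration.
fn main() {
    let cells: u32 = 4;                  // a "4S" LiPo pack
    let v_nominal = cells as f64 * 3.7;  // 3.7 V nominal per cell -> 14.8 V
    let v_full = cells as f64 * 4.2;     // 4.2 V per cell fully charged -> 16.8 V

    let esc_cell_range = (2, 6);         // e.g. an ESC rated "2-6S"
    let motor_cell_range = (3, 4);       // e.g. a motor rated "3-4S"

    let esc_ok = cells >= esc_cell_range.0 && cells <= esc_cell_range.1;
    let motor_ok = cells >= motor_cell_range.0 && cells <= motor_cell_range.1;

    println!("pack: {v_nominal:.1} V nominal, {v_full:.1} V fully charged");
    println!("ESC compatible: {esc_ok}, motor compatible: {motor_ok}");
}
```

Of course, real compatibility also depends on current ratings (the ESC's amp limit versus what the motor draws at full throttle with a given prop), which is exactly the kind of specification question where the LLM fell apart for me.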
So ironically, its flaws made it a pretty good teacher in this case.
I do have to be especially careful not to ask it leading questions, because it's so biased towards positive affirmation that it would rather lie and say I'm right than explain why I'm mistaken.
That said, it is actively harmful when discussing components of Chinese characters - it hallucinates so much it's essentially unusable. I stick to traditional resources for that. I'm also reading as many scientific papers as before, there's really no substitute for that yet, and I haven't found it very good at literature searches.
I find tutorials often have some kind of weird additional thing in them I don’t care about. Like they’re making a list app but can’t help overcomplicating it with other stuff, like adding in images or videos or parsing XML, when I just want to learn something specific.
ChatGPT has been awesome for this. You can get simple examples and look through them. Ask questions about the code. Try it out. Change things. If it doesn’t work how you expect, paste it back in with a question.
It’s made learning a new language so much easier and probably 5x faster. I’ve started doing the same kind of thing with learning Spanish too.
All that said I'm very excited for the future and look forward to these problems being solved as I believe they eventually will.
I've even been working on a tiny open source wrapper around the OpenAI API specifically to speed this process up, based on what I've learned works from experience: https://github.com/hiAndrewQuinn/wordmeat
I know that the LLM provided specifics are almost never good enough for a final answer. However, LLMs can get me to think out of my own personal box.
Basically, this has replaced what I once got out of being surrounded by human peers. However, I was reluctant to bother humans, and I have no such reservations about asking a chatbot a dumb question.
One of my research interests is how humans use expert systems (akin to how Go players’ Elo ratings ramped up significantly after the release of AlphaGo).
Now, I do my learning/research with something like Phind or Perplexity. I have a shortcut, "!pl" or "!p", set up in my address bar. I just type "!pl food recommendation for keto diet", for example, into the address bar and it summarizes everything for me. After that, all I have to do is read/click a few links deeper into Reddit or whatever to verify that what the LLM tells me matches its citations; then I can gauge whether the answers are credible. I just ask some follow-ups if I have any. The results have been quite satisfactory so far.
Just as ChatGPT became more available, these subreddits decided to make their posting policies unnecessarily strict. The "answering culture" has also become more hostile, with people downvoting questions that don't fit into the monoculture of Reddit's hivemind.
And so I have found ChatGPT to be very useful for asking about philosophical and historical questions, specifically asking for resources on a particular topic/problem. E.g., "Has any philosopher written about XYZ topic?" It will sometimes give me imaginary resources, but usually it'll recommend actual books written on the subject.
Except for their inconsistent answers, that is. It makes every conversation feel like chatting with someone you have trust issues with.
In the end, I have to Google again to validate this creature's output. Geez.