We do this quick screen after a 30-min behavioral interview to make sure that candidates can generally operate at the skill level they claim on their resumes.
In the past, we've been shocked by the number of people who talk a big game but turn out to have really rudimentary programming skills when the rubber meets the road.
The questions are:
1. FizzBuzz
2. Generate the first 20 rows of Pascal's Triangle
3. Drop all non-prime integers from a pre-defined set of the integers 2 through N (sketches of all three are below)
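For reference, straightforward answers look something like this (a quick sketch of my own, in Rust; the names and structure here are mine, not what any candidate submitted):

    // Hypothetical reference sketches for the three screening questions.

    // 1. Classic FizzBuzz over 1..=100.
    fn fizzbuzz() {
        for i in 1..=100 {
            match (i % 3, i % 5) {
                (0, 0) => println!("FizzBuzz"),
                (0, _) => println!("Fizz"),
                (_, 0) => println!("Buzz"),
                _ => println!("{i}"),
            }
        }
    }

    // 2. First `n` rows of Pascal's Triangle.
    fn pascals_triangle(n: usize) -> Vec<Vec<u64>> {
        let mut rows: Vec<Vec<u64>> = Vec::with_capacity(n);
        for r in 0..n {
            let mut row = vec![1u64; r + 1];
            for c in 1..r {
                // Each interior entry is the sum of the two entries above it.
                row[c] = rows[r - 1][c - 1] + rows[r - 1][c];
            }
            rows.push(row);
        }
        rows
    }

    // 3. Keep only the primes in 2..=n (Sieve of Eratosthenes).
    fn primes_up_to(n: usize) -> Vec<usize> {
        let mut is_prime = vec![true; n + 1];
        for p in 2..=n {
            if is_prime[p] {
                // Cross off every multiple of p, starting at p*p.
                let mut m = p * p;
                while m <= n {
                    is_prime[m] = false;
                    m += p;
                }
            }
        }
        (2..=n).filter(|&i| is_prime[i]).collect()
    }

    fn main() {
        fizzbuzz();
        for row in pascals_triangle(20) {
            println!("{row:?}");
        }
        println!("{:?}", primes_up_to(50));
    }

Nothing fancy; the point is just that a working engineer should be able to produce something in this ballpark in a few minutes.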
We didn't really suspect the first candidate until the second candidate provided nearly letter-for-letter the same answers (same variable names, function names, etc.).
After the interviews, we popped our question deck into ChatGPT and Claude, and both output exactly what these two candidates had provided.
Last week, a third candidate sent us clearly ChatGPT'd code as an example of some of his work.
I'm unsure what to do here, so I come to you, HN, to ask: what have you done to guard against the use of LLMs in remote technical interviews? Thanks!
Bonus: The nail in the coffin was when the second candidate immediately clocked the last question as leveraging the Sieve of Eratosthenes. Previously, he'd shown us a pretty impressive portfolio. When asked how he knew the Sieve of Eratosthenes off the top of his head, he claimed he had used it in one of his commercial portfolio projects but couldn't explain how.
So, bear in mind that I'm not really a fan myself when I ask this... but why do you care if a candidate uses an LLM or Google as part of your interview? Do you care if they use an IDE or a code-completion plugin? In the end, don't you really want to evaluate whether the candidate can produce good, clean code?
If you feel an LLM is too big a crutch, is that because you wanted to test memorization of a framework, or thought processes and workflow strategies?
To quote a resource I'm also not keen on but understand why it exists: does your concern about ChatGPT during interviews actually point to an XY problem?
It seems vanishingly unlikely that this type of question can provide any signal anymore outside an in-person interview. The incentives for candidates are just too strong and the tools are too good.
What we do is ask the candidate to keep their hands visible to the camera for the interview. But some setups are voice-only, and that won't work in those cases.
Probably the best approach would be to generate the ChatGPT answer beforehand and compare it against what the candidate says?
All you have to do is take a hard Leetcode puzzle and ask the candidate to complete it in Rust.
ChatGPT will struggle to help the candidate, as it generates garbage.
After they've completed it, use the second technical interview to question the candidate step by step about how they came up with the solution, to see whether they really understand both the language and the algorithm used to solve the puzzle.
This rigorously filters out 95% of frauds and impostors whilst targeting the best and brightest (really).
Job done.