Specifically, we are changing hiring across three dimensions:
> Tasks: real-world tasks on code repositories vs. standard algorithmic-style puzzles
> Evaluation: AI fluency and orchestration skills vs. functional correctness
> Candidate experience: an agentic IDE vs. a simple code editor
In the “old world,” you could ask multiple questions and triangulate skill from the answers. Now evaluation seems to depend heavily on tools and models that change month to month.
So I’m curious:
> What signals actually correlate with strong engineers today?
> How do you design interviews that don’t become obsolete with the next model release?
> Are algorithmic interviews still useful at all?
Would love to hear from people who have recently changed their hiring process or have been interviewed using this new approach.
(1) understanding software engineering (for one thing, knowing whether the answers make sense)
(2) subject matter expertise and the ability to communicate with SMEs; you can't fake being an SME just by reading books (see the old "knowledge engineer" construct from the 1980s).
(3) knowing specifics about AI coding.
I think (1) and (2) are 80-90% of what leads to success in the long term. My guess is the models are going to get better so (3) skills have a short half life and will matter less, but (1) and (2) will stay the same.
Maybe I'm cynical, but if I were designing screeners for this thing I would ask people things like:
"How many accounts do you follow on X about AI?" where the right answer is "I don't have an X account", and the higher the count, the worse it is.
"What percent of your programming time do you spend thinking about AI programming tools?" where anything above 20% is suspect (unless it's a tooling job or something similar, in which case I'd drop the question).
That is, I want to see that somebody used AI tools to deliver something 100% done, end-to-end, that worked, and I'd like to see them spending 80% of their time doing exactly that.
I'd also be thinking about screeners designed to detect FOMO attitudes and reject people for it.
A good developer will give good prompts, because they know what to ask and what the problem might be. A good developer can read the code, point out badly generated parts, and teach the AI how to perform better and which style to follow. A good developer can evaluate whether the chosen algorithm is appropriate for the task and offer suggestions if needed. A good developer can optimize token usage, for example by using scripts.
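To make the last point concrete, "optimizing token usage with scripts" can be as simple as pre-compacting source files before pasting them into a prompt. Here's a minimal sketch; the chars/4 token estimate is a crude heuristic and the function names are illustrative, not any particular tool's API:

```python
# Sketch: shrink prompt context by dropping blank lines and full-line
# comments, then estimate the savings with a rough chars/4 token heuristic.
# (Real tokenizers differ; this is just a ballpark figure.)

def estimate_tokens(text: str) -> int:
    """Very rough token estimate: roughly 4 characters per token."""
    return max(1, len(text) // 4)

def compact_source(source: str) -> str:
    """Drop blank lines and full-line comments to shrink prompt context."""
    kept = []
    for line in source.splitlines():
        stripped = line.strip()
        if not stripped or stripped.startswith("#"):
            continue  # skip blank lines and comment-only lines
        kept.append(line)
    return "\n".join(kept)

if __name__ == "__main__":
    src = "# header comment\n\nx = 1  # inline comments survive\n\ny = x + 1\n"
    compacted = compact_source(src)
    print(compacted)
    print(estimate_tokens(src), "->", estimate_tokens(compacted))
```

A script like this, run over a whole repo before building a prompt, is the kind of small habit that separates developers who manage the model's context deliberately from those who paste everything in.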
Yes, prompting skills and knowing about new tools and how to use them are also mandatory, but not the most important thing, in my opinion.
I guess better soft/social skills are needed too. Some people just can't express themselves in the real world, and they will probably have difficulty expressing themselves in free text as well.