Recently, I've been interviewing and consistently encountering LeetCode-style technical assessments. While I understand the need to evaluate technical skills, I find this approach problematic for several reasons:
1. These exercises rarely reflect the actual work. In my experience, real engineering challenges involve system architecture, ML pipeline design, and performance optimization – not inverting binary trees.
2. In my leadership roles across research labs and industry, I've observed that engineers who excel at LeetCode-style problems don't necessarily perform well when faced with complex, real-world challenges that require thinking about system-level implications.
3. These interviews leave little room to discuss what I believe are more relevant topics: ML architecture decisions, HPC optimization strategies, deployment considerations, and how my experience building production systems aligns with the company's technical challenges.
4. I find these exercises painfully boring compared to my day job, where I work on fascinating problems in computer vision, deep learning, and high-performance computing. Spending hours on contrived puzzles feels pointless when I could be demonstrating my ability to solve real engineering challenges through system design discussions or code reviews of actual production systems.
I'm curious about the HN community's thoughts:
- For those in hiring positions: How do you evaluate senior/staff level candidates with extensive research and development experience?
- For other experienced engineers: How do you handle these situations? Have you found companies that take different approaches?
- Has anyone successfully implemented alternative evaluation methods that better assess real-world engineering capabilities?
My goal isn't to dismiss algorithmic knowledge – I've spent years optimizing complex systems and developing novel algorithms. Rather, I question whether LeetCode exercises are the most effective way to evaluate experienced engineers who have demonstrated their abilities through years of shipped code, peer-reviewed research, and complex production systems.
My assumption is that these tests are a low-effort way to weed out a large portion of applicants.
I don't know what the point of these quizzes is now. Actually, I never knew what the point was. At least they're amusing now.
My team's interview process has a couple of algorithm questions, but they're fairly basic (to ensure you can still code); the main portion is about API design and system architecture. It's not very objective, but I like it since it's directly relevant to our work developing middleware systems and navigating competing interests in the process.
I also have 20+ years of experience in software development.
My skills and experience are not available to any company using LeetCode-style interviews.