HACKER Q&A
📣 mhh__

What would the Dunning-Kruger “test” be for your field?


What question (or problem) separates the wheat from the chaff?


  👤 jakevoytko Accepted Answer ✓
In software engineering, I think "unconditional faith in a single technique or technology" is a good sign of overconfidence in one's knowledge or ability. This certainly applied to me when I was younger. Engineers who go all-in on a specific technique will have really inconsistent results: some projects will go amazing because the technique was the best option. Some projects won't go so great because the technique wasn't the best solution.

A good sign of maturity is when they can notice "hey, that didn't go great" and use it to inform their future strategy: "technique X applies when these conditions are true. If one of them is false, I'm not sure what to do, but I should conduct more research (or talk to more people) before starting that project."


👤 kristianc
As a marketer, asking someone to develop a Go To Market strategy definitely separates the wheat from the chaff.

You see who really thinks of marketing in terms of competitive positioning, differentiation, and how to create different messages and content for different parts of the funnel (and how to measure it all), and who just throws a random bunch of tactics at it.


👤 kody
I'll let you know when I no longer feel like the chaff!

It'd probably be similar to beefield's. Present a problem with a clear success condition and see how resourceful they are in solving it. A failure would be solving the wrong problem, making excuses (or "the problem is not solvable"), confidently talking in abstractions that don't concretely pertain to the problem, or failure to ask a question/say "I don't know".

I don't know that there's a technical question or problem I could ask to make a call on whether or not they know their stuff.


👤 jaymicha
Tell me about an opinion you hold on some piece of technology (framework, language, design pattern, etc.), one you have a fair amount of conviction about and hold pretty strongly. Now tell me how you arrived at that opinion and conviction.

👤 MereInterest
In physics, especially anything remotely related to quantum physics, whether somebody focuses on measurable quantities or not. If somebody has a new measurement technique, something that gives better precision than before, great. If somebody has a new theoretical model that applies to cases that previous models didn't, great. If somebody has a new theoretical model that gives different results than current models, that is great because new experiments can determine between them.

If somebody has a new model that gives the exact same predictions as a current model, lauds it as being obviously the truth, and dismisses all other models as falsehood, then it can be dismissed. This applies to all various interpretations of quantum physics, since they all yield the same measurable predictions.


👤 beefield
Don't know about questions, but there are a couple of answers I have learned to treat as serious red flags:

1. "Yes, I understand." (with no further elaboration or clarifying questions)

2. "It's okay/good/going well." (again, with no further elaboration when asked how something is going, how it's set up, etc.)

First one typically translates to something between "I have a completely unfounded illusion of understanding" and "I have no clue but don't dare to say it"

Second one translates typically to "I have no clue whatsoever"


👤 yesenadam
I used to like asking other jazz musicians I'd just met, and hadn't yet heard play, "Are you good?" (sometimes asked with a smile). People who actually are good never say 'yes' or talk like they think they are. It's interesting hearing what people say, anyway.

Otherwise, asking people who their favourite players are is a quick test, when suspicious. I've come across a few "jazz musicians" who can't even name a single jazz musician. Also, in chess, asking who someone's favourite players are is a quick test of whether they're any good.


👤 codingdave
In my software dev experience, it is clear from how they think about blog posts or other articles online.

The people I consider to be great at this will read a variety of content on a topic, think about it, synthesize it with their own experience, go try new things, and decide for themselves what is correct for their own project.

The less skilled will say, "See, this blog post said to do it this way, so that is what we shall do."


👤 mlady
In grappling (Brazilian jiu-jitsu), one sparring round with a person is enough to get a sense of what they know and a general idea of their skill/belt level. The exact belt is determined by that person's coach, who measures their skill against their personal potential.

An analogy to software would be to pair program with a person. You both get a feel for each other's strengths and weaknesses while working on a particular problem. I feel that coding interviews try to replicate this, but fall short given time constraints.


👤 Tainnor
A basic maths competence test would consist in trying to find out what the person thinks mathematics actually is. Inexperienced people would mostly think of computations, such as evaluating integrals or computing eigenvalues. People who understand maths know that it chiefly deals with definitions, theorems, proofs, conjectures and problem solving (that last one is sometimes forgotten even by very theoretical mathematicians, but it's key to some areas such as graph theory, combinatorics, etc.).

👤 professionalguy
In data engineering, I think your response to the 'entity resolution' problem is a good Dunning-Kruger-style litmus test.

If you don't know, entity resolution is the process of matching rows across two or more databases that refer to the same real-world entity. Is this the same movie? Are these the same person?

Novice DE: Oh easy, just merge on the name.

Intermediate DE: OH GOD NO.

Expert DE: That's complicated, but I have a plan.
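To see why the novice's "just merge on the name" answer falls apart, here is a minimal sketch (assumptions: hypothetical `normalize`/`name_similarity` helpers using only the Python standard library; real entity-resolution pipelines add blocking, multiple features, and human review, which is the expert's "plan"):

```python
from difflib import SequenceMatcher

def normalize(name):
    """Crude normalization: lowercase and drop punctuation."""
    return " ".join("".join(ch for ch in name.lower()
                            if ch.isalnum() or ch.isspace()).split())

def name_similarity(a, b):
    """Similarity of two normalized names, in [0, 1]."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

# These three rows refer to the same film, but an exact join on the
# raw strings matches none of them -- the novice's merge silently fails.
variants = ["Star Wars: Episode IV - A New Hope",
            "Star Wars",
            "STAR WARS (1977)"]
print(name_similarity("Star Wars", "STAR WARS (1977)"))  # fairly high

# Meanwhile fuzzy matching creates false positives on genuinely
# different entities, so a similarity threshold alone is not a plan either.
print(name_similarity("Alien", "Aliens"))  # also high, but wrong to merge
```

The point of the litmus test is exactly this tension: exact matching under-merges, fuzzy matching over-merges, and the expert knows both failure modes before writing a line of code.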


👤 cjslep
When people plain-language declare things as "safe" or "unsafe" it's clear they don't have a background in technical risk assessment.

When people call 5G "safe" or "unsafe", it's clear they don't know about radiation effects on human health. The correct answer is "we don't have data".

Edit: The fact that people would rather make fun of me and imagine me as a "non-ionizing radiation tin foil hatter" (I am not one; non-ionizing radiation has no negative long-term health effects) is, I hope, a demonstration of the power of one's own biases, and of forcing them onto a stranger to fit one's own convenient worldview.


👤 jawns
Dunning-Kruger separating junior, mid-, and senior-level devs:

JUNIOR DEV: My code is simple and easy to understand.

MID-LEVEL DEV: My code is subtle, clever, innovative, expressive, hyper-optimized, and ingenious.

SENIOR DEV: My code is simple and easy to understand.

--

Source: John Arundel, https://twitter.com/bitfield/status/1219174978748370945

But take that for what it's worth: https://twitter.com/bitfield/status/1184741088067833856


👤 tpurves
Fintech: Is the big idea they're pitching a way for friends to split a dinner bill?

👤 ben509
Sigh. DK, as commonly cited, is not simply "competence."

It hypothesizes a meta-cognitive bias whereby a person who is incompetent is cognitively incapable of determining their level of competence.

Everyone has biases, and the reason citing DK may come across the wrong way is it tends to suggest that it's just Other People who have those biases.

From this piece[1] arguing that we routinely misinterpret DK:

> I suspect we find this sort of explanation compelling because it appeals to our implicit just-world theories: we’d like to believe that people who obnoxiously proclaim their excellence at X, Y, and Z must really not be so very good at X, Y, and Z at all, and must be (over)compensating for some actual deficiency.

But, generally, if the question of competence comes up, don't reflexively cite DK. It's been around for ages, it's not novel, and it's usually being miscited.

[1]: https://www.talyarkoni.org/blog/2010/07/07/what-the-dunning-...


👤 paulcole
"Do you think you can write your own blog posts?"

Everyone thinks they can, but almost nobody actually will. I'm not a particularly good writer, but I can do what most people can't or won't: actually follow through on writing something month after month after month.


👤 wadkar
A deep neural network paper that doesn't provide a reproducible dataset to verify the results, e.g. the AlphaZero paper. I would rather study Leela Chess than believe AlphaZero. Sure, the former is "inspired" by the latter, but at least I can verify it!

👤 mhh__
As a physics student, my test for myself is always to surprise myself by deriving a given equation from scratch.

My "long term" goal in terms of scientific and mathematical maturity is to git gud at calculating path integrals in quantum mechanics.


👤 PascLeRasc
One for hardware engineering might be not considering the PCB itself. At higher voltages your double-sided FR-4 becomes a capacitor, and at 100 MHz+ speeds you'll have EMI problems from 90-degree traces. It's easy to think you can go from a perfboard-and-wires proof of concept to a PCB in a day, but once you get above ~12 V or 100 MHz it starts to feel like the board is haunted.

👤 throwawaypa123
Build vs. buy for tech platforms. "100% build" and "100% buy" are both dangerous answers from a senior tech person.

👤 johnwheeler
In software development, what are the most important qualities of an engineer?

Chaff: Understands computer science. Knows how to scale. Mentor of junior engineers.

Wheat: Understands people. Knows when to scale. Mentee of junior engineers.


👤 glitchc
When someone takes a position on a topic but is unable to answer questions on why that position was taken. Scientists and academics are often guilty of this when they pontificate on a topic outside their domain of expertise.

👤 downerending
In programming, when someone talks about lines of code they've written as an accomplishment (more is better). For those in the know, they're a liability. Solving the problem is the accomplishment.

More generally, the right answer to almost any question in software is: It depends. (That is, it depends on the usually large set of explicit and implicit, technical and non-technical requirements.)


👤 jerome-jh
"What can you tell me about the halting problem and its consequences".

👤 mcv
You're in a server room walking along the racks, when all of a sudden you look down. You see a server. You flip the server off, Leon. The server is off, and it can't turn back on. Not without your help. But you're not helping. Why is that, Leon?

👤 neltnerb
How do you avoid cross threading? Can you grab me the 50 mil allen wrench?

👤 xwdv
What is the best ORM?

👤 donpott
Related to the concept of a Dunning-Kruger "test": Shibboleths.

https://en.wikipedia.org/wiki/Shibboleth


👤 hoka-one-one
"Where have you seen the Dunning-Kruger effect in action in your work?"

If they start saying something negative about someone without self-reflection, they are exhibiting the Dunning-Kruger effect.


👤 daniel-levin
The nature of Dunning-Kruger implies that people who are not qualified to answer this question will try to answer it.

👤 austincheney
I have grown really frustrated with software hiring. Candidate selection is incredibly biased. I really get the impression that software is rife with Dunning-Kruger types who absolutely cannot write simple code to solve tiny problems, and that they selectively bias candidate selection to pick people, for selfish reasons, who complement their own position on a team.

Here is how I would screen developers for a general developer position. I would issue a 1 hour limit and ensure the candidate knows the test is graded by a computer only. The goal is can they read instructions and write simple code. I don't care how they write the code. I only care that they can.

Any question could be as complex or challenging in requirements as necessary, but the answer would always be a small function of few parts. The idea being to test reading comprehension, the ability to follow instructions, and write a simple function. An example of output format and data type would be explicitly stated with each question.

An example question:

A customer is spending cash to purchase a drink. Write a function that receives cash as the first argument and the cost of the drink as the second argument and outputs an object indicating the change in coins with preference to the largest denominations first.
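A candidate's answer would indeed be a small function of few parts. One possible sketch (the function name, the US coin denominations, and the dict output format are my assumptions, not part of the prompt):

```python
def make_change(cash, cost):
    """Return change owed as {denomination_in_cents: count},
    preferring the largest denominations first.
    Assumes US coins: quarter, dime, nickel, penny."""
    remaining = round((cash - cost) * 100)  # integer cents, dodging float error
    change = {}
    for coin in (25, 10, 5, 1):
        count, remaining = divmod(remaining, coin)
        if count:
            change[coin] = count
    return change

print(make_change(5.00, 3.61))  # {25: 5, 10: 1, 1: 4}
```

Note the `round((cash - cost) * 100)` step: working in floating-point dollars directly (e.g. `1.39 - 0.25` repeatedly) is a classic way to fail the hidden test scenarios, which is part of what such a question quietly measures.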

1. Specify the grading criteria of the test. 10 points for each question correctly answered within the given time period. There is no penalty for answering a question incorrectly. The cumulative total execution time of all answered questions will be multiplied by 100 if in milliseconds or by 100,000 if in nanoseconds and be deducted from the final score. The idea is to solve as many questions as possible, but in the event that there are a limited number of positions and tied scores slow code is the tie-breaker that disqualifies a candidate.

2. Stress that code style is irrelevant. The test will be graded by a computer compiling the code and executing it against several scenarios from an answer bank. A human will not review the answers submitted.

3. Tell candidates that once the 1-hour test period is exhausted, they will have an optional 15 extra minutes to review all previously attempted questions.

4. Have the candidate complete 3 practice questions before the test timer starts, to familiarize the candidate with the expectations of the test platform and the appropriateness of answers.

5. Randomly pull questions from a pool of 200+ questions, 1 at a time, to ensure a developer is focused on the question presented instead of selectively gaming the question list.

6. Ask the candidate to write a function in a designated space that addresses the problem question/statement.

7. Allow the candidate to test their answer and review the output before submitting it and moving to the next question.

8. Also allow the candidate to skip to the next random question, with a 2-minute time penalty.

That is for a developer. For an architect I would have them write an essay on a prompt provided by the business. Architects review business needs, distill requirements, and communicate goals to provide a platform that executes the business needs while minimizing complexity as much as possible. If they can do that in writing they can do it in software. The most important skill is their ability to communicate clearly against competing factors. Have 3 separate non-developers read and grade the essay.