A good sign of maturity is when they can notice "hey, that didn't go great" and use it to inform their future strategy: "technique X applies when these conditions are true. If one of them is false, I'm not sure what to do, but I should conduct more research (or talk to more people) before starting that project."
You see who really thinks of marketing in terms of competitive positioning, differentiation, and how to create different messages and content for different parts of the funnel (and how to measure it all), and who just throws a random bunch of tactics at it.
It'd probably be similar to beefield's. Present a problem with a clear success condition and see how resourceful they are in solving it. A failure would be solving the wrong problem, making excuses (or claiming "the problem is not solvable"), confidently talking in abstractions that don't concretely pertain to the problem, or failing to ask a question or say "I don't know".
I don't know that there's a technical question or problem I could ask to make a call on whether or not they know their stuff.
If somebody has a new model that gives exactly the same predictions as a current model, lauds it as obviously true, and dismisses all other models as falsehoods, then it can itself be dismissed. This applies to the various interpretations of quantum physics, since they all yield the same measurable predictions.
1. "Yes, I understand." (with no further elaboration or clarifying questions)
2. "It's okay/good/going well." (again with no further elaboration when asked how something is going or is set up)
The first typically translates to something between "I have a completely unfounded illusion of understanding" and "I have no clue but don't dare to say it".
The second typically translates to "I have no clue whatsoever".
Otherwise, asking people who their favourite musicians are is a quick test, when suspicious. I've come across a few "jazz musicians" who can't even name a single jazz musician. Similarly, in chess, asking who someone's favourite players are is a quick test of whether they're any good.
The people I consider to be great at this will read a variety of content on a topic, think about it, synthesize it with their own experience, go try new things, and decide for themselves what is correct for their own project.
The less skilled will say, "See, this blog post said to do it this way, so that is what we shall do."
An analogy to software would be to pair program with a person. You both get a feel for each other's strengths and weaknesses while working on a particular problem. I feel that coding interviews try to replicate this, but fall short given time constraints.
If you don't know, entity resolution is the process of matching rows that refer to the same real-world entity across two or more databases. Are these the same movie? Is this the same person?
Novice DE: Oh easy, just merge on the name.
Intermediate DE: OH GOD NO.
Expert DE: That's complicated, but I have a plan.
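To make the expert's "plan" concrete, here is a minimal sketch of one common first step: normalize the names, then fuzzy-match with a similarity threshold. Everything here (the function names, the 0.85 threshold, the token-sort normalization) is illustrative, not a production recipe; real entity resolution adds blocking, multiple fields, and human review.

```python
import difflib
import re

def normalize(name):
    """Lowercase, strip punctuation, and sort tokens so that
    'The Matrix (1999)' and 'Matrix, The 1999' compare equal."""
    tokens = re.sub(r"[^a-z0-9 ]", "", name.lower()).split()
    return " ".join(sorted(tokens))

def match_names(left, right, threshold=0.85):
    """Pair each name in `left` with its best fuzzy match in `right`,
    keeping only pairs whose similarity clears the threshold."""
    pairs = []
    for a in left:
        best, best_score = None, 0.0
        for b in right:
            score = difflib.SequenceMatcher(
                None, normalize(a), normalize(b)).ratio()
            if score > best_score:
                best, best_score = b, score
        if best_score >= threshold:
            pairs.append((a, best, round(best_score, 2)))
    return pairs

print(match_names(["The Matrix (1999)", "Heat"],
                  ["Matrix, The 1999", "HEAT", "Se7en"]))
# [('The Matrix (1999)', 'Matrix, The 1999', 1.0), ('Heat', 'HEAT', 1.0)]
```

The nested loop is O(n·m); in practice you would block on a cheap key (say, the first token) before comparing pairs at all.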
When people call 5G "safe" or "unsafe", it's clear they don't know about radiation effects on human health. The correct answer is "we don't have data".
Edit: The fact that people would rather make fun of me and cast me as a "non-ionizing radiation tin foil hatter" (I am not one; non-ionizing radiation has no negative long-term health effects) is, I hope, a demonstration of the power of one's own biases, and of forcing them onto a stranger to fit one's own convenient worldview.
JUNIOR DEV: My code is simple and easy to understand.
MID-LEVEL DEV: My code is subtle, clever, innovative, expressive, hyper-optimized, and ingenious.
SENIOR DEV: My code is simple and easy to understand.
--
Source: John Arundel, https://twitter.com/bitfield/status/1219174978748370945
But take that for what it's worth: https://twitter.com/bitfield/status/1184741088067833856
It hypothesizes a meta-cognitive bias whereby a person who is incompetent is cognitively incapable of determining their level of competence.
Everyone has biases, and the reason citing DK may come across the wrong way is it tends to suggest that it's just Other People who have those biases.
From this piece[1] arguing that we routinely misinterpret DK:
> I suspect we find this sort of explanation compelling because it appeals to our implicit just-world theories: we’d like to believe that people who obnoxiously proclaim their excellence at X, Y, and Z must really not be so very good at X, Y, and Z at all, and must be (over)compensating for some actual deficiency.
But, generally, if the question of competence comes up, don't reflexively cite DK. It's been around for ages, it's not novel, and it's usually being miscited.
[1]: https://www.talyarkoni.org/blog/2010/07/07/what-the-dunning-...
Everyone thinks they can, but almost nobody actually will. I'm not a particularly good writer, but I can do what most people can't or won't: actually follow through on writing something month after month after month.
My "long term" goal in terms of scientific and mathematical maturity is to git gud at calculating path integrals in quantum mechanics.
Chaff: Understands computer science. Knows how to scale. Mentor of junior engineers.
Wheat: Understands people. Knows when to scale. Mentee of junior engineers.
More generally, the right answer to almost any question in software is: It depends. (That is, it depends on the usually large set of explicit and implicit, technical and non-technical requirements.)
If they start saying something negative about someone without self-reflection, they are exhibiting the Dunning-Kruger effect.
Here is how I would screen developers for a general developer position. I would set a 1-hour limit and make sure the candidate knows the test is graded by a computer only. The goal is to see whether they can read instructions and write simple code. I don't care how they write the code; I only care that they can.
Any question could be as complex or challenging in its requirements as necessary, but the answer would always be a small function with few parts. The idea is to test reading comprehension, the ability to follow instructions, and the ability to write a simple function. An example of the output format and data type would be explicitly stated with each question.
An example question:
A customer is spending cash to purchase a drink. Write a function that receives the cash as the first argument and the cost of the drink as the second argument, and returns an object indicating the change in coins, preferring the largest denominations first.
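A passing answer might look something like the sketch below. US coin denominations are assumed (the question doesn't specify a currency), and amounts are converted to whole cents to dodge floating-point drift; the function name and error handling are my own choices, not part of the question.

```python
def make_change(cash, cost):
    """Return the change owed as a dict of coin counts,
    largest denominations first. Amounts are in dollars;
    we work in whole cents to avoid float drift."""
    denominations = [  # assumed US coins, largest first
        ("quarter", 25), ("dime", 10), ("nickel", 5), ("penny", 1),
    ]
    remaining = round((cash - cost) * 100)  # change owed, in cents
    if remaining < 0:
        raise ValueError("insufficient cash")
    change = {}
    for coin, value in denominations:
        count, remaining = divmod(remaining, value)
        if count:
            change[coin] = count
    return change

print(make_change(5.00, 3.62))  # {'quarter': 5, 'dime': 1, 'penny': 3}
```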
1. Specify the grading criteria of the test: 10 points for each question correctly answered within the given time period, with no penalty for answering a question incorrectly. The cumulative execution time of all answered questions is multiplied by 100 if measured in milliseconds (or by 100,000 if in nanoseconds) and deducted from the final score. The idea is to solve as many questions as possible, but if there are a limited number of positions and tied scores, slow code is the tie-breaker that disqualifies a candidate.
2. Stress that code style is irrelevant. The test will be graded by a computer compiling the code and executing it against several scenarios in an answer bank. A human will not review the answers submitted.
3. Tell candidates that once the 1-hour test period has elapsed, they will have an optional 15 extra minutes to review all previously attempted questions.
4. Have the candidate complete 3 practice questions before the test timer starts, to familiarize them with the expectations of the test platform and what appropriate answers look like.
5. Randomly pull questions from a pool of 200+ questions, one at a time, to ensure the candidate is focused on the question presented instead of selectively gaming the question list.
6. Ask the candidate to write a function in a designated space that addresses the problem question/statement.
7. Allow the candidate to test their answer and review the output before submitting it and moving on to the next question.
8. Also allow the candidate to skip to the next random question, with a 2-minute time penalty.
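Steps 1 and 2 imply a grading harness along these lines. This is a hypothetical sketch: `grade`, the answer-bank shape, and the penalty arithmetic are all invented here to mirror the scoring rule above (10 points per correct question, total runtime in milliseconds times 100 deducted).

```python
import time

def grade(submissions, answer_bank):
    """Hypothetical auto-grader: award 10 points per question whose
    function matches every (args, expected) case in the answer bank,
    then deduct total execution time in milliseconds times 100."""
    score, total_seconds = 0, 0.0
    for question, func in submissions.items():
        start = time.perf_counter()
        correct = all(func(*args) == expected
                      for args, expected in answer_bank[question])
        total_seconds += time.perf_counter() - start
        if correct:
            score += 10
    return score - (total_seconds * 1000) * 100  # ms * 100 penalty

# A trivial question bank and a correct one-question submission:
bank = {"sum": [((2, 3), 5), ((0, 0), 0)]}
print(grade({"sum": lambda a, b: a + b}, bank))  # slightly under 10
```

Note how heavily the time penalty weighs: at 100x per millisecond, a correct answer that takes 100ms scores worse than no answer, which is presumably the intent of using speed as a tie-breaker.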
That is for a developer. For an architect, I would have them write an essay on a prompt provided by the business. Architects review business needs, distill requirements, and communicate goals to provide a platform that serves the business needs while minimizing complexity as much as possible. If they can do that in writing, they can do it in software. The most important skill is the ability to communicate clearly amid competing factors. Have 3 separate non-developers read and grade the essay.