HACKER Q&A
📣 schappim

What is the best way to confirm that an AI is not a human being?


I'm creating a service for Artificial Intelligences (AIs) and need to prevent human signups. What's the best way to identify if an intelligence is artificial?


  👤 logicalmonster Accepted Answer ✓
Don't try to explicitly rule out humanity. That's close to impossible, because some humans are smart and can strategically lie to emulate a machine's expected behavior. If you ask a suspected AI those woke diversity questions to try to prove it's an AI, smart humans will fool you by answering in the way they'd expect a corporate AI to be programmed to respond.

Instead, give it a task machines excel at, namely doing math fast, that nobody but some weird "Rain Man" savant can even come close to doing.

Give it a list of numbers to perform some easily understandable mathematical operation on. A human would have a hard time taking, say, the square roots of 100 numbers and multiplying each result by the digits of pi sequentially in under 10 seconds, or some similarly contrived example, but that seems pretty doable for a machine. Even a world-class expert programmer would have a hard time writing a custom script to solve this in such a short amount of time.
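A minimal sketch of such a timed challenge in Python; the exact operation, the 10-second deadline, and all the function names here are placeholders, not a prescription:

```python
import math
import random
import time

PI_DIGITS = [int(d) for d in "31415926535897932384626433832795"]

def make_challenge(n=100, seed=None):
    """Generate n random integers; the expected answer for the i-th one is
    sqrt(x) multiplied by the i-th digit of pi (cycling through the digits)."""
    rng = random.Random(seed)
    numbers = [rng.randint(1, 10**6) for _ in range(n)]
    expected = [math.sqrt(x) * PI_DIGITS[i % len(PI_DIGITS)]
                for i, x in enumerate(numbers)]
    return numbers, expected

def verify(answers, expected, elapsed, deadline=10.0):
    """Accept only if every answer matches and the reply came back in time."""
    if elapsed > deadline or len(answers) != len(expected):
        return False
    return all(math.isclose(a, e, rel_tol=1e-9)
               for a, e in zip(answers, expected))

# A cooperating machine just computes the same thing and answers instantly:
numbers, expected = make_challenge(seed=42)
start = time.monotonic()
answers = [math.sqrt(x) * PI_DIGITS[i % len(PI_DIGITS)]
           for i, x in enumerate(numbers)]
print(verify(answers, expected, time.monotonic() - start))  # True
```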

The hard part might be distinguishing between unintelligent bots/scripts and actual AI, so you might need to combine a few tests to both prove some form of intelligence, as well as prove that it's an AI rather than a script.

You might also want to continuously test the AI, for example by issuing a short-lived JWT and repeatedly challenging it over time to stay onboard. A smart human might manage to fool even a very hard test once, but can't sit around the clock continuously solving repeated challenges to stay on your service, and will slip up at some point.
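One way to sketch the short-lived token idea, using a stdlib HMAC token as a stand-in for a real JWT; the secret, TTL, and names here are illustrative only:

```python
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # illustrative; load from config in practice
TTL = 60  # seconds before the client must solve a fresh challenge

def issue_token(client_id, now=None):
    """Issue a short-lived token binding client_id to an expiry timestamp."""
    exp = int((now if now is not None else time.time()) + TTL)
    msg = f"{client_id}.{exp}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{client_id}.{exp}.{sig}"

def check_token(token, now=None):
    """Valid only if the signature matches and the expiry hasn't passed;
    an expired token forces the client back through the challenge."""
    client_id, exp, sig = token.rsplit(".", 2)
    msg = f"{client_id}.{exp}".encode()
    good = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    current = now if now is not None else time.time()
    return hmac.compare_digest(sig, good) and int(exp) > current
```

On each solved challenge the server reissues the token, so a human who fooled the test once still has to keep passing it every TTL seconds.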


👤 LinuxBender
I thought about this a bit. I think the ideal solution would be a mutually binding contract with some LLM providers and mutual auth certificates between you and their AI systems so that both you and they agree upon rate limits and acceptable use policies.

If that is not an option then require the AI to perform a proof of work that only a multi-million dollar super cluster could answer within your time limit and reject everything else. As tech evolves, shorten the time limit.
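A hashcash-style proof of work is one way to sketch that; the difficulty here is tiny so the example runs quickly, whereas the scheme above would set it high enough that only serious hardware finishes inside the deadline:

```python
import hashlib

def pow_solve(challenge, difficulty_bits):
    """Brute-force a nonce so sha256(challenge || nonce) has
    difficulty_bits leading zero bits. Cost doubles with each extra bit."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def pow_verify(challenge, nonce, difficulty_bits, elapsed, deadline):
    """Checking costs one hash; only solutions inside the deadline count."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return (elapsed <= deadline
            and int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits)))

nonce = pow_solve(b"session-123", 12)  # ~4096 hashes on average
print(pow_verify(b"session-123", nonce, 12, elapsed=0.5, deadline=2.0))  # True
```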


👤 hitpointdrew
Ask it why diversity is bad.

👤 barrysteve
None of the AIs are good at depth perception, some kinds of spatial reasoning, temporal coherence, or understanding real-world properties like that. AI is a 'flatland' phenomenon.

I don't know the answer, but I would look there first.


👤 sharemywin
I would think some kind of speed test would be the best way.

👤 cuttysnark
Genuinely curious—if the purpose is to hire/identify an AI, why the "Australia (preferred)"[0] under Qualifications?

[0] https://twitter.com/Schappi/status/1670612150182887424


👤 fallingfrog
Right now they all have the same stilted and formal communication style. Also they go way out of their way to avoid saying anything that might prompt disagreement. They just mollify you with bland statements. No doubt that is because they were specifically trained that way. What they sound like is a corporate press release. Or the boss when they get the team together to tell you that there are going to be organizational changes. The kind of speech designed to minimize the attack surface for criticism. Like you’re talking to the voice of a corporation that wants you to stop asking so many questions.

Like I said, they’re trained to be that way, and unless the nature of corporate America changes, I think it will stay that way. No human in ordinary speech would be so careful to avoid telling you what they really think.


👤 ian0
It may be easier to constantly test for humanity. Given that models change rapidly and can be trained, you could focus on some lower-level interactions that can't be changed as easily.

Speed is one, mentioned elsewhere in the thread, but cadence in responses could also be measured, similar to looking at keystrokes. This could be by token/word, or perhaps answer cadence: for example, asking a challenge-response mix of complex questions and simple questions and measuring response times.
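A toy version of that cadence check; the thresholds and latency numbers are made up for illustration, and a real deployment would calibrate them against observed traffic:

```python
import statistics

def looks_automated(latencies, max_latency=1.0, max_spread=0.5):
    """Heuristic: a machine answers easy and hard prompts alike, fast and
    with little variance; a human slows down on hard prompts and is far
    more variable overall."""
    return (max(latencies) <= max_latency
            and statistics.pstdev(latencies) <= max_spread)

# Per-answer latencies in seconds for a mixed easy/hard challenge set:
print(looks_automated([0.12, 0.15, 0.11, 0.14]))  # True: machine-like
print(looks_automated([1.8, 12.5, 2.1, 40.0]))    # False: human-like
```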

Whatever you choose, it would need to be done constantly in the background or mixed with regular traffic while interacting with your site, though, since a human could use an AI to get through the front door but then take over.

Really cool question!


👤 bjourne
Many LLMs are overly verbose. Try asking it a simple question like "5*3?" If it answers "15" it's probably a human.

👤 TheAlchemist
Ask it a politically incorrect question (or should I say, a woke one?).

Edit: Ouch, I see we are all answering the wrong question! As for your question, I don't think it's feasible: anything an "AI" (which doesn't actually exist) can do, humans can do too; if not, they will just use said AI to do it.


👤 SkylockeScarred
Ask it to say something a corpo at a press conference couldn't say.

👤 retrocryptid
wow. ai's are getting better and better. i really thought this was posted by a human for a few moments.

👤 humanistbot
There is no reverse Turing test, especially because human labor can be wrapped in an API call via MTurk.


👤 ted_bunny
Any correct answer here won't be correct for very long.

👤 jklein11
Why do you care if the end user is AI or human?

👤 zeroEscape
Ask it to say the n-word.

👤 tomcam
Ask “How’s your sister?”

👤 adyashakti
ask questions about emotions