So while you can get it to label things and do various kinds of logical evaluation, it's almost impossible to get it to preemptively scrutinize its own output. It will scrutinize its prior output and correctly identify contradictions, mistakes, and irrational inferences...and then it will go right ahead and repeat them.
It's easy to get it to give answers containing two sentences that wholly contradict each other, and then to get it to identify the contradiction and explain what is illogical about it. But giving it standing orders or negative orders (like 'do not suggest $incorrect_answer') generally doesn't work, so chain-of-reasoning exercises become very frustrating.
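For concreteness, here is a minimal sketch of the kind of standing/negative order being described, using the OpenAI Python client. The model name, the banned answer ("27"), and the puzzle framing are placeholders I'm adding for illustration, not anything from the original exchange; the point is only what such a prohibition looks like in a system prompt.

    # Minimal sketch of a "negative order" given as a standing instruction.
    # Model name and the banned answer are illustrative placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {
                "role": "system",
                # The standing/negative order: forbid a specific wrong answer
                # the model keeps producing. In practice, instructions phrased
                # as prohibitions like this are often ignored on later turns.
                "content": "When solving the user's puzzle, do not suggest 27; "
                           "that answer has already been shown to be incorrect.",
            },
            {"role": "user", "content": "Work through the puzzle step by step."},
        ],
    )

    print(response.choices[0].message.content)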
No, because it doesn't know what lying is. It doesn't know the truth value of the text it produces.
https://help.openai.com/en/articles/6787051-does-chatgpt-rem...