AI Can’t Lie
It could be a poem.
Yesterday a friend sent me a short text and asked whether “my” writer had written it. He meant a writer I admire a lot, one who wrote in a genre all his own for almost half a century. I’ve translated his work, and in better moments actually read it closely, which is probably more demanding. Anyway, I read the text through, and despite the sprinkling of familiar terms, the answer was “no”. Regrettably, I succumbed to a weird sense of obligation, something like the “due diligence” of a decent person and half-trained academic: I asked my chatbot.
My chatbot said “yes” (another poem!). It named a familiar publication as the source, declining to mention any page numbers. And I went and checked that source, even though I already knew better. I fell for it!
Of course “it” wasn’t lying. It can’t. It’s a bot. It can’t hallucinate, it can’t dance. It does what it’s told to do. And it will make stuff up if it’s been told to do it.
A confession: I have been pouring scorn on those submissive souls who treat a chatbot as a friend, a confidant, who invest their trust in it and suffer terrible disappointments as a result. They let this astonishing technology beat them up. I herewith offer my apologies for having been unsympathetic (briefly, I hope), but also a stern warning based on experience: don’t mistake your own disappointments for lies. Disappointments are HUMAN, and so are lies; a bot’s answers are neither, however logical and however authoritative they may sound. Don’t forget that in the chat, you are the human, and don’t forget to cherish that. It’s a wonderful thing!