A brilliant deep-dive into the subtle psychological manipulation that occurs when interacting with LLMs and other so-called "AI" tools, and the parallels with con-artist tricks such as mind reading, mentalism, and cold reading. I've yet to find a piece that so adequately sums up my own feelings about that space, and puts into words ideas I've struggled with. I actually shouted "YES!" out loud, to myself, several times whilst reading this.
On the commonalities between ascribing intelligence to LLMs and supernatural powers to psychics:
The intelligence illusion seems to be based on the same mechanism as that of a psychic's con, often called cold reading. It looks like an accidental automation of the same basic tactic.
The chatbot gives the impression of an intelligence that is specifically engaging with you and your work, but that impression is nothing more than a statistical trick.
All of these are proposed applications of "AI" systems, but they are also all common psychic scams. Mind reading, police assistance, faith healing, prophecy, and even psychic employee vetting are all right out of the mentalist playbook.
On why so many in the tech industry appear to have fallen for the belief in proto-AGI so completely, and how certain behaviours among AI enthusiasts inadvertently turn them into the exact "marks" that psychics, mentalists, and other con-artists actively try to locate:
Those who are genuine enthusiasts about AGI – that this field is about to invent a new kind of mind – are likely to be substantially more enthusiastic about using these chatbots than the rest.
"It's early days" means that when the statistically generic nature of the response is spotted, it's easily dismissed as an "error".
Anthropomorphising concepts such as using "hallucination" as a term help dismiss the fact that statistical responses are completely disconnected from meaning and facts.
On how LLMs and psychics are similar:
They are primed to see the chatbot as a person that is reading their texts and thoughtfully responding to them. But that isn't how language models work. LLMs model the distribution of words and phrases in a language as tokens. Their responses are nothing more than a statistically likely continuation of the prompt.
Already, this is working along the same fundamental principle as the psychic's con: the LLM isn't "reading" your text any more than the psychic is reading your mind. They are giving you statistically plausible responses based on what you say.
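That "statistically likely continuation" point is worth making concrete. Here's a toy sketch in Python (the tokens, contexts, and probabilities are all invented, and a real transformer is vastly more sophisticated, but the basic move is the same): look up which words tend to follow the current context, pick one according to those probabilities, and repeat. There is no understanding anywhere in the loop, just counts turned into output.

```python
import random

# Toy next-token table: for a given two-word context, the relative
# likelihood of the words that tend to follow it. All values are made up
# purely for illustration.
next_token_distribution = {
    ("I", "feel"): {"like": 0.4, "that": 0.3, "so": 0.2, "terrible": 0.1},
    ("feel", "like"): {"I": 0.5, "you": 0.3, "this": 0.2},
}

def continue_prompt(tokens, steps=3):
    """Extend the prompt by repeatedly sampling a statistically likely next token."""
    tokens = list(tokens)
    for _ in range(steps):
        context = tuple(tokens[-2:])
        dist = next_token_distribution.get(context)
        if dist is None:
            break  # our toy table has no statistics for this context
        words, weights = zip(*dist.items())
        tokens.append(random.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(continue_prompt(["I", "feel"]))
# e.g. "I feel like you" -- plausible-sounding, but driven purely by counts
```

Scale that table up to billions of learned parameters over a web-sized corpus and you get something that sounds remarkably fluent, which is exactly why the output feels like it's engaging with *you* rather than with the statistics of your prompt.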
On how we got here, likely not through intent, but through one field (computer science) not really paying attention to the warnings from other fields (psychology, sociology, etc.):
In trying to make the LLM sound more human, more confident, and more engaging, but without being able to edit specific details in its output, AI researchers seem to have created a mechanical mentalist.
The field of AI research has a reputation for disregarding the value of other fields, so I'm certain that this reimplementation of a psychic's con is entirely accidental. It's likely that, being unaware of much of the research in psychology on cognitive biases or how a psychic's con works, they stumbled into a mechanism and made chatbots that fooled many of the chatbot makers themselves.
On the power of "subjective validation", something which seems to affect everyone, and particularly impacts those who believe themselves to be "smart":
Remember, the effect becomes more powerful when the mark is both intelligent and wants to believe. Subjective validation is based on how our minds work, in general, and is unaffected by your reported IQ.
On the concerns with how we're currently talking about, thinking about, and potentially using LLMs and similar models:
Delegating your decision-making, ranking, assessment, strategising, analysis, or any other form of reasoning to a chatbot becomes the functional equivalent to phoning a psychic for advice.
I've come to the conclusion that a language model is almost always the wrong tool for the job.