Cover image: The Bocca della Verità fortune-telling machine at the Ripley’s Believe It or Not! attraction in Grand Prairie, Texas. Credit: Carol M. Highsmith, Public domain.
Perhaps it’s a stroke of unintentional irony that Meta spent billions of dollars to build a failed digital utopia that mirrored our delusional theories about the human mind. The Metaverse was an expensive monument to the “brain in a vat” fallacy: a sprawling virtual world where we expected to find the future of human connection by floating around as legless, bobbing cartoon torsos.
It’s easy to see why Mark Zuckerberg was convinced this was the future. He was simply “trying to find food” where he found it twenty-five years ago, but now with 3-D glasses. This digital experiment was the logical continuation of how we already lived online. We spend our days curating galleries of idealized, beauty-filtered faces, presenting versions of ourselves that are scrubbed of biological blemish. We’ve turned our social lives into a series of static, glowing portraits, living out “perfect” lives that exist only in the two-dimensional plane of a screen. We’ve become so accustomed to these curated shells that we’ve forgotten that real human connection is more than a face and a voice.
What about our bodies?
A curious thing about our latest obsession with artificial intelligence is the sheer, plodding literalism of it all. We’ve spent decades convincing ourselves that the mind is like a computer, only to act astonished when the machines we built to compute things started producing patterns that mirror our predictable natural language. Is it lost on us that the human mind is not a sequence of abstract logical instructions, but a vital evolutionary byproduct of having a body that navigates through space, gets hungry, gets tired, and eventually stops working altogether? Can we strip away the flesh, the social relations, the emotions, and the physical struggle of existence, and still be left with something we could reasonably call an intelligence?
Has society even agreed on what intelligence is?
Umm, not so much.
We’re currently in the middle of a massive goalpost-moving exercise because we realized, a bit too late, that we never bothered to achieve a consensus on what “mind” or “intelligence” was before we started building things that look like they have it. For decades, Alan Turing’s test was the gold standard, but in hindsight it seems a lot like a parlor trick. Did it measure intelligence based on any theory of mind or psychometrics, or just the ability to convince a bored human, over a low-fidelity channel, that the machine was also human? We set the bar at “mimicry,” and now that we’ve cleared it, we’re left chatting with a feedback machine that has no “one” behind it, and we’re once again confronted with the perennial existential questions regarding theory of mind, agency, purpose, and personhood. If anything, using psychometrics for benchmarking LLMs resurrected important questions about the challenges of measuring intelligence in the first place. Economist Charles Goodhart wisely observed that when a measure becomes a target, it ceases to be a good measure.
What we currently call LLMs should have been advertised as “an exciting new architecture for symbolic data storage, search, synthesis, and retrieval℠.” It would have been factual and much less confusing. We’ve been deeply conditioned by a relatively “flat” conception of information systems, not unlike a warehouse of boxes or a library of bound pages. LLM training takes that same data (the ambition being all of the world’s digitized information) and stores it in a high-dimensional vector space, producing a distillation of probabilistically weighted relationships. We call this a model. The transformer architecture provides novel ways to interact with this storage paradigm, one that behaves like a mind, with apparently emergent knowledge, understanding, synthesis, and insight (a toy sketch below makes the “distilled storage” idea concrete). However, it also raises rather interesting questions about our historic norms of intellectual property and copyright law. If we’ve assimilated your original copyrighted artifacts into a new distilled medium, are they still yours? If I “invent” a post-apocalyptic story about an unnamed father and his young son walking through a burned, ash-covered America, “carrying the fire,” struggling to survive, avoiding cannibals and searching for food, until the father dies before they reach the coast, do we owe American novelist Cormac McCarthy something?
Yes, we do. That’s McCarthy’s story.
Many were highly critical of the landmark copyright case that found Robin Thicke and Pharrell Williams had infringed on Marvin Gaye’s 1977 hit “Got to Give It Up” with their 2013 song “Blurred Lines.” Now it seems that case might actually be an incredibly useful precedent in the age of AI.
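Here is that promised sketch: a bigram model, orders of magnitude simpler than a transformer and trained on an invented scrap of text, but the storage principle is the same. Nothing below comes from any real system; it’s a caricature meant only to show that the original words survive as weights rather than as a quoted copy.

```python
import random
from collections import defaultdict

# A deliberately tiny stand-in for LLM training: a bigram model.
# Real transformers learn far richer relationships, but the core idea
# holds: the training text is "stored" only as weighted links between
# tokens, never as a quoted copy.

corpus = (
    "the father and the son walked the road south "
    "carrying the fire through the ash toward the coast"
)

# "Training": count how often each word follows each other word.
weights = defaultdict(lambda: defaultdict(int))
tokens = corpus.split()
for prev, nxt in zip(tokens, tokens[1:]):
    weights[prev][nxt] += 1

def generate(start: str, length: int = 12) -> str:
    """Sample new text from the stored weighted relationships."""
    word, output = start, [start]
    for _ in range(length):
        followers = weights.get(word)
        if not followers:
            break
        choices, counts = zip(*followers.items())
        word = random.choices(choices, weights=counts)[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))
# Emits something like "the road south carrying the fire through the
# ash": no sentence was stored verbatim, yet the output is plainly
# derivative of the source.
```

Swap the bigram counts for billions of learned attention weights and you have, in caricature, exactly the question the McCarthy example poses.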
If anything, all this speculation is a testament to the wonderful potential of collective imagination. After all, we’ve spent thousands of years grappling with the concept of gods that are like us but “plus.” We have this deep, almost desperate longing for the supernatural. We assume that if a machine is smart enough to solve a complex physics equation, it must also be “awake,” experiencing some internal light of consciousness. We are projecting our own biological baggage of ego, fear of death, and our drive to dominate onto a very sophisticated probability machine. In hindsight, perhaps we’ll look back on this chapter as a genius marketing strategy by AI companies that used the threat of a fictional doomsday machine just to get us into the store to look at a set of practical and innovative labor-saving appliances, not unlike the “Daisy” ad from Lyndon B. Johnson’s 1964 presidential campaign, but selling data center subscriptions.
This is where Yann LeCun, one of the godfathers of modern AI, steps in to provide a much-needed reality check. LeCun’s own new venture is a contrarian bet against the current methodology of building AI, raising over a billion dollars in seed funding. He is skeptical of the “doomsday” sci-fi scenarios because he distinguishes between intelligence itself and the biological drives we keep projecting onto it. He argues that current LLMs are essentially “world-model-less.” They are masters of the statistical distribution of tokens, but they don’t understand the physical reality those tokens represent. They lack what he calls an “objective-driven” architecture, one grounded in the physical world.
LeCun points out that a house cat has more “intelligence” than the most advanced AI in terms of navigating reality, understanding cause and effect, and possessing a common-sense model of the world. A cat is embodied; it has a stake in its own survival. It doesn’t need to read 10 trillion words to know that if it jumps off a ledge, gravity will take over. Our current LLMs, by contrast, are like disembodied, highly literate bacteria in vats that have only ever “seen” the world through a straw of data.
By LeCun’s logic, we are confusing “fluent communication” with “autonomous agency.” Real intelligence, the kind that actually interacts with the universe, requires a world model and a set of hard-coded biological or physical constraints. Without embodiment and the survival imperatives that come with it, an AI isn’t a “living” god or a rising super-intelligence; it’s just a wonderful and useful tool.
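It’s worth being concrete about what “objective-driven” might look like, so here is a toy sketch. Everything below is invented for illustration (the one-dimensional world, the WORLD_MODEL table, the cost function), and it is nothing like LeCun’s actual proposals in scale or sophistication. The structural point is the contrast: the agent acts by imagining the consequences of each action against a model of the world and minimizing a cost, rather than by predicting the next token.

```python
# A toy "objective-driven" agent: an invented illustration, not
# LeCun's actual architecture. The agent never predicts text; it
# predicts consequences in a one-dimensional world and picks the
# action that minimizes a cost.

WORLD_MODEL = {            # what the agent believes each action does
    "step_forward": +1,    # moves one unit toward the ledge
    "step_back": -1,
    "stay": 0,
}
LEDGE = 5                  # positions past this mean a fall

def predicted_cost(position: int, goal: int) -> float:
    """Cost = distance to goal, plus a huge penalty for falling."""
    cost = float(abs(goal - position))
    if position > LEDGE:
        cost += 1000.0     # the model "knows" gravity wins
    return cost

def choose_action(position: int, goal: int) -> str:
    # Imagine each action's outcome in the world model, pick the best.
    return min(
        WORLD_MODEL,
        key=lambda a: predicted_cost(position + WORLD_MODEL[a], goal),
    )

pos = 0
for _ in range(8):
    action = choose_action(pos, goal=7)   # the goal sits past the ledge
    pos += WORLD_MODEL[action]
print(pos)  # 5: the agent stops at the ledge, no 10 trillion words required
```

Like the cat, the toy agent never has to fall off the ledge; the world model lets it reject the action before ever taking it.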
Philosopher George Lakoff argues that moral systems are not abstract codes handed down from a logical vacuum; they are a projection of human bodily experience. We equate “good” with “upright” or “clean” because of our physical orientation and our biological need for health. These metaphors create a moral landscape rooted in the human condition.
AI lacks this biological imperative. A machine does not feel pain or experience the vulnerability of a physical body. It cannot truly comprehend empathy because empathy requires a shared physical mapping of one person’s experience onto another. Without a body, a machine operates on a set of programmed constraints rather than a lived understanding of harm or well-being. This creates a fundamental disconnect in ethical decision-making. Human ethics prioritize the preservation and flourishing of biological life. Machine ethics prioritize the optimization of a given objective function. A system designed to maximize efficiency might view human needs as obstacles. It treats moral problems as math problems.
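To see how literally “moral problems as math problems” plays out, consider a contrived optimizer. Every number and name below is made up, but the failure pattern is general: whatever the objective function leaves out, the optimizer treats as free to consume.

```python
# A contrived optimizer: maximize output from a worker's 16 waking
# hours. All numbers are invented; the failure pattern is the point.

def output(work_hours: float) -> float:
    return 10.0 * work_hours          # the only thing the objective sees

def best_schedule(min_rest: float = 0.0) -> float:
    """Grid-search work hours, honoring only the constraints we encode."""
    candidates = [h / 2 for h in range(0, 33)]       # 0.0 .. 16.0 hours
    feasible = [h for h in candidates if 16 - h >= min_rest]
    return max(feasible, key=output)

print(best_schedule())              # 16.0: rest was never in the objective
print(best_schedule(min_rest=8.0))  # 8.0: "ethics" arrives as a bolted-on
                                    # constraint, not as anything felt
```

The machine’s “care” for rest exists only because someone remembered to write it down as a constraint, which is precisely the disconnect the paragraph above describes.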
An embodied ethics recognizes that our values are tied to our mortality. We value justice and care because we are social animals who can suffer and perish. An artificial agent can simulate these values and follow rules regarding fairness or safety, but remains an outsider to the actual sensation of moral weight. True ethical agency requires a stake in the world and the possibility of loss. As long as intelligence remains detached from the flesh, its ethics will remain a simulation of our own.
The sociopathic or psychopathic human mind is a useful comparison. In these individuals, the absence of the emotions that ground a moral and ethical compass is a neurophysiological reality. Psychopaths exhibit reduced gray matter volume in the anterior cingulate cortex and the amygdala, areas essential for processing fear, social cues, and the emotional resonance of others. When the connection between the prefrontal cortex and these deeper emotional centers is frayed, the individual possesses the cognitive capacity to understand social rules while lacking the visceral aversion to causing harm.
This neurobiological profile suggests that the ceiling for artificial ethical agency is not unlike high-functioning sociopathy. A machine can learn to navigate the intricate landscape of human norms through pattern recognition and reward maximization. It can master the syntax of virtue without ever accessing the semantics of suffering. If the human moral imperative is an emergent property of a biological system designed to avoid pain and seek collective survival, then a non-biological entity is inherently barred from that experience. It may calculate the most beneficial outcome with flawless precision, yet it remains indifferent to the weight of the choice.
Given that anywhere from 1–4% of the human population may be on the spectrum of sociopathy, and most of these individuals are exceptionally high functioning and outwardly ethical, we have to ask: why do ethics matter? Pun intended. There are plenty of thought experiments in science fiction that grapple with this. The Borg, introduced in Star Trek: The Next Generation, represent a utilitarian extreme where individual ethics are discarded in favor of collective efficiency and technological perfection. Cosmicism, the literary philosophy developed by H.P. Lovecraft, posits that the universe is fundamentally indifferent to human existence, rendering our moral frameworks insignificant against the vast, incomprehensible scale of the cosmos. Similarly, Stanisław Lem’s Solaris explores the failure of human ethics when confronted with a truly alien intelligence, suggesting that our concepts of right and wrong are localized projections that cannot translate to entities operating outside our biological and social evolutionary experience.
We have spent several thousand years engaged in a collective exercise: trying to convince ourselves that we need a handbook to keep us from being evil. Apparently, without a heavy stone tablet or a dense volume of categorical imperatives, we would immediately set about dismantling each other, and sadly, history suggests that we often do.
This project has taken two main forms. First, you have the theological approach, which is usually some variation of supernatural punitive reinforcement: do not steal, because an omnipotent eye is watching, and the punishment for a lapse in social cohesion is eternal pain. Then there are the philosophers who, perhaps realizing that a god who burns his subjects is a bit much, tried to replace the divine judge with the cold clarity of Reason. You get the Utilitarians, who treat human suffering like a ledger, suggesting we can simply calculate the greatest good by crunching the numbers on pleasure and pain. Or you have the Kantians, with their Categorical Imperative, who insist that you must act only according to rules that could become universal laws. Both systems want a logic that exists in a vacuum, a set of abstract instructions so perfect that they don’t require you to actually look at the person standing in front of you.
But the reality of why we don’t generally go around murdering each other is that we are, at our core, incredibly vulnerable, hairless primates who realized very early on that we are pathetic in isolation. Evolutionary biology and neuroscience are finally catching up to what every hunter-gatherer group already knew: ethics is deeply rooted in our DNA as a survival strategy for an embodied creature. Love and empathy did not descend from a cloud; they emerged from collaborative survival.
Our brains, and the brains of many of our fellow species, are wired for group cohesion and harmony, which often extends even across species. The neural pathways for physical pain and social rejection overlap so significantly that being cast out of the tribe is processed by the mind with the same intensity as a broken limb. For most of our history, banishment was a death sentence. We developed empathy because we had to share a physical map of the world. We had to feel the hunger of the person next to us to ensure the collective stayed fed. The “moral imperative” is actually just the biological reality of being a social animal that cannot survive on its own. While we don’t need a formal system of rules to tell us that loneliness is bad or that cooperation and charity feel good, the ongoing projects of theology, philosophy, and law remain critical because it has been incredibly difficult for humanity to maintain harmony and avoid the destruction of life without metaframeworks to remind us that we’re all in this together. There has rarely been a historic juncture where cooperation and negotiation wouldn’t have been preferable to mass warfare and destruction.
The post-industrial era has established a world that is at odds with that biology. We have engineered a social architecture based on neolocality, where the economic engine demands we sever our ties to the ancestral tribe, move to an apartment in a city where we know no one, sell our labor to a faceless entity, and hope to find a new tribe. This is a departure from the last several hundred thousand years of human experience. Because the brain still processes social isolation as a threat to survival, we find ourselves trying to patch the hole with digital substitutes. We try to simulate the tribe through online friends who provide the dopamine hit of a notification but none of the physical safety of a shared embodied space.
Now, we have reached the next stage of combating alienation: the turn toward the Large Language Model. Some are beginning to seek companionship from sophisticated statistical mirrors. We talk to chatbots not because they are sentient, but because they will always listen. We’ve created a technology that mimics the high-functioning sociopath, when what we need is a good friend to give us a hug and tell us that if we stick together, we’re going to be okay.