The linguistic gap between what AI does and what we say it does is more than a grammatical quirk—it is a semantic sleight of hand that risks shielding corporations from accountability. While the public often perceives AI as a sentient entity that “thinks” or “understands,” a new corpus study reveals a surprising divide between the hype of everyday conversation and the disciplined restraint of professional journalism.
- The Professional Buffer: News writers are far less likely to anthropomorphize AI than the general public, likely due to strict editorial standards like those from the Associated Press.
- The Spectrum of Meaning: Not all “mental verbs” are equal; describing an AI’s “need” for data is a technical requirement, whereas an AI’s “need to understand” implies a cognitive capacity it doesn’t possess.
- The Accountability Gap: Using human-centric language (“AI decided”) creates a dangerous narrative that obscures the human engineers and organizations actually responsible for the system’s output.
The Deep Dive: The Psychology of the “Ghost in the Machine”
The study, published in Technical Communication Quarterly by researchers from Iowa State, Brigham Young, and the University of Northern Colorado, analyzed over 20 billion words from the News on the Web (NOW) corpus. The core issue is anthropomorphism: the attribution of human traits to non-human systems. In the era of Large Language Models (LLMs), resisting that attribution is an uphill battle, because these systems are specifically designed to mimic human cadence and tone, nudging users to treat the interface as a peer rather than a tool.
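At its simplest, the kind of analysis the researchers describe boils down to counting how often mental verbs follow an AI subject. The toy Python sketch below illustrates that idea only; the verb list, subject patterns, and sample sentences are invented for demonstration and do not reflect the study’s actual NOW-corpus pipeline.

```python
# Illustrative only: a toy tally of "mental verbs" appearing right after an
# AI subject, loosely mirroring the collocation counting a corpus study might
# do. Verb list, subject patterns, and sample text are assumptions, not the
# authors' actual methodology.
import re
from collections import Counter

MENTAL_VERBS = {"knows", "thinks", "understands", "believes", "needs", "wants", "decides"}
AI_SUBJECTS = r"\b(?:AI|ChatGPT|the model|the chatbot)"

def count_mental_verbs(text: str) -> Counter:
    """Count mental verbs occurring immediately after an AI-subject phrase."""
    counts = Counter()
    # Match e.g. "ChatGPT knows", "the model needs"
    for match in re.finditer(AI_SUBJECTS + r"\s+(\w+)", text, flags=re.IGNORECASE):
        verb = match.group(1).lower()
        if verb in MENTAL_VERBS:
            counts[verb] += 1
    return counts

sample = ("ChatGPT knows the answer. The model needs data. "
          "AI decides who gets a loan. The chatbot wants to help.")
print(count_mental_verbs(sample))
# Counter({'knows': 1, 'needs': 1, 'decides': 1, 'wants': 1})
```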
When we say “ChatGPT knows the answer,” we are relying on a mental shortcut. In reality, the system is executing a probabilistic calculation of the next most likely token in a sequence. It has no beliefs, no intentions, and no consciousness. The researchers found that while “needs” was the most frequent mental verb used with AI (661 instances), it often functioned as a description of a requirement (e.g., “needs data”) rather than a desire. However, the subtle shift toward phrases like “needs to understand the real world” signals a creeping acceptance of AI as a reasoning agent.
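To make the difference between “knowing” and next-token prediction concrete, here is a minimal sketch of that calculation. The vocabulary and scores are invented for illustration; a real model ranks tens of thousands of candidate tokens using learned weights, but the principle is the same.

```python
# A minimal sketch of what "ChatGPT knows" actually amounts to: the model
# scores every candidate next token and emits the most probable one.
# The vocabulary and logits below are hypothetical.
import math

def softmax(logits):
    """Turn raw scores into a probability distribution."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [x / total for x in exps]

# Hypothetical scores for continuations of "The capital of France is"
vocab = ["Paris", "Lyon", "London", "banana"]
logits = [9.1, 4.2, 3.7, -2.0]

probs = softmax(logits)
best = max(range(len(vocab)), key=lambda i: probs[i])

for token, p in zip(vocab, probs):
    print(f"{token:>8}: {p:.4f}")
print("next token:", vocab[best])  # "Paris" -- a statistical pick, not a belief
```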
This linguistic drift isn’t accidental. From a corporate perspective, framing AI as an autonomous “thinker” is a convenient shield. If an AI “decides” to produce a biased result, the blame is shifted toward an opaque algorithm rather than the humans who curated the training data or set the guardrails.
The Forward Look: From “Knowing” to “Deciding”
As we move from generative AI (chatbots) to agentic AI (systems that can execute tasks across apps and platforms), the language will inevitably shift from verbs of cognition (“knows,” “thinks”) to verbs of agency (“decides,” “acts,” “chooses”). This evolution will create a critical friction point in legal and ethical frameworks.
A “Responsibility Crisis” is looming. If the media and the public fully adopt the narrative that AI is an independent decision-maker, the push for “AI personhood” or limited legal liability for developers will intensify. The findings of this study suggest that while journalists are currently acting as a bulwark against this trend, the sheer volume of everyday anthropomorphic speech may eventually erode these editorial standards.
The next frontier for this research will likely be the impact of these rare but potent anthropomorphic phrases. Even if only 1% of news articles attribute “intent” to AI, those specific narratives often capture the public imagination more than a thousand dry, technical explanations. The battle for how we perceive AI is being fought not in the code, but in the dictionary.