molly's guide to cyberpunk gardening

please stop anthropomorphizing generative AI (yes I know it's hard)

A lot of unspoken assumptions underlie the whole "AI thing" right now. But I think the biggest is that LLMs think like we do.

We jump to this conclusion almost immediately, because LLMs generate plausible-reading text. The LLM sounds like a person, so it must think like a person, somehow, right? We then end up anthropomorphizing the LLM, sometimes without even realizing we're doing it.

I just ran across this example from the Distributed AI Research Institute (DAIR) in, of all things, a zine about *how to think and behave critically in the face of generative AI*:

"These large language models are more concerned with sounding accurate than being accurate. You might have heard the term "hallucinate," which basically means that AI makes things up all the time to sound right. You know that person who is constantly trying to one up you? They have been to a better beach, or buy a more ethical tote bag or know a bigger comedian. An AI wants to prove to you that it knows it's [sic] shit. It would rather lie than admit it cannot generate something useful."

DAIR: AI-Z: conversations about resistance and generative AI (zine)

I genuinely appreciate this zine and the contributions of all involved. Yet I'd like to extend one recommendation: If we're going to think critically about AI, stop anthropomorphizing it.

I get that it's easy to do. I definitely understand how it's easier for audiences unfamiliar with AI to understand "an AI wants to prove to you that it knows its shit" than to grasp how its training and predictive capabilities work - especially when companies keep those details locked down harder than the gold at Fort Knox.

What concerns me is that using humanlike terms to describe LLMs exacerbates many of the problems with LLMs - like assuming they can be our friends or replace therapists/teachers/professionals of all stripes. Anthropomorphizing the LLM also obscures the people *actually responsible* for developing these models and thrusting them upon us.

Consider:

"[LLM]s are more concerned with sounding accurate...." The LLM isn't "concerned" with anything. Its creators are concerned that you think the LLM is reliable. This disincentivizes them to program an LLM that produces output like "I don't know" or "this model's training set does not include that data" or "input not recognized."

Programming the model to signal the user when it cannot generate a response based on existing data would immediately demonstrate the limits of these models. Right now, journalists, managers, and a whole lot of other people (alarmingly, my students among them) accept uncritically that LLMs are somehow "smarter" than any of the tools that preceded them. If we got back "this model cannot infer a reliable answer from its training data" every time an LLM currently hallucinates, very few of us would be using them for long.

(That's assuming such a setting is even possible. Hallucination seems baked into the current design of all LLMs.)
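To make the idea concrete, here's a deliberately naive sketch of what an application-layer version might look like: compare token probabilities against a threshold and abstain below it. Everything in it is invented for illustration - the numbers, the threshold, the message - and, as the parenthetical above notes, it wouldn't actually fix hallucination, because a model can assign high probability to something false.

```python
# Toy sketch only. The "answer" is a made-up list of (token, probability)
# pairs; a real system would need access to an actual model's token
# probabilities, and high probability still wouldn't guarantee truth.

ANSWER = [("The", 0.92), ("moon", 0.88), ("is", 0.95), ("made", 0.41),
          ("of", 0.97), ("cheese", 0.33), (".", 0.99)]

CONFIDENCE_FLOOR = 0.5  # arbitrary cutoff, chosen for this illustration


def respond(tokens, floor=CONFIDENCE_FLOOR):
    # If any token falls below the floor, abstain instead of answering.
    if any(p < floor for _, p in tokens):
        return "This model cannot infer a reliable answer from its training data."
    return " ".join(tok for tok, _ in tokens)


print(respond(ANSWER))  # prints the abstention message for this made-up data
```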

"They have no accountability and simply lie all the time." Again: an LLM is basically a big probability algorithm. It cannot be held to account. Similarly, it cannot lie because it cannot form intent. LLMs just extrude whatever word or phrase seems most probable next; they don't read or comprehend their own outputs, and thus they cannot "know" whether they're "lying" or not.

You know who CAN lie and who CAN be held to account? The people who build these models and force them into every corner of our lives whether we want them or not.

"An AI wants to prove to you that it knows its shit." Again: no, the AI cannot "want" anything, any more than your toaster can "want" to toast your bread. There are, however, trillions of dollars now riding on a handful of hubris-infected techbros proving to your boss that an AI can replace you.

I know how easy it is to anthropomorphize an LLM. I've done it myself. These are terms we understand, and they feel natural - the closest thing to what we're actually seeing. Yet they end up reinforcing the problems with LLMs in daily life. The biggest problem they reinforce: obscuring the actions of the *human beings* responsible for these models - and the options we have to hold those humans accountable.
