
AI Will Turn Our Lives into The Truman Show


Imagine a dystopian not-too-distant future in which we each inhabit our own AI-driven digital filter bubble, crafted for us alone and designed to serve corporate interests. This future resembles that of 1998’s The Truman Show, in which the eponymous protagonist, played by Jim Carrey, has unknowingly lived his entire life inside a reality TV show, his every experience choreographed by a production studio.

One subset of AI, large language models, won’t turn our lives into reality TV shows—no such luck. Instead, personalized AI agents threaten to cage each of us in an individualized and illusory unreality, harvesting our digital dollars and walling us off from genuine connections with other people.

We are well on our way. The October beta rollout of Apple Intelligence may be a watershed moment in our relationship with artificial intelligence: this new release will deliver a highly accessible large language model experience to more than a billion people worldwide. But Apple is just one of many companies, including OpenAI, Google and a host of start-ups, that are developing individually personalized large language models.




The principle behind this so-called personalized alignment is that the AI model will learn about the individual user—what they know and what they don’t, their likes and dislikes, their values and goals, their attention span and preferred forms of media—and adapt accordingly.

The aim is to place a bespoke AI between each user and the vast quantity of information on the Internet, finding the information they want, repackaging it to match their tastes and background knowledge, and delivering it to their screen. Should this project succeed, our ability to make collective sense of the world will be further fractured. We will no longer inhabit one of several competing filter bubbles; each of us will be in our own private filter bubble.
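
To make the pattern concrete, here is a minimal, hypothetical sketch of that bespoke AI in the middle. It is written in Python, and every name in it (UserProfile, rank_items, personalization_prompt) is an invented illustration rather than any company’s actual system: a profile of one user decides which items are surfaced and how they are rewritten before reaching that user’s screen.

from dataclasses import dataclass, field

@dataclass
class UserProfile:
    # What the personalized model is assumed to learn about one user.
    interests: list[str] = field(default_factory=list)
    reading_level: str = "general"
    preferred_format: str = "short summary"

def rank_items(items: list[str], profile: UserProfile) -> list[str]:
    # Toy relevance score: count how many of the user's interests appear in each item.
    def score(item: str) -> int:
        return sum(keyword.lower() in item.lower() for keyword in profile.interests)
    return sorted(items, key=score, reverse=True)

def personalization_prompt(item: str, profile: UserProfile) -> str:
    # In a real system this prompt would be sent to a language model;
    # here it only shows how the profile shapes what gets generated.
    return (
        f"Rewrite the following as a {profile.preferred_format} for a reader at a "
        f"{profile.reading_level} level, emphasizing {', '.join(profile.interests)}:\n{item}"
    )

if __name__ == "__main__":
    profile = UserProfile(interests=["Ohio State", "college football"])
    feed = [
        "Conference realignment reshapes college football schedules.",
        "Ohio State announces its new quarterback depth chart.",
        "Stock markets close mixed ahead of earnings season.",
    ]
    top_story = rank_items(feed, profile)[0]
    print(personalization_prompt(top_story, profile))

Running this for two users with different profiles and the same feed yields two different screens; that divergence, multiplied across every reader, is the fracturing described above.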

That’s in a best-case scenario, where the systems are designed solely for the benefit of the users. But of course, they are unlikely to remain benign. As with everything on the Internet, they will become “enshittified,” leveraged by the tech industry to separate us from our money while hoarding our attention.

Consider an American tradition like college football. Are you a superfan of the Ohio State Buckeyes? Do you spend an inordinate amount of time clicking on Ohio State football stories, purchasing Ohio State merch, subscribing to Ohio State videos, podcasts and newsfeeds? You’ll be fed this sort of information on all your devices, 24 hours a day. Some algorithms will even learn your daily schedule and respond accordingly, pushing information at you during precisely those times when you’re most likely to be looking.

Football rivalries aside, this may sound harmless (albeit boring); in many ways, it already describes our online experience. Algorithms at Facebook, X, Instagram, Google and beyond already track our interests and habits to choose what fills our screens. But the next step unleashes large language models to generate memes and even whole articles tailored to each of us and our interests, designed to do nothing more than keep us engaged with the sort of content, such as ads disguised as information, that increases our odds of making a purchase.

LLMs will create fully written articles about your favorite college football team, their recruiting process, and prospects for the coming season. You’ll listen to AI-generated podcasts that sound like sports talk radio. And you’ll be fed conspiracy theories about a rival football team: how they engage in recruiting violations, how they’ve cheated, how members of their coaching staff are tied to a cocaine empire, for instance.

This is a miserable reality for at least two reasons. For one, there are neither computational methods nor ethical incentives in place to ensure that the information you receive is true. The goal of the enterprise, of course, is not to depict reality. LLMs will generate what philosophers, in their technical jargon, call “bullshit.” They are designed to sound plausible and authoritative, not to be factually accurate.

The second reason is scarier still than this blatant lack of regard for the truth. Our hypothetical Ohio State football fan will no longer share an understanding of college football with anyone else, not even with other Ohio State fans. This fan will run on information generated for them alone. LLMs are already so efficient that reusing content is unnecessary: why generate the same article for two different people when LLMs can just as easily create two articles, each specifically tailored to its reader? This vision is unsettling even when we are talking about sports and entertainment. But what of institutions with more direct social consequences? Religion? Education? Politics?

Commentators across the political spectrum bemoan the fall of the press and the polarization of everything. Conversations around the holiday table have already become impossible for many extended families.

Bad as the status quo might be, stranger times lie ahead that may make us long for today’s echo chambers. Soon, our bubbles will shrink further and further, until our digital worlds involve only ourselves. In an AI-mediated future, every one of us will live in a private Truman Show. As a society, we will be utterly incapable of making fruitful collective decisions because we will have no shared understanding of the world.

What recourse is there? For starters, remember the advice from our parents, and their parents before them, going back at least to the widespread adoption of television: go outside and play. Stop staring at that screen. Hang with your friends, in person. Find your entertainment in spaces with actual people, exchanging thoughts and creations with each other.

Even online, we must keep our understanding of the world grounded in human-authored documents and artifacts. Valuing what humans create is not merely a matter of authenticity; it also ensures that we focus on arguments that an author cared enough to make, on conversations that speakers cared enough to have.

Otherwise The Truman Show’s premise becomes our reality: we unknowingly inhabit a phony world where our every experience is curated for profit. Even more existentially alienating? Living in a Truman Show where the director, the producer and the only one watching is an AI.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.


