In the fall of 2016—the year in which the proportion of online adults using social media reached eighty per cent—I published an Op-Ed in the Times that questioned the popular conception that you need to cultivate a strong social-media brand to succeed in the job market. “I think this behavior is misguided,” I wrote. “In a capitalist economy, the market rewards things that are rare and valuable. Social media use is decidedly not rare or valuable.” I suggested that knowledge workers instead spend time developing useful skills, with the goal of distinguishing themselves in their chosen fields. The article took off, driven by a provocative title added by my editors: “Quit Social Media. Your Career May Depend on It.” For a brief period, it even topped the paper’s most-e-mailed list.
The backlash soon followed. The CBC invited me on a national radio program to discuss the essay; I was surprised, just a couple of minutes into the segment, when the host introduced two unannounced guests tasked with rebutting my ideas. A prominent communications professor began e-mailing me invitations to debate. Most notably, the Times took the unusual step of publishing a response Op-Ed two weeks later. It was titled “Don’t Quit Social Media. Put It to Work for Your Career Instead,” and it was written by Patrick Gillooly, the director of digital communications and social media at the career site Monster. Gillooly provided a point-by-point disputation of my column, delivered in a tone of unabashed techno-optimism. “In the case of clients in particular,” he wrote, “exposing yourself to diverse views expressed on social media will make it easier to find common ground.”
My essay stood out partly because technology criticism during this period tended to pull its punches. Respected writers and researchers, including Baratunde Thurston and danah boyd, had popularized the idea of taking breaks from Internet use, indicating a growing recognition of the costs of online life—but breaks were about as far as most people were willing to go. A notable exception was the technologist Jaron Lanier, whose 2010 manifesto, “You Are Not a Gadget,” delivered a scathing indictment of the consolidation of Internet activity onto a small number of corporate-owned platforms, such as Facebook and Twitter. The book was widely praised for its originality and cheerful techno-hippie vibes, but many readers rejected its most radical assertion: that the shift toward a Web 2.0 world, in which posting information was as easy as consuming it, might have been a mistake. A review of “You Are Not a Gadget” in the Guardian noted that “Lanier is clearly very well-informed about IT,” but then went on to describe the “social and spiritual strand of the book” as sounding like the “anxiety of an ageing innovator.” The implication was clear: don’t take this guy too seriously.
In the moment, the response to my criticism, as well as to thinkers like Lanier, felt like a cultural immune reaction. The idea of stepping away altogether from powerful new tools like social media just wasn’t acceptable; readers needed to be assured that such advice could be safely ignored. The communications theorist Neil Postman, who died in 2003, probably wouldn’t have been surprised by this reaction. Though he is best known for his anti-television polemic, “Amusing Ourselves to Death,” Postman’s masterwork is his 1992 book, “Technopoly: The Surrender of Culture to Technology,” in which he argues that our relationship with technology has passed through three distinct phases. First, tool-using cultures deployed inventions to “solve specific and urgent problems of physical life” (using a water wheel to mill grain faster, say). Later, the rise of industrial capitalism pushed us toward “technocracy”—an era in which technology made a bid to become more important than the values that had previously structured existence. Rural communities were gutted, inequality was amplified, spirituality was sidelined, and the landscape was crisscrossed with telegraphs and railroads—all in the service of newer, more capable tools and the relentless demands of efficiency.
During this period, Postman writes, “tradition, social mores, myth, politics, ritual, and religion have to fight for their lives.” And fight they did. William Blake, John Ruskin, and Henry David Thoreau, among many others, decried the deprivations of industrialization, while the Luddite movement protested automation and the concentration of profits. Mark Twain expressed an almost boyish enthusiasm for the new industrial cotton mills in his memoir, “Life on the Mississippi,” but also wrote “Huckleberry Finn,” which Postman describes as “nothing less than a celebration of the enduring spirituality of pretechnological man.”
The big surprise in Postman’s book is that, according to him, we no longer live in a technocratic era. We now inhabit what he calls technopoly. In this third technological age, Postman argues, the fight between invention and traditional values has been resolved, with the former emerging as the clear winner. The result is the “submission of all forms of cultural life to the sovereignty of technique and technology.” Innovation and increased efficiency become the unchallenged mechanisms of progress, while any doubts about the imperative to accommodate the shiny and new are marginalized. “Technopoly eliminates alternatives to itself in precisely the way Aldous Huxley outlined in ‘Brave New World,’ ” Postman writes. “It does not make them illegal. It does not make them immoral. It does not even make them unpopular. It makes them invisible and therefore irrelevant.” Technopoly, he concludes, “is totalitarian technocracy.”
Postman’s book, which I was reading around the time I wrote my Times Op-Ed, helped me make sense of the urgent reaction my ideas sparked. In a technopoly, the notion that we might abandon a new tool like social media wasn’t something to consider or discuss—instead, it was something to be rendered invisible and irrelevant. The use of the phrase “quit social media” in a headline of a prominent publication was like a temporary glitch in the matrix that needed to be rapidly patched, then explained away. What I didn’t realize back in 2016, however, was that, although the grip of technopoly was strong, it was also soon to weaken. The first cracks in its foundations were already beginning to form.
A major source of this destabilization was the Trump-Clinton election cycle, which, among other things, created a subtle but consequential shift in our relationship with the products coming out of Silicon Valley. Conservatives became wary of censorship on social media, liberals grew uneasy about revelations of foreign disinformation, and everyone found Cambridge Analytica’s strip-mining of Facebook data to be alarming. These bipartisan concerns led many exhausted partisans to mentally recategorize these tools. Where once they had seen platforms like Facebook as useful and in some sense mandatory, they started treating them more warily. This shift in perception might have seemed small at the time. But it was a seed, and from it grew an increasingly strident rejection of technopoly’s demand that we accommodate and embrace whatever innovation we’re told to be excited about next.
Today, we can see a fully formed version of this new attitude reflected on a few different fronts. Consider our current struggles to make sense of generative A.I. tools, such as OpenAI’s ChatGPT. The companies developing these models have been singing a hymn straight out of Postman, arguing that their advances are inevitable and that the best we can do is adapt to their presence. Last spring, when Sam Altman was pressed by Congress about the potential negative consequences of the technology, he steered the conversation away from why OpenAI was developing it and toward how the work was unfolding, proposing the creation of a new federal agency to enforce some vague notion of “safety standards.” But not everyone has simply capitulated. The Writers Guild of America, in its new contract with the Alliance of Motion Picture and Television Producers, has won strong constraints on how A.I. tools can be used in writers’ rooms. The W.G.A. insisted that, even if it might be technically possible to outsource parts of screenwriting to large language models, we aren’t obligated to follow this path toward increasing automation.
The Authors Guild’s recent class-action lawsuit against OpenAI, seeking “redress” for the company’s unauthorized use of copyrighted books to help train its A.I. models, also rejects technopoly thinking. If successful, the lawsuit may force A.I. developers to remove certain copyrighted content from their training sets, limiting the ability of their models to mimic specific author styles and voices. “Generative A.I. is a vast new field for Silicon Valley’s longstanding exploitation of content providers,” the novelist Jonathan Franzen, a class representative in the suit, explained. But just because this potential for exploitation exists doesn’t mean it has to be acted on. What if we simply decided to leave professional creative writing to humans?
Concerns about the mental-health impacts of social media on teen-agers have created another rupture with technopoly. In the past, the standard response to well-founded worries about teen social-media use has been to suggest the adoption of even more complex technology—to filter harmful content, say, or minimize addictive design features. A bipartisan bill titled the Kids Online Safety Act (KOSA), introduced last year by Senators Richard Blumenthal and Marsha Blackburn, provides an example of this type of thinking in its proposed suite of complicated technical fixes and regulations, designed, in theory, to make social media safer for kids. In doing so, it accepts the premise that the platforms have become an inevitable part of childhood, leaving us the task of figuring out how to accommodate them. But not everyone has heeded the demand to accept and adapt. As KOSA was being debated, Vivek Murthy, the U.S. Surgeon General, began suggesting that perhaps we should just stop letting kids use these services altogether. “If parents can band together and say, you know, as a group, we’re not going to allow our kids to use social media until sixteen or seventeen or eighteen, or whatever age they choose, that’s a much more effective strategy,” he said, in a CNN interview. Jean Twenge, a professor of psychology at San Diego State University, who is responsible for some of the early research pointing to the harms of social media, supports giving Murthy’s proposal some regulatory teeth. Speaking to the Washington Post, she said, “Let’s enforce the existing minimum of thirteen”—the age limit often listed by social-media platforms, and specified by the Children’s Online Privacy Protection Act of 1998. “Even better, raise the minimum age to sixteen. I think that would make the biggest difference.”
This emerging resistance to the technopoly mind-set doesn’t fall neatly onto a spectrum with techno-optimism at one end and techno-skepticism at the other. Instead, it occupies an orthogonal dimension we might call techno-selectionism. This is a perspective that accepts the idea that innovations can significantly improve our lives but also holds that we can build new things without having to accept every popular invention as inevitable. Techno-selectionists believe that we should continue to encourage and reward people who experiment with what comes next. But they also know that some experiments end up doing more harm than good. Techno-selectionists can be enthusiastic about artificial intelligence, say, while also taking a strong stance on settings where we should block its use. They can marvel at the benefits of the social Internet without surrendering their kids’ mental lives to TikTok.
There’s a utilitarian logic to this approach. The impact of technologies on our well-being can be both significant and unpredictable. The technopoly mind-set holds that we should accept the positive impacts of new tools alongside the negative ones, hoping that, over time, the positives will outweigh the negatives. This optimism might prove justified, but techno-selectionists think that we can do better. If we aggressively repudiate the technologies that are clearly causing net harm, while continuing to embrace those that seem to be more beneficial, we can direct our techno-social evolution much more intentionally. Such attempts at curation—which can occur at every scale, from personal decisions to community norms and civic regulation—are unavoidably messy. Often, it’s necessary to forgo some positive developments in order to eliminate larger negative impacts. And curation can easily go wrong. What is widespread vaccine hesitancy, for example, if not techno-selectionism gone awry?
Yet these shortcomings don’t justify a status quo of meek adjustment. Just because a tool exists and is popular doesn’t mean that we’re stuck with it. Given the increasing reach and power of recent innovations, adopting this attitude might even have existential ramifications. In a world where a tool like TikTok can, seemingly out of nowhere, suddenly convince untold thousands of users that maybe Osama bin Laden wasn’t so bad, or in which new A.I. models can, in the span of only a year, introduce a distressingly human-like intelligence into the daily lives of millions, we have no reasonable choice but to reassert autonomy over the role of technology in shaping our shared story. This requires a shift in thinking. Decades of living in a technopoly have taught us to feel shame in ever proposing to step back from the cutting edge. But, as in nature, productive evolution here depends as much on subtraction as on addition.
In 2016, when I published my Op-Ed, many people weren’t yet ready to more aggressively curate the tools we allow in our lives. This seems to be changing. “Once a technology is admitted, it plays out its hand: it does what it is designed to do,” Postman wrote. “Our task is to understand what that design is—that is to say, when we admit a new technology to the culture, we must do so with our eyes wide open.” Our eyes are finally open. We’re left now to act on what we see. ♦
Cal Newport is a contributing writer for The New Yorker and an associate professor of computer science at Georgetown University.