When Tristan Harris realised he wanted to warn the world about the dangers posed by social media, he co-founded the Center for Humane Technology. It was 2018 and Harris, a former Google design ethicist who had studied the addictive design of digital tools, was terrified about what “app” culture was doing to our brains. So through his non-profit organisation and documentaries such as The Social Dilemma, he urged policymakers to pay attention.

Fast forward five years, and it is clear that policymakers did not listen. Despite social media companies trumpeting their improved oversight, controls remain lax. Meanwhile, as academics such as Jonathan Haidt have noted, the mental health of young people has deteriorated dramatically in tandem with the rise of social networks.

Last month Harris, along with his colleague Aza Raskin, gave a presentation about artificial intelligence at the Aspen Ideas Festival that was probably the most lucid and chilling AI tutorial I have ever seen.

They frame the threat posed by AI as an extension of the dangers of social media. Borrowing a metaphor from sci-fi movies, Harris suggests that “You can think of social media as really first contact between humanity and AI.” When AI was used merely for curation, “Humanity lost.”

Now, the “second contact” is occurring, as large machine-learning models such as ChatGPT not only curate content but create it. What makes this phase so scary, Harris argues, is that AI tools are proliferating faster than anything seen before, and “learning” from data to acquire “emergent” skills that the researchers themselves did not expect. Machines have recently done things without being taught to do them, as this magazine reported extensively back in April. As Jeff Dean, a senior Google executive, has observed, while there are now “dozens of examples of emergent abilities, there are currently few compelling explanations for why these capabilities emerge”.

It’s increasingly difficult for insiders to track exactly what is going on, let alone for those on the outside. A senior figure at one big AI group recently told me that the problem is that there is no single body of code programming the AI that regulators could examine, since the machines constantly learn and relearn, making connections without leaving a trace.

Worryingly, self-learning can also cause AI to do unpredictable things, known as “hallucinations”, whether because of a data glitch, a half-noticed human error in the instructions given to the AI, or some other factor we don’t yet understand. To take one example, Harris and Raskin cited how researchers training AI platforms to search for less toxic drug compounds discovered that a tiny change in the instructions caused them to generate 40,000 toxic molecules instead, including the VX nerve agent.

There’s also a more immediate risk worrying eminent tech figures such as Geoffrey Hinton, another former Google researcher: open-source AI tools make it easy for anyone to spew out misinformation that masquerades convincingly as truth. As Harris laments, “We should all be concerned about the arms race in AI. I am not sure whether democracy can survive.”

Harris and Raskin deny that they want to shut down AI; Raskin himself is using it to try to decode animal communication. But they do want sensible curbs on its development. These include a moratorium on the release of any more cutting-edge open-source AI programs until guardrails are in place; draconian controls over the chips that power the machine-learning models (without which nothing can happen); proper legal frameworks to make companies liable if AI systems malfunction; and a public awareness campaign around misinformation ahead of the 2024 elections.

Other AI researchers I have spoken with recently seem to agree. Just after Harris and Raskin’s presentation in Aspen came a speech from James Manyika, a senior executive at Google, in which he pledged to collaborate with policymakers. Leaders of the four key companies developing AI systems — Microsoft, Google, OpenAI and Anthropic — have also met with the White House to discuss controls.

Yet the parallels with social media show how hard it is for governments to challenge Big Tech, or to expect corporations to self-regulate while competing for market share. Right now, it is doubly difficult for Congress to impose controls on US technology when politicians think they are in an arms race with China.

I fervently hope Harris and Raskin’s AI campaign succeeds, but it will only do so with serious political will and strong voter pressure for change. It’s a task that involves all of us. Having botched things with social media, let’s hope we can do better with AI. There’s little time to lose.

Follow Gillian on Twitter @gilliantett and email her at gillian.tett@ft.com
