
There Is a Solution to AI’s Existential Risk Problem

15 November 2024 at 12:11

Technological progress can excite us, politics can infuriate us, and wars can mobilize us. But faced with the risk of human extinction that the rise of artificial intelligence is causing, we have remained surprisingly passive. In part, perhaps this was because there did not seem to be a solution. This is an idea I would like to challenge.

AI’s capabilities are ever-improving. Since the release of ChatGPT two years ago, hundreds of billions of dollars have poured into AI. These combined efforts will likely lead to Artificial General Intelligence (AGI), where machines have human-like cognition, perhaps within just a few years.

Hundreds of AI scientists think we might lose control over AI once it gets too capable, which could result in human extinction. So what can we do?

Read More: What Donald Trump’s Win Means For AI

The existential risk of AI has often been presented as extremely complex. A 2018 paper, for example, called the development of safe human-level AI a “super wicked problem.” This perceived difficulty had much to do with the proposed solution of AI alignment, which entails making superhuman AI act according to humanity’s values. AI alignment, however, was a problematic solution from the start.

First, scientific progress in alignment has been much slower than progress in AI itself. Second, the philosophical question of which values to align a superintelligence to is incredibly fraught. Third, it is not at all obvious that alignment, even if successful, would be a solution to AI’s existential risk. Having one friendly AI does not necessarily stop other unfriendly ones.

Because of these issues, many have urged technology companies not to build any AI that humanity could lose control over. Some have gone further; activist groups such as PauseAI have indeed proposed an international treaty that would pause development globally.

Many do not see that as politically palatable, since it may still take a long time before the missing pieces of AGI fall into place. And do we have to pause already, when this technology can also do a lot of good? Yann LeCun, AI chief at Meta and a prominent existential-risk skeptic, says that the existential risk debate is like “worrying about turbojet safety in 1920.”

On the other hand, technology can leapfrog. If we get another breakthrough such as the transformer, a 2017 innovation which helped launch modern Large Language Models, perhaps we could reach AGI in a few months’ training time. That’s why a regulatory framework needs to be in place before then.

Fortunately, Nobel Laureate Geoffrey Hinton, Turing Award winner Yoshua Bengio, and many others have provided a piece of the solution. In a policy paper published in Science earlier this year, they recommended “if-then commitments”: commitments to be activated if and when red-line capabilities are found in frontier AI systems.

Building upon their work, we at the nonprofit Existential Risk Observatory propose a Conditional AI Safety Treaty. Signatory countries, which should include at least the U.S. and China, would agree that once we get too close to loss of control, they will halt any potentially unsafe training within their borders. Once the most powerful nations have signed this treaty, it is in their interest to verify each other’s compliance, and to make sure uncontrollable AI is not built elsewhere, either.

One outstanding question is at what point AI capabilities are too close to loss of control. We propose to delegate this question to the AI Safety Institutes set up in the U.K., U.S., China, and other countries. They have specialized model evaluation know-how, which can be developed further to answer this crucial question. Also, these institutes are public, making them independent from the mostly private AI development labs. The question of how close is too close to losing control will remain difficult, but someone will need to answer it, and the AI Safety Institutes are best positioned to do so.

We can mostly still get the benefits of AI under the Conditional AI Safety Treaty. All current AI is far below loss of control level, and will therefore be unaffected. Narrow AIs in the future that are suitable for a single task—such as climate modeling or finding new medicines—will be unaffected as well. Even more general AIs can still be developed, if labs can demonstrate to a regulator that their model has loss of control risk less than, say, 0.002% per year (the safety threshold we accept for nuclear reactors). Other AI thinkers, such as MIT professor Max Tegmark, Conjecture CEO Connor Leahy, and ControlAI director Andrea Miotti, are thinking in similar directions.

Fortunately, the existential risks posed by AI are recognized by many close to President-elect Donald Trump. His daughter Ivanka seems to see the urgency of the problem. Elon Musk, a critical Trump backer, has been outspoken about the civilizational risks for many years, and recently supported California’s legislative push to safety-test AI. Even the right-wing Tucker Carlson provided common-sense commentary when he said: “So I don’t know why we’re sitting back and allowing this to happen, if we really believe it will extinguish the human race or enslave the human race. Like, how can that be good?” For his part, Trump has expressed concern about the risks posed by AI, too.

The Conditional AI Safety Treaty could provide a solution to AI’s existential risk, while not unnecessarily obstructing AI development right now. Getting China and other countries to accept and enforce the treaty will no doubt be a major geopolitical challenge, but perhaps a Trump government is exactly what is needed to overcome it.

A solution to one of the toughest problems we face—the existential risk of AI—does exist. It is up to us whether we make it happen, or continue to go down the path toward possible human extinction.

Why a Technocracy Fails Young People

14 November 2024 at 17:03

As a chaplain at Harvard and MIT, I have been particularly concerned when talking to young people, who hope to be the next generation of American leaders. What moral lessons should they draw from the 2024 election? Elite institutions like those I serve have, after all, spent generations teaching young people to pursue leadership and success above all else. And, well, the former-turned-next POTUS has become one of the most successful political leaders of this century.

The electoral resurrection of a convicted felon whose own former chief of staff, a former Marine Corps General no less, likened him to a fascist, requires far more than re-evaluation of Democratic Party policies. It demands a re-examination of our entire society’s ethical—and even spiritual—priorities.

It’s not that students on campuses like mine want to be the next Trump (though he did win a majority among white, male, college-educated voters). It is, however, common for them to idolize billionaire tech entrepreneurs like Elon Musk and Peter Thiel. Both Musk and Thiel factored significantly in Trump and Vance’s victory; both will be handsomely rewarded for their support.

But is a technocracy the best we can do as a model for living a meaningful life today? It is past time to recognize that the digital technologies with which many of us now interact from the moment we wake until the moment we drift into sleep (and often beyond that) have ceased to be mere “tools.” Just as we went from being users to being the products by which companies like Facebook and Google make trillions in advertising revenue, we have now become the tools by which certain technologists can realize their grandest financial and political ambitions.

Policy reform alone—while necessary—won’t save us. But neither will tech figures like Musk or Thiel. In fact, we need an alternative to an archetype that I like to call “The Drama of the Gifted Technologist,” of which Musk, Thiel, and other tech leaders have become avatars.

Based on the ideas of the noted 20th century psychologist Alice Miller, and on my observation of the inner lives of many of the world’s most gifted students, the “Drama of the Gifted Technologist” starts with the belief that one is only “enough,” or worthy of love and life, if one achieves extraordinary things, namely through leadership in tech or social media clout.

I’ve seen this “drama” become a kind of “official psychopathology of the Ivy League” and Silicon Valley. It began, in some ways, with the accumulation of “friends” on Facebook over a decade ago, to gain social relevance. And it has now graduated to become the psychological and even spiritual dynamic driving the current AI arms race, also known as “accelerationism.” See, for example, influential billionaire VC and AI cheerleader Marc Andreessen’s famous “Techno-Optimist Manifesto,” which uses the phrase “we believe” 133 times, arguing that “any deceleration of AI will cost lives…” and that “AI that was prevented from existing is a form of murder.” Or Sam Altman’s urgent quest for trillions of dollars to create a world of AI “abundance,” consequences for the climate, democracy, or, say, biological weapons be damned. Or Thiel’s belief that one needs a “near-messianic attitude” to succeed in venture capital. Or young men’s hero worship of tech “genius” figures like Musk, who, as former Twitter owner Jack Dorsey said, is the “singular solution”: the one man to single-handedly take humanity beyond earth, “into the stars.”

And why wouldn’t the drama of the gifted technologist appeal to young people? They live, after all, in a world so unequal, with a future so uncertain, that the fortunate few really do live lives of grandeur in comparison to the precarity and struggle others face.

Read More: Inside Elon Musk’s Struggle for the Future of AI

Still, some might dismiss these “ideas” as mere hype and bluster. I’d love to do so, too. But I’ve heard far too many “confessions” reminiscent of what famous AI “doomer” Eliezer Yudkowsky once said most starkly and alarmingly: that “ambitious people would rather destroy the world than never amount to anything.”

Of course, I’m not saying that the aspiring leaders I work with are feeling so worthless and undeserving that they put themselves on a straight path from aspirational tech leadership towards world-destruction. Plenty are wonderful human beings. But it doesn’t take many hollow young men to destroy, if not the whole world, then at least far too much of it. Ultimately, many gifted young adults are feeling extraordinarily normal feelings: Fear. Loneliness. Grief. But because their “drama” doesn’t permit them to simply be normal, they too often look for ways to dominate others, rather than connect with them in humble mutual solidarity.

In the spring of 2023, I sat and discussed all this over a long lunch with a group of about 20 soon-to-graduate students at Harvard’s Kennedy School of Government. The students, in many cases deeply anxious about their individual and collective futures, asked me to advise them on how to envision and build ethical, meaningful, and sustainable lives in a world in which technological (and climate) change was causing them a level of uncertainty that was destabilizing at best, debilitating at worst. I suggested they view themselves as having inherent worth and value, simply for existing. Hearing that, one of the students responded—with laudable honesty and forthrightness—that she found that idea laughable.

I don’t blame her for laughing. It truly can be hard to accept oneself unconditionally, at a time of so much dehumanization. Many students I meet find it much easier to simply work harder. Ironically, their belief that tech success and wealth will save them strikes me as a kind of “digital puritanism”: a secularized version of the original Puritanism that founded Harvard College in the 1630s, in which you were either one of the world’s few true elites, bound for Heaven, or destined for the fire-and-brimstone vision of Hell. Perhaps tech’s hierarchies aren’t quite as extreme as traditional Puritanism’s, which allowed no way to alter destiny, and where the famous “Protestant work ethic” was merely an indicator of one’s obvious predestination. But given the many ways in which today’s tech is worsening social inequality, the difference isn’t exactly huge.

The good news? Many reformers are actively working to make tech more humane.

Among those is MacArthur fellow and scholar of tech privacy Danielle Citron, an expert in online abuse, who told me she worries that gifted technologists can “…lose their way behind screens, because they don’t see the people whom they hurt.”

“To build a society for future cyborgs as one’s goal,” Citron continued, “suggests that these folks don’t have real, flesh and blood relationships…where we see each other in the way that Martin Buber…described.”

Buber, an influential Jewish philosopher whose career spanned the decades before and after the Holocaust, was best known for his idea, first fully expressed in his 1923 essay “I and Thou,” that human life finds its meaning in relationships, and that the world would be better if each of us imagined our flesh-and-blood connections with one another—rather than achievements or technologies—as the ultimate expression of our connection to the divine.

Indeed. I don’t happen to share Buber’s belief in a divine authority; I’m an atheist and humanist. But I share Buber’s faith in the sacredness of human interrelationship. And I honor any form of contemporary spiritual teaching, religious or not, that reminds us to place justice, and one another’s well-being, over ambition—or “winning.”

We are not digital beings. We are not chatbots, optimized for achievement and sent to conquer this country and then colonize the stars through infinite data accumulation. We are human beings who care deeply about one another because we care about ourselves. Our very existence, as people capable of loving and being loved, is what makes us worthy of the space we occupy, here in this country, on this planet, and on any other planet we may someday find ourselves inhabiting.

Kamala Harris Shouldn’t Just Embrace Crypto. She Must Help It Flourish

29 October 2024 at 13:00

As a journalist, I try not to reveal personal opinions. But I’m breaking that rule today, because as an American, there’s something I, a progressive Democrat and a journalist who has covered crypto for more than nine years, have to speak up about.

The Democrats, and more specifically the party’s progressive wing, are making a mistake by being anti-crypto. Their opposition threatens not only to turn the U.S. into a technological backwater, but also to imperil our country’s status as the world’s only superpower. Being hostile to crypto could chip away at the U.S. dollar’s dominance as the world’s major global reserve asset. Plus, progressive opposition isn’t even logical, since crypto broadly aligns with progressive ideals. Most urgently, their stance could cost Vice President and Democratic nominee Kamala Harris the election and hand Donald Trump, who tried to overturn the 2020 election, the presidency.

Throughout her presidential run, Harris has made minimal statements on crypto. At a Wall Street fundraiser on Sept. 22, she said, “We will encourage innovative technologies like AI and digital assets, while protecting our consumers and investors.” At the Economic Club of Pittsburgh on Sept. 24, she said, “I will recommit the nation to global leadership in the sectors that will define the next century. We will … remain dominant in AI and quantum computing, blockchain and other emerging technologies.” She also pledged to create a regulatory framework for crypto, as part of her economic plan for Black men, since 20% of Black Americans own digital assets. While it’s promising that her few utterances on crypto were vaguely positive, she—as well as the Democratic party—should go further. Harris’s campaign—and hopefully her administration—should embrace crypto and help it flourish, so we don’t lose our decades-long edge in tech and the U.S. dollar retains its reserve currency status.

Over the last few years, I’ve watched with dismay as my party has attacked the technology that could help usher in the change it wishes to see. For an election that will be decided by inches and in which, according to Pew Research, 17% of U.S. adults have ever owned crypto, Democrats have let Trump come in and claim the issue as his own.

But crypto is not inherently partisan. Being against it is like being against the internet. Just as the internet, or a knife, or a dollar bill can be used for good or bad, crypto is also a neutral technology. I expect crypto will follow a similar trajectory to the internet: this small, fringe phenomenon will, over the next couple of decades, become embedded in our lives alongside the dollar and other financial assets, the way that email and text messages are now more central to our existence than snail mail.

Read more: Crypto Is Pouring Cash Into the 2024 Elections. Will It Pay Off?

In fact, Democratic antipathy has turned many lifelong Democrats who work in crypto into Trump supporters, since the Biden administration has put their livelihoods at risk and treated these entrepreneurs as criminals. Watching this saga gives me Blockbuster-Netflix deja vu.

Democratic leadership fails to understand that crypto has the potential to usher in a progressive era, through the technology itself. A blockchain, at its core, is a collectively maintained ledger of every transaction involving its coin. Imagine if you lived in a village whose financial system consisted of every villager gathering in the town square every day at noon and calling out their transactions since the day before. For each expense, such as, “I paid the baker $10,” every villager would log it in their own ledger.

No single person or entity like a bank would be in charge of keeping what would be considered the authoritative record. Instead, we would agree that the authoritative ledger exists nowhere physically: it is whichever version is reflected by the majority of the villagers’ ledgers. This is like Bitcoin, except swap the villagers for computers around the globe running the Bitcoin software, managed by anyone contributing to this transparent, community-run financial system.

Of course, people should be paid to keep these ledgers. But instead of hiring employees, Bitcoin’s software mints new coins every time a new “block” of transactions is added to the ledger. (In Bitcoin, this happens, on average, every 10 minutes, unlike the village’s daily cadence.) The people maintaining the ledgers are incentivized by the opportunity to win those new bitcoins, which is how they “get paid.”
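To make the village analogy concrete, here is a minimal, hypothetical sketch in Python (not Bitcoin’s actual code) of a shared ledger in which each block of transactions is sealed with a toy proof-of-work hash, and the node that seals it is paid with newly minted coins through a coinbase transaction. The names, the difficulty, and the reward value are illustrative assumptions only.

```python
import hashlib
import json
import time

REWARD = 3.125  # illustrative block reward; an assumption, not Bitcoin's exact value

def hash_block(block: dict) -> str:
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def mine_block(transactions: list, prev_hash: str, miner: str, difficulty: int = 4) -> dict:
    """Search for a nonce so the block hash starts with `difficulty` zeros (toy proof of work).
    The miner "gets paid" through a coinbase transaction that mints new coins."""
    coinbase = {"from": None, "to": miner, "amount": REWARD}
    block = {
        "timestamp": time.time(),
        "transactions": [coinbase] + transactions,
        "prev_hash": prev_hash,
        "nonce": 0,
    }
    while not hash_block(block).startswith("0" * difficulty):
        block["nonce"] += 1
    return block

# Every "villager" (node) keeps its own full copy of the chain; the accepted
# history is whichever valid chain the majority of nodes converge on.
chain = [{"timestamp": 0, "transactions": [], "prev_hash": "0" * 64, "nonce": 0}]
payment = [{"from": "alice", "to": "the baker", "amount": 10}]
chain.append(mine_block(payment, hash_block(chain[-1]), miner="carol"))
print(hash_block(chain[-1]))  # starts with "0000", showing the work was done
```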

What a marvel: In the last 16 years, through this combination of cryptography, decentralization, and incentives, Bitcoin went from obscurity to surpassing a $1 trillion market cap, a feat only Apple, Amazon, Nvidia, Meta, Alphabet, Microsoft, and Berkshire Hathaway have achieved—and those entities did so with a board, a C-suite, and employees. Meanwhile, Bitcoin, a grassroots phenomenon, attracted “workers” via incentives.

This is the core of crypto’s decentralization concept—one that could take on big banks and big tech. More services beyond a “peer-to-peer electronic cash system,” as Bitcoin creator Satoshi Nakamoto described it, can be offered on the internet in a decentralized way, with a cryptographic token and well-designed incentives.

Though this is the ideal, crypto has seen numerous scams and frauds—for example, FTX, OneCoin, and numerous “pig butchering” schemes in which the tricksters ensnare their victims in emotional relationships and dupe them into handing over money. But as U.S. attorney Damian Williams, whose office prosecuted FTX co-founder Sam Bankman-Fried, said, “[W]hile the cryptocurrency industry might be new and the players like Sam Bankman-Fried might be new, this kind of corruption is as old as time.” Similarly to how no one advised investors to avoid stocks after Bernie Madoff’s Ponzi scheme, the usage of crypto to perpetrate scams and frauds doesn’t mean everyone should shun crypto.

Read More: Inside Sam Bankman-Fried’s Attempted Conquest of Washington

Unfortunately, the U.S. has regulated the industry so poorly that crypto entrepreneurs have left the U.S. and cut Americans off from their projects. For example, Polymarket, a prediction market, is off-limits to Americans and U.S. residents. Just as there is the global internet and then China’s own censored version of it, the world now has a burgeoning crypto economy while the U.S. has a censored crypto landscape. Frequently, crypto projects will list blocked countries and name the U.S. alongside the likes of North Korea, Cuba, Iran, China, and Russia. Not the typical company the U.S. keeps. Already, the U.S. is becoming a technological backwater.

Furthermore, the way the SEC under President Joe Biden’s appointed chair, Gary Gensler, has regulated crypto has been egregiously unfair. In 2021, after his appointment, Gensler wrote a letter to Sen. Elizabeth Warren saying the SEC did not have the authority to regulate crypto. Although Congress never granted it that authority, the SEC began applying decades-old regulations to crypto, targeting entrepreneurs with “enforcement actions,” or punishments for infractions of rules designed for a very different type of financial system. I’m not talking about cases of fraud, such as FTX, which regulators should rightly pursue, but incidents such as when the SEC sued Coinbase, which has aimed to be compliant from its earliest days, for not registering as a securities exchange. The dirty secret is that while Gensler often says crypto companies should “come in and register,” the SEC has not made that possible.

Judges are calling out the SEC. When the agency blocked crypto companies’ applications to offer bitcoin exchange-traded funds (ETFs) for so long that a then-potential issuer sued the SEC, a panel of three judges—two appointed by Democratic presidents—unanimously sided with the plaintiff, calling the SEC’s reasoning for not approving the ETFs “arbitrary and capricious.” In March 2024, in a case against the crypto project DebtBox, Judge Robert Shelby in Utah excoriated the SEC for what he called “a gross abuse of power,” in a judgment that noted multiple instances of SEC lawyers lying to the court. Judge Shelby levied a $1.8 million fine against the agency, which subsequently closed its Salt Lake City office.

Its poor win/loss record on crypto cases shows how the SEC and Chair Gensler gave Donald Trump a layup to take Democratic votes. In a speech at the Bitcoin 2024 conference in July, Trump promised the crypto community common-sense things a reasonable regulator would already have done. Which pledge got a standing ovation? That on day one, he would fire Gensler.

Some Democrats have clued in that their approach to the industry may cost them this election. Both Senate Majority Leader Chuck Schumer and Speaker Emerita Nancy Pelosi, along with dozens of other Democrats in the House and Senate, broke party lines to vote for pro-crypto bills. But it may be too little too late. Members of the crypto community now regularly criticize the SEC, the Biden administration, Vice President Harris, Senator Warren, and Gensler, while advocating for a Trump presidency.

Most importantly, being anti-crypto has geopolitical implications. If Harris wins without signaling a complete about-face from the Biden administration’s anti-crypto approach, China could use this technology to gain more power over developing regions like Africa, which would further erode the dominance of the U.S. dollar. China has already launched a digital yuan and created hundreds of blockchain-based initiatives. One could see China, say, requiring African business partners, especially as part of its Belt and Road Initiative, to transact in the digital yuan, which would mean these businesses and countries could begin holding the digital yuan like a reserve currency. While the yuan accounts for only 4.69% of global reserves, it is growing quickly—it has increased by more than a full percentage point in the last year.

If the digital yuan continues to gain a toehold, it will be a total own goal by the U.S., because crypto has, so far, actually reinforced the primacy of the U.S. dollar abroad. According to the European Central Bank, 99% of so-called stablecoins, whose value is pegged to that of another asset, are tied to the worth of the U.S. dollar. These crypto dollars, $170 billion worth in circulation, are now gaining adoption in countries that have a weak currency or poor financial systems.

Citizens of, say, Argentina or Afghanistan grasp the promise of crypto more easily than Americans, who enjoy a stable currency and safe and robust financial systems. Roya Mahboob, the Afghan entrepreneur whose girls’ robotics team made news in 2017, told me that Bitcoin was the solution in 2013, when her blogging platform struggled to pay women bloggers who didn’t have bank accounts or whose payments would be confiscated by male relatives. Argentine entrepreneur Wences Casares has shared with me that a non-governmental money outside the control of the banks would have helped his family, who lost their life savings multiple times during periods of Argentina’s hyperinflation. Surely the notion of a financial instrument and technology benefiting underserved populations resonates with progressive and liberal values.

Liberals are often concerned about crypto’s environmental impact. This is an issue primarily with Bitcoin, the main crypto asset whose security model is “proof of work,” which requires enormous amounts of electricity. In September 2022, Ethereum, the second-most popular coin, switched from “PoW” to a new method called “proof of stake,” which cut its electricity consumption by over 99%; this method, or similar ones, is used by most new tokens launched today.
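For readers curious about the mechanical difference, the following is a hypothetical toy contrast in Python, not the actual Bitcoin or Ethereum protocols: under proof of work, the right to add a block is won through a brute-force hashing race, so energy use grows with difficulty; under proof of stake, a validator is simply chosen at random, weighted by the coins it has locked up, with no hashing race at all.

```python
import hashlib
import random

def proof_of_work_winner(miners: list, difficulty: int = 4):
    """Toy PoW: guess nonces until someone finds a qualifying hash.
    The attempt count is a rough proxy for the electricity burned."""
    attempts = 0
    while True:
        attempts += 1
        miner = random.choice(miners)
        digest = hashlib.sha256(f"{miner}:{attempts}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return miner, attempts

def proof_of_stake_winner(stakes: dict) -> str:
    """Toy PoS: pick one validator at random, weighted by stake. No hashing grind."""
    validators = list(stakes)
    return random.choices(validators, weights=[stakes[v] for v in validators], k=1)[0]

print(proof_of_work_winner(["alice", "bob", "carol"]))              # winner plus number of guesses
print(proof_of_stake_winner({"alice": 32, "bob": 64, "carol": 4}))  # winner chosen by stake weight
```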

Bitcoin miners can help renewable energy facilities, whose intermittent output leads to inconsistent revenue. Miners even out that revenue by purchasing excess energy at times when, say, wind is plentiful but demand is low, and by shutting off when usage is high. Renewable energy facilities can also run miners themselves to earn Bitcoin when energy is plentiful but demand is low.

Read More: Fact-Checking 8 Claims About Crypto’s Climate Impact

When I started covering this technology almost a decade ago, there was optimism and open-mindedness about what it could do. Although the industry saw many collapses in 2022, such as FTX (collapses of centralized entities, not of the ideal of what crypto can enable), U.S. agencies had been politicizing it long before then. A truly neutral regulator, or Congress, would have already created clear rules that would give American crypto entrepreneurs peace of mind that they wouldn’t be punished under decades-old laws whose application to crypto is unclear. Such directives would also make it easier for the public to separate legitimate entrepreneurial activity from scams and frauds. A head-in-the-sand approach from our leaders only gives countries like China an opportunity to encroach on the U.S.’s power. As our nation and American companies like Apple, Google, Facebook, Amazon, and Netflix did with the internet, let’s be leaders in this new frontier of innovation.

If liberals, progressives, and Democrats take a fresh look at crypto without preconceived judgments, they will see much that aligns with their ideals. I urge Vice President Harris to reject the politicization of this world-changing technology, and instead, embrace it to help propel her to victory.

Silicon Valley Takes Artificial General Intelligence Seriously—Washington Must Too

18 October 2024 at 11:10

Artificial General Intelligence—machines that can learn and perform any cognitive task that a human can—has long been relegated to the realm of science fiction. But recent developments show that AGI is no longer a distant speculation; it’s an impending reality that demands our immediate attention.

On Sept. 17, during a Senate Judiciary Subcommittee hearing titled “Oversight of AI: Insiders’ Perspectives,” whistleblowers from leading AI companies sounded the alarm on the rapid advancement toward AGI and the glaring lack of oversight. Helen Toner, a former board member of OpenAI and director of strategy at Georgetown University’s Center for Security and Emerging Technology, testified that, “The biggest disconnect that I see between AI insider perspectives and public perceptions of AI companies is when it comes to the idea of artificial general intelligence.” She continued that leading AI companies such as OpenAI, Google, and Anthropic are “treating building AGI as an entirely serious goal.”

Toner’s co-witness William Saunders—a former researcher at OpenAI who recently resigned after losing faith in OpenAI acting responsibly—echoed similar sentiments to Toner, testifying that, “Companies like OpenAI are working towards building artificial general intelligence” and that “they are raising billions of dollars towards this goal.”

Read More: When Might AI Outsmart Us? It Depends Who You Ask

All three leading AI labs—OpenAI, Anthropic, and Google DeepMind—are more or less explicit about their AGI goals. OpenAI’s mission states: “To ensure that artificial general intelligence—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity.” Anthropic focuses on “building reliable, interpretable, and steerable AI systems,” aiming for “safe AGI.” Google DeepMind aspires “to solve intelligence” and then to use the resultant AI systems “to solve everything else,” with co-founder Shane Legg stating unequivocally that he expects “human-level AI will be passed in the mid-2020s.” New entrants into the AI race, such as Elon Musk’s xAI and Ilya Sutskever’s Safe Superintelligence Inc., are similarly focused on AGI.

Policymakers in Washington have mostly dismissed AGI as either marketing hype or a vague metaphorical device not meant to be taken literally. But last month’s hearing might have broken through in a way that previous discourse of AGI has not. Senator Josh Hawley (R-MO), Ranking Member of the subcommittee, commented that the witnesses are “folks who have been inside [AI] companies, who have worked on these technologies, who have seen them firsthand, and I might just observe don’t have quite the vested interest in painting that rosy picture and cheerleading in the same way that [AI company] executives have.”

Senator Richard Blumenthal (D-CT), the subcommittee Chair, was even more direct. “The idea that AGI might in 10 or 20 years be smarter or at least as smart as human beings is no longer that far out in the future. It’s very far from science fiction. It’s here and now—one to three years has been the latest prediction,” he said. He didn’t mince words about where responsibility lies: “What we should learn from social media, that experience is, don’t trust Big Tech.”

The apparent shift in Washington reflects public opinion that has been more willing to entertain the possibility of AGI’s imminence. In a July 2023 survey conducted by the AI Policy Institute, the majority of Americans said they thought AGI would be developed “within the next 5 years.” Some 82% of respondents also said we should “go slowly and deliberately” in AI development.

That’s because the stakes are astronomical. Saunders detailed that AGI could lead to cyberattacks or the creation of “novel biological weapons,” and Toner warned that many leading AI figures believe that in a worst-case scenario AGI “could lead to literal human extinction.”

Despite these stakes, the U.S. has instituted almost no regulatory oversight over the companies racing toward AGI. So where does this leave us?

First, Washington needs to start taking AGI seriously. The potential risks are too great to ignore. Even in a good scenario, AGI could upend economies and displace millions of jobs, requiring society to adapt. In a bad scenario, AGI could become uncontrollable.

Second, we must establish regulatory guardrails for powerful AI systems. Regulation should give the government transparency into what is going on with the most powerful AI systems being created by tech companies. That transparency will reduce the chances that society is caught flat-footed by a tech company developing AGI before anyone expects it. And mandated security measures are needed to prevent U.S. adversaries and other bad actors from stealing AGI systems from U.S. companies. These light-touch measures would be sensible even if AGI weren’t a possibility, but the prospect of AGI heightens their importance.

Read More: What an American Approach to AI Regulation Should Look Like

In a particularly concerning part of Saunders’ testimony, he said that during his time at OpenAI there were long stretches where he or hundreds of other employees would be able to “bypass access controls and steal the company’s most advanced AI systems, including GPT-4.” This lax attitude toward security is bad enough for U.S. competitiveness today, but it is an absolutely unacceptable way to treat systems on the path to AGI. The comments were another powerful reminder that tech companies cannot be trusted to self-regulate.

Finally, public engagement is essential. AGI isn’t just a technical issue; it’s a societal one. The public must be informed and involved in discussions about how AGI could impact all of our lives.

No one knows how long we have until AGI—what Senator Blumenthal referred to as “the 64 billion dollar question”—but the window for action may be rapidly closing. Some AI figures including Saunders think it may be in as little as three years.

Ignoring the potentially imminent challenges of AGI won’t make them disappear. It’s time for policymakers to begin to get their heads out of the cloud.

I Launched the AI Safety Clock. Here’s What It Tells Us About Existential Risks

13 October 2024 at 11:00

If uncontrolled artificial general intelligence—or “God-like” AI—is looming on the horizon, we are now about halfway there. Every day, the clock ticks closer to a potential doomsday scenario.

That’s why I introduced the AI Safety Clock last month. My goal is simple: I want to make clear that the dangers of uncontrolled AGI are real and present. The Clock’s current reading—29 minutes to midnight—is a measure of just how close we are to the critical tipping point where uncontrolled AGI could bring about existential risks. While no catastrophic harm has happened yet, the breakneck speed of AI development and the complexities of regulation mean that all stakeholders must stay alert and engaged.

This is not alarmism; it’s based on hard data. The AI Safety Clock tracks three essential factors: the growing sophistication of AI technologies, their increasing autonomy, and their integration with physical systems. 

We are seeing remarkable strides across these three factors. The biggest are happening in machine learning and neural networks, with AI now outperforming humans in specific areas like image and speech recognition, mastering complex games like Go, and even passing tests such as business school exams and Amazon coding interviews.

Read More: Nobody Knows How to Safety-Test AI

Despite these advances, most AI systems today still depend on human direction, as noted by the Stanford Institute for Human-Centered Artificial Intelligence. They are built to perform narrowly defined tasks, guided by the data and instructions we provide.

That said, some AI systems are already showing signs of limited independence. Autonomous vehicles make real-time decisions about navigation and safety, while recommendation algorithms on platforms like YouTube and Amazon suggest content and products without human intervention. But we’re not at the point of full autonomy—there are still major hurdles, from ensuring safety and ethical oversight to dealing with the unpredictability of AI systems in unstructured environments.

At this moment, AI remains largely under human control. It hasn’t yet fully integrated into the critical systems that keep our world running—energy grids, financial markets, or military weapons—in a way that allows it to operate autonomously. But make no mistake, we are heading in that direction. AI-driven technologies are already making gains, particularly in the military with systems like autonomous drones, and in civilian sectors, where AI helps optimize energy consumption and assists with financial trading.

Once AI gets access to more critical infrastructures, the risks multiply. Imagine AI deciding to cut off a city’s power supply, manipulate financial markets, or deploy military weapons—all without any, or limited, human oversight. It’s a future we cannot afford to let materialize.

But it’s not just the doomsday scenarios we should fear. The darker side of AI’s capabilities is already making itself known. AI-powered misinformation campaigns are distorting public discourse and destabilizing democracies. A notorious example is the 2016 U.S. presidential election, during which Russia’s Internet Research Agency used automated bots on social media platforms to spread divisive and misleading content.

Deepfakes are also becoming a serious problem. In 2022, we saw a chilling example when a deepfake video of Ukrainian President Volodymyr Zelensky emerged, falsely portraying him calling for surrender during the Russian invasion. The aim was clear: to erode morale and sow confusion. These threats are not theoretical—they are happening right now, and if we don’t act, they will only become more sophisticated and harder to stop.

While AI advances at lightning speed, regulation has lagged behind. That is especially true in the U.S., where efforts to implement AI safety laws have been fragmented at best. Regulation has often been left to the states, leading to a patchwork of laws with varying effectiveness. There’s no cohesive national framework to govern AI development and deployment. California Governor Gavin Newsom’s recent decision to veto an AI safety bill, fearing it would hinder innovation and push tech companies elsewhere, only highlights how far behind policy is.

Read More: Regulating AI Is Easier Than You Think

We need a coordinated, global approach to AI regulation—an international body to monitor AGI development, similar to the International Atomic Energy Agency for nuclear technology. AI, much like nuclear power, is a borderless technology. If even one country develops AGI without the proper safeguards, the consequences could ripple across the world. We cannot let gaps in regulation expose the entire planet to catastrophic risks. This is where international cooperation becomes crucial. Without global agreements that set clear boundaries and ensure the safe development of AI, we risk an arms race toward disaster.

At the same time, we can’t turn a blind eye to the responsibilities of companies like Google, Microsoft, and OpenAI—firms at the forefront of AI development. Increasingly, there are concerns that the race for dominance in AI, driven by intense competition and commercial pressures, could overshadow the long-term risks. OpenAI has recently made headlines by shifting toward a for-profit structure.

Artificial intelligence pioneer Geoffrey Hinton’s warning about the race between Google and Microsoft was clear: “I don’t think they should scale this up more until they have understood whether they can control it.”

Part of the solution lies in building fail-safes into AI systems—“kill switches,” or backdoors that would allow humans to intervene if an AI system starts behaving unpredictably. California’s vetoed AI safety bill included provisions for this kind of safeguard. Such mechanisms need to be built into AI from the start, not added in as an afterthought.

There’s no denying the risks are real. We are on the brink of sharing our planet with machines that could match or even surpass human intelligence—whether that happens in one year or ten. But we are not helpless. The opportunity to guide AI development in the right direction is still very much within our grasp. We can secure a future where AI is a force for good.

But the clock is ticking.

The AI Revolution Is Coming for Your Non-Union Job

During this election cycle, we’ve heard a lot from the presidential candidates about the struggles of America’s workers and their families. Kamala Harris and Donald Trump each want to claim the mantle as the country’s pro-worker candidate. Accordingly, union leaders took the stage not only at the Democratic National Convention, as usual, but at the Republican convention too.  At the VP debate, J.D. Vance and Tim Walz offered competing views on how best to support workers.

Surprisingly, one economic issue the candidates have yet to address is one in which millions of voters have a great deal at stake: the looming impact of new generative artificial intelligence (GenAI) technologies on work and livelihoods. The candidates’ silence belies a stark reality: the next president will take office in a world already changed by GenAI—and heading for much greater disruption.

Our new research at Brookings shows why this requires urgent attention and why it matters to voters. In a new study using data provided by one of the leading AI developers, OpenAI, we analyzed over a thousand occupations for their likely exposure to GenAI and its growing capabilities. Overall, we find that some 30% of the workforce could see at least half of their work tasks impacted—though not necessarily automated fully—by today’s GenAI, while more than 85% of all workers could see at least 10% of their tasks impacted. Even more powerful models are planned for release soon, with those requiring minimal human oversight likely to follow.

America’s workers are smart. They are far more concerned about GenAI reshaping livelihoods than leaders in government and business have acknowledged so far. In a 2023 Pew Center survey, nearly two-thirds (62%) of adults say they believe GenAI will have a major impact on jobs and jobholders—mostly negative—over the next two decades.

Yet technology is not destiny. AI capabilities alone will not determine the future of work. Workers, rather, can shape the trajectory of AI’s impact on work—but only if they have a voice in the technology’s design and deployment.

Who will be most affected by GenAI? Most of us will probably be surprised. We tend to think of men in blue-collar, physical roles in factories and warehouses as the workers most exposed to automation, and frequently they have been, along with dock workers and others. Yet GenAI, and the related software systems it integrates with, turn these assumptions on their head: manually intensive blue-collar roles are likely to be least and last affected. The same applies to electricians, plumbers, and other relatively well-paying skilled-trades occupations boosted by the nation’s net-zero transition and massive investments in infrastructure. Instead, it is knowledge work, creative occupations, and office-based roles that are most exposed to technologies like ChatGPT and DALL-E, at least in the near term.

It is also women, not men, who face the greatest risk of disruption and automation. This is especially true of women in middle-skill clerical roles—currently nearly 20 million jobs—that have long offered a measure of economic security for workers without advanced degrees, for example in roles such as HR assistant, legal secretary, bookkeeper, customer service agent, and many others. The stakes are high for this racially and ethnically diverse group of lower-middle-class women, many of whom risk falling into more precarious, lower-paid work if this work is displaced.

Read More: How AI Can Guide Us on the Path to Becoming the Best Versions of Ourselves

All of this raises the question of what it will take to make sure most workers gain, rather than lose, from AI’s uncanny and often impressive capabilities. To be sure, we can’t predict the speed and scale of future AI advances. But what is clear is that the design and deployment of generative AI technologies is moving far faster than our response to shaping it. Fundamental questions, which the next president and Congress will need to address, remain unanswered: How do we ensure workers can proactively shape AI’s design and deployment? What will it take to ensure workers benefit meaningfully from AI’s strengths? And what guardrails are needed for workers to avoid AI’s harms as much as possible?

Here’s a key issue: Among the most pressing priorities for the next president to address is what we call the “Great Mismatch,” the reality that the occupations most likely to see disruptions from AI are also the least likely to employ workers who belong to a union or have other forms of voice and representation.

In an era of technological change, Americans are clear about the benefits of unions. According to new Gallup polling, 70% of Americans hold a positive view of unions—the highest approval in 60 years. And both Harris and Trump have aggressively courted unions in their campaigns. Yet in the sectors where GenAI is poised to create the most change, as few as 1% of workers benefit from union representation (the public sector workforce is a notable exception).

This stark mismatch poses a serious risk for workers. In 2023, Hollywood writers showed the country why collective worker power is so critical in an era of technological disruption. Concerned that technology like ChatGPT could threaten their livelihoods, thousands of writers went on strike for five months. By securing first-of-their-kind protections in the contract they negotiated with major studios, the writers set a historic precedent: it is now up to the writers whether and how they use generative AI as a tool to assist and complement—not replace—them.

Writer Raphael Bob-Waksberg, creator of the show BoJack Horseman, said, of his union’s AI victories and what they could mean for other workers, “Workers are going to demand similar things in their industries, because this affects all different kinds of people … I think it’s going to require unions. I think you can create some guardrails around it and use political power and worker power to protect people.”

The lack of worker voice and influence over deployment of GenAI should be a core concern for workers and policymakers alike—but it should get employers’ attention too.

Research shows there are big benefits to companies from incorporating workers and their unique knowledge and insights into the design and rollout of new technologies, compared to top-down implementation. Which means there is a powerful business case for worker engagement.

For now, almost none of the developers and deployers of AI are engaging workers or viewing them as uniquely capable partners. To the contrary, at least in private, many business leaders convey a sense of inevitability at the mention of AI’s growing risks for workers and their livelihoods. It’s no secret that relentless pressure to maximize short-term earnings, especially for publicly traded companies, focuses many CEOs on cutting labor costs in every way possible. It remains to be seen whether the coming AI revolution will defy the fixation on “lean and mean” operations, which came to dominate American corporate strategy a generation ago.

Presidential elections offer voters a referendum on the past as well as the future, even if the latter is only partly visible for now. AI represents one of the great challenges of our time, posing both risks and opportunities for the American worker. The next president will need to help determine the policies, investments, guardrails and social protections—or lack of same—that will shape the future of work for millions of Americans. It’s time we learned whether the candidates for that office understand that.

Uncertainty Is Uncomfortable, and Technology Makes It Worse. That Doesn’t Have to Be a Bad Thing

16 September 2024 at 15:18

On July 19, 2024, a single-digit error in a software update from cybersecurity company CrowdStrike grounded international airlines, halted emergency medical treatments, and paralyzed global commerce. The expansive network that had enabled CrowdStrike to access information from over a trillion events every day and prevent more than 75,000 security breaches every year had ironically introduced a new form of uncertainty of colossal significance. The impact of a seemingly minor error in the code could now be exponentially magnified by the network, unleashing the kind of global havoc we witnessed that day.

The very mechanism that had reduced the uncertainty of regular cyber threats had concurrently increased the unpredictability of a rare global catastrophe—and with it, the deepening cracks in our relationship with uncertainty and technology.

Our deep-seated discomfort with uncertainty—a discomfort rooted not just in technology but in our very biology—was vividly demonstrated in a 2017 experiment in which London-based researchers gave consenting volunteers painful electric shocks to the hand while measuring physiological markers of distress. Knowing there was only a 50-50 chance of receiving the shock agitated the volunteers far more than knowing the painful shock was imminent, highlighting how much more unsettling uncertainty can be than the certainty of discomfort.

This drive to eliminate uncertainty has long been a catalyst for technological progress and turned the wheels of innovation. From using fire to dispel the fear of darkness to mechanizing agriculture to guarantee food abundance, humanity’s innovations have consistently aimed to turn uncertainty into something controllable and predictable on a global scale.

Read More: Here’s Why Uncertainty Makes You So Miserable

But much like energy, uncertainty can be transformed but never destroyed. When we think we have removed it, we have merely shifted it to a different plane. This gives rise to the possibility of an intriguing paradox: With each technological advancement designed to reduce uncertainty, do we inadvertently introduce new uncertainties, making the world even more unpredictable?

 Automated algorithms have revolutionized financial trading at an astronomical scale by shattering human limits on speed, precision and accuracy. Yet, in the process of eliminating human error and decoding complex probabilities in foreign exchange trading, these systems have introduced new uncertainties of their own—uncertainties too intricate for human comprehension. What once plagued day-to-day trading with human-scale uncertainty has morphed into technology-scale risks that didn’t exist before. By lowering some forms of uncertainty, these automated algorithms have ultimately increased it. 

A striking example of this is algorithmic trading, where software is used to eradicate uncertainty and upgrade financial systems. It is, however, impossible to test every permutation of every pathway in a software decision tree, meaning that even the most sophisticated upgrades inevitably introduce new uncertainties. Subtle errors, camouflaged in labyrinthine webs of code, become imperceptible at the lightning speed of execution. In August 2012, when the NYSE’s Retail Liquidity Program went live, global financial services firm Knight Capital was equipped with a high-frequency trading algorithm. Unfortunately, a glitch introduced in an overnight code change was amplified to a disastrous degree at that speed, costing Knight Capital $440 million in just 30 minutes.

As technology becomes more sophisticated, it not only eradicates the uncertainty of time and distance from our everyday lives but also transforms how we experience uncertainty itself. An app informs you exactly when the bus you are waiting for will arrive, a check mark tells you when your friend has not only received but read your message, and a ding lets you know someone is waiting on your doorstep when you are on vacation on a different continent. This information is often incredibly useful. Yet, the same technology floods us with unsolicited, irrelevant details. Worse, it often captures our attention by delivering fragments of incomplete information: a partial news headline pops up on our phone, an alert from our home security system reports unusual activity on our property, a new friend request slides into our social media inbox. Resolving these uncertainties requires us to swipe, click, or watch, only to be bombarded with yet another stream of incomplete information. Instead of resolving uncertainty, the information often leaves us with more of it.

Rarely do we stop to ask ourselves if the kinds of frequent, small-scale uncertainties that modern technology is designed to eliminate are really so terrible in the first place. If we did, we might realize that human-scale uncertainties make us more resilient, revealing weaknesses we did not know we had.

Historical evidence suggests that eliminating uncertainty isn’t always beneficial. Angkor, the medieval capital of the ancient Khmer empire, became the largest pre-industrial city in the world partly because its population was able to tame the uncertainty of nature by creating an elaborate water management network. This system eliminated the unpredictability of monsoon rains, sustaining Angkor’s agrarian population, which grew to nearly a million. Yet this very system may also have contributed to the city’s collapse. When Angkor was struck by severe droughts and violent monsoons in the 14th and 15th centuries, the city’s reliance on guaranteed water supplies left its people vulnerable to disaster.

The uncertainty paradox does not stem from innovation in itself. Innovating solutions for large-scale uncertainties has manifestly saved countless lives. Modern-day examples include sanitation technology that has helped eradicate cholera in many parts of the world and the tuned mass damper (TMD) technology that protected the Taipei 101 skyscraper during a 7.4-magnitude earthquake in 2024. Instead, the uncertainty paradox seems to emerge when we seek to erase smaller-scale, everyday uncertainties entirely from our lives. This can make us more vulnerable, as we forget how to deal with unexpected uncertainty when it finally strikes. One solution is to deliberately create opportunities to experience and rehearse dealing with uncertainty. Hong Kong’s resilience in the face of intense typhoons stems from regular exposure to monsoon rains—preparing the city to withstand storms that could devastate other parts of the world.

Netflix engineers Yury Izrailevsky and Ariel Tseitlin captured this idea in their creation of “Chaos Monkey,” a tool that deliberately introduces system failures so engineers can identify weaknesses and build better recovery mechanisms. Inspired by this concept, many organizations now conduct “uncertainty drills” to prepare for unexpected challenges. However, while drills prepare us for known scenarios, true resilience requires training our reactions to uncertainty itself—not just our responses to specific situations. Athletes and Navy SEALs incorporate deliberate worst-case scenarios in their training to build mental fortitude and adaptability in the face of the unknown.
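A minimal sketch of that failure-injection idea, assuming a hypothetical fleet of service objects rather than Netflix’s real Chaos Monkey: instances are terminated at random so operators can confirm that detection and recovery actually work before a real outage forces the question.

```python
import logging
import random

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

class Service:
    """Stand-in for one running service instance."""
    def __init__(self, name: str):
        self.name = name
        self.healthy = True

def chaos_round(fleet: list, kill_probability: float = 0.2) -> None:
    """Randomly terminate instances, then verify the system heals itself."""
    for svc in fleet:
        if random.random() < kill_probability:
            svc.healthy = False
            logging.warning("chaos: terminated %s", svc.name)

    # The point of the drill is this second half: exercising the recovery path.
    for svc in fleet:
        if not svc.healthy:
            svc.healthy = True  # here, "recovery" is just a restart
            logging.info("recovered %s", svc.name)

fleet = [Service(f"api-{i}") for i in range(5)]
chaos_round(fleet)
```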

The relationship between uncertainty and technology is like an Ouroboros: we create technology to eliminate uncertainty, yet that technology generates new uncertainties that we must eliminate all over again. Rather than trying to break this cycle, the solution may be paradoxical: to make the world feel more certain, we might need to embrace a little more uncertainty every day.

What Google’s Antitrust Defeat Means for AI

29 August 2024 at 10:00

Google has officially been named a monopoly. On Aug. 5, a federal judge ruled that the tech giant had illegally used its market power to harm rival search engines, marking the first antitrust defeat for a major internet platform in more than 20 years—and thereby calling into question the business practices of Silicon Valley’s most powerful companies.

Many experts have speculated that the landmark decision will make judges more receptive to antitrust action in other ongoing cases against the Big Tech platforms, especially with regard to the burgeoning AI industry. Today, the AI ecosystem is dominated by many of the same companies that the government is challenging in court, and those companies are using the same tactics to entrench their power in AI markets.

Judge Amit Mehta’s ruling in the Google case centered on the massive sums of money the company paid firms like Apple and Samsung to make its search engine the default on their smartphones and browsers. These “exclusive agreements” offered Google “access to scale that its rivals cannot match” and left other search engines “at a persistent competitive disadvantage,”  Judge Mehta wrote. By effectively “freezing” the existing search ecosystem in place, the payments “reduced the incentive to invest and innovate in search.” 

Today, a similar type of arrangement is cropping up in the AI sector. Companies like Google, Amazon, and Microsoft have cemented numerous partnerships in which developers agree to use—sometimes exclusively—the company’s cloud services in exchange for resources like cash and cloud credits. Given the high cost of computing hardware and developers’ incessant demand for this infrastructure, the tech giants can often negotiate additional concessions like equity, technology licenses, or profit sharing arrangements. Though these cloud partnerships are structured differently than the deals at issue in the Google case, they similarly serve to lock up revenue streams and possibly exclude disruptive rivals from lucrative distribution channels.

Big Tech companies are also using more traditional tactics to entrench their power in the AI market. In a forthcoming report, my colleagues at Georgetown University’s Center for Security and Emerging Technology and I found that Apple, Microsoft, Google, Meta, and Amazon have collectively acquired at least 89 AI companies over the last decade, and that those acquisitions skewed toward younger startups, a signal that the tech giants may be buying up innovative AI firms before they can pose a competitive threat. The companies’ integration across the AI supply chain also offers opportunities for self-preferencing and other problematic behaviors that they have allegedly used in other digital marketplaces.

Should the courts continue to rule against tech giants in ongoing antitrust cases, they would equip U.S. authorities with powerful ammunition to challenge the companies in the AI industry. Effective enforcement could help foster a new generation of startups looking to build the kinds of responsible, socially beneficial AI tools that may not otherwise reach the market.

But while the Google decision opens the door for much-needed antitrust scrutiny in the AI industry, even the most effective enforcement regime cannot single-handedly foster a competitive AI sector. Antitrust suits take years to work through the courts, and even if judges find a company behaved illegally, it may be impossible to reverse its damage to competition and innovation. 

Consider the timeline of the Google case. Google made its first agreement with Apple in 2005, and the Justice Department did not bring its antitrust suit until 2020. Judge Mehta’s ruling earlier this summer was not the end of the matter either; it could take years to decide on remedies and complete the appeals process. And it is unclear whether any remedy will change internet search all that much.

Policymakers cannot wait so long when it comes to the AI market. Companies and governments are eager to adopt AI systems, and today it is virtually impossible to build and scale one of those tools without using infrastructure controlled by Big Tech companies. Giving the tech giants years to tighten their grip on the industry could permanently hamper AI startups’ ability to succeed and irrevocably undermine innovation.

If policymakers hope to keep the market for AI systems from becoming as stagnant and uncompetitive as that for search engines, they will need to use other tools. These may include regulating cloud platforms like utility companies and creating public infrastructure to offset developers’ reliance on private firms. Creative interventions like these, in addition to effective antitrust enforcement, will help maintain an open AI ecosystem that benefits us all rather than just Big Tech’s business models.

We will never know how many technological breakthroughs died on the vine thanks to Google’s monopoly over internet search. But with the right approach to competition policy, we can promote a healthier, more dynamic ecosystem for AI.

When AI Automates Relationships

14 August 2024 at 11:00

As we assess the risks of AI, we are overlooking a crucial threat. Critics commonly highlight three primary hazards—job disruption, bias, and surveillance/privacy. We hear that AI will cause many people to lose their jobs, from dermatologists to truck drivers to marketers. We hear how AI turns historical correlations into predictions that enforce inequality, so that sentencing algorithms predict more recidivism for Black men than for white men. We hear that apps help authorities watch people, such as Amazon tracking which drivers look away from the road.

What we are not talking about, however, is just as vital: What happens to human relationships when one side is mechanized?

The conventional story of AI’s dangers is blinding us to its role in a cresting “depersonalization crisis.” If we are concerned about increasing loneliness and social fragmentation, then we should pay closer attention to the kind of human connections that we enable or impede. And those connections are being transformed by an influx of technology.

As a researcher of the impact of technology on relationships, I spent five years observing and talking to more than 100 people employed in humane interpersonal work like counseling or teaching, as well as the engineers automating that work and the administrators overseeing it. I found that the injection of technology into relationships renders that work invisible, forces workers to prove they are not robots, and encourages firms to overload them, compressing their labor into ever smaller increments of time and space. Most importantly, no matter how good the AI, there is no human relationship when one half of the encounter is a machine.

At the heart of this work is bearing witness. “I think each kid needs to be seen, like really seen,” Bert, a teacher and private school principal, told me. (All names in this article have been changed to protect privacy.) “I don’t think a kid really gets it on a deep level. I don’t think they are really bitten by the information or the content until they feel seen by the person they’re learning from.”

Many people depend on seeing the other clearly to make their contribution: clients healing, students learning, employees staying motivated and engaged, customers being satisfied. I came to call this witnessing work “connective labor,” and it both creates value and, for many, is profoundly meaningful. Pamela, an African-American teacher in the Bay Area, recalled how her own middle school teacher took the time to find out that her selective mutism was a response to her family moving incessantly. “I thought, ‘I want to be that teacher for my kids in this city. I want to be the teacher that I wanted, and that I needed, and that I finally got.’”

Yet this labor is nonetheless threatened by automation and AI. Even therapy, one of the professions most dependent on emotional connection between people, has seen inroads from automated bots, from Woebot to MARCo. As Michael Barbaro noted on The Daily when ChatGPT3 responded to his query about being too critical: “Ooh, I’m feeling seen—really seen!”

Read More: Do AI Systems Deserve Rights?

Technologists argue that socioemotional AI addresses problems of human performance, access and availability, which is a bit like the old joke about the guests at a Catskills resort complaining about the food being terrible—and such small portions!  It is certainly true that human connective labor is fraught, full of the risk of judgment and misrecognition—as Pamela repeatedly faced until she met the middle school teacher who finally listened to her. Yet the working conditions of connective labor shape people’s capacity to see the other.

“I don’t invite people to open up because I don’t have time,” said Jenna, a pediatrician. “And that is such a disservice to the patients. My hand is on the doorknob, I’m typing, I’m like, ‘Let’s get you the meds and get you out the door because I have a ton of other patients to see this morning.’”

Veronica demonstrates for us some of the costs of socioemotional AI. A young white woman in San Francisco, she was hired as an “online coach” for a therapy app startup, to help people interact with the app. She was prohibited from giving advice, but the clients seemed happy to think of the coaches as private counselors. “I loved feeling like I had an impact,” she said.

Yet, despite both the personal significance and emotional wallop of the work, Veronica’s own language joined in minimizing her effect. She “loved feeling like I had an impact,” but quickly followed that with “Even though I wasn’t really doing anything. I was just cheering them on and helping them work through some hard things sometimes.” Just as AI obscures the invisible armies of humans that label data or transcribe audio, it erases the connective labor of the human workers it relies upon to automate.

Veronica also found herself facing a new existential task: proving that she was human.  “A lot of people were like, ‘Are you a robot?’” she told me. I asked her how she countered that impression. “I basically just tried to small talk with them, ask another question, maybe share a little bit about myself if it was appropriate.”  In essence, Veronica’s connective labor—normally the quintessential human activity—was not enough to convey her humanness, which she had to verify for a clientele accustomed to machines.

Finally, Veronica may have found the work moving, humbling, and powerful, but she left because the firm increased the client roster to untenable levels. “Toward the end they were trying to model everything using algorithms, and it’s just like, you can’t account for the actual emotional burden of the job in those moments.” Already convinced the coaches were nothing but handmaidens to the app, the firm piled on new clients heedlessly. 

In the midst of a depersonalization crisis, “being seen” is already in too short supply. The sense of being invisible is widespread, animating working-class rage in the U.S. and abroad, and rife within the social crises of the “deaths of despair,” suicide and overdose deaths that have radically lowered life expectancy.

While many remain close to family and friends, there is one kind of relationship that has changed: the “weak ties” of civic life and commerce. Yet research shows that these ties help to knit together our communities and contribute to our health. A 2013 UK study entitled “Is Efficiency Overrated?” found that people who talked to their barista derived greater well-being benefits than those who breezed right by them.

The solution that Big Tech offers to our depersonalization crisis is what they call personalization, as in personalized education or personalized health.  These advances seek to counter the alienating invisibility of standardization, so that we are “seen” by machines. But what if it is important—for us and for our social fabric—not just to be seen, but to be seen by other people? 

In that case, the working conditions of jobs like those of Bert, Jenna, and Veronica are consequential. Policies to limit client or student rosters and hours worked would help reduce overload for many groups, from medical residents to public school teachers to domestic workers, as would a National Domestic Workers Bill of Rights recently proposed in Congress.

We should also rein in some of the pervasive enthusiasm for data analytics, as its data entry requirements routinely fall on the very people charged with forging connections. Just as important is the looming imposition of new technologies taking aim at connective labor. At the very least, socioemotional AI should be labeled as such so we know when we are talking to a robot, and can recognize—and choose—human-to-human connections. Ultimately, however, we all need to take responsibility for protecting the human bonds in our midst, because their loss is among the unheralded costs of the AI spring.

AI is often sold as a way of “freeing up” humans for other, often more meaningful work. Yet connective labor is among the most profoundly meaningful work humans do, and technologists are nonetheless gunning for it. While humans are imperfect and judgmental to be sure, we also know that human attention and care are a source of purpose and dignity, the seeds of belongingness and the bedrock of our communities; yet we tuck that knowledge away in service to an industry that contributes to our growing depersonalization. What is at risk is more than an individual or their job; it is our social cohesion—the connections that are a mutual achievement between and among humans.

The Real Future of Flying Cars

31 July 2024 at 11:00
Ehang's Flying Taxi

After 27 years of developing airliners, my involvement in electric aircraft started suddenly one afternoon in February 2017. I was asked to comment on the eHang 184, a Chinese passenger drone, which could in theory provide automated taxi services in Dubai. The oft-quoted part of the resulting article will probably appear in my obituary.

“Dr. Wright added that he would not be volunteering for an early flight. ‘I’d have to be taken on board kicking and screaming.’”

My first contact with Chinese flying cars, or electric vertical take-off and landing (eVTOL) aircraft, proved indicative of what would follow in the years since. China has flown high with the nascent technology. One of the biggest developments came in April, when the Civil Aviation Administration of China (CAAC) awarded a “production certificate” to EHang’s EH216-S, the first time an eVTOL received such approval anywhere. The move opens the door to a commercial rollout. But other firms are also eyeing the skies. The CarryAll eVTOL from AutoFlight, another Chinese firm, obtained a “type certificate” in March from CAAC, a key step toward regulatory approval. Other homegrown Chinese competitors like XPeng and Vertaxi are also creating buzz.

Indeed, China today accounts for some 50% of the world’s eVTOL models. The government has also pledged to create economic “demonstration zones,” though details remain murky. 

Clearly, China is ahead in the eVTOL race. Why is that, and will this lead be sustained? To answer, we must consider the two great challenges facing all the competitors in the field: one posed by technology, the other by humans.

The first challenge is easily stated: new battery technology unlocked the eVTOL era but is now its greatest limitation. Batteries are still only capable of storing and delivering a small fraction of the energy of gasoline, our old friend and nemesis. Until another breakthrough in battery technology occurs, the industry will be limited to premium services in niche applications. Put another way, when and if a new battery wave breaks over the industry and scatters the competitors, the side that rides that wave will take the far greater prize. China is well positioned here, but perhaps the West, with its longer experience of conventional aircraft, could regain the lead.

Now to the second, more elusive, challenge. China is the undisputed king of small-scale consumer drones, but there is a vast gulf between them and conventional passenger aircraft. This is in a field dear to most of us every time we board a flight: reliability. How big is that gulf? The answer is about a factor of 1 million, and the methods and technology to bridge this “gap of six zeroes” are only won with decades of experience. Here the West is certainly ahead with its mature aviation industries and governing bodies.
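To make the “six zeroes” concrete, here is a rough, purely illustrative calculation in Python. The consumer-drone failure rate below is an assumed figure chosen only for illustration; the 1e-9 value reflects the widely used certification target for catastrophic failure conditions in transport-category aircraft.

```python
import math

# Illustrative assumption: a small consumer drone suffering roughly one
# serious failure per 1,000 flight hours.
drone_failures_per_flight_hour = 1e-3

# Certification target for catastrophic failure conditions in
# transport-category aircraft: on the order of one per billion flight hours.
airliner_target_per_flight_hour = 1e-9

gap = drone_failures_per_flight_hour / airliner_target_per_flight_hour
print(f"Reliability gap: about 10^{round(math.log10(gap))}")  # -> about 10^6
```

The point is the ratio rather than the exact inputs: closing six orders of magnitude of reliability is what takes decades of engineering discipline and regulatory experience.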

So, can China jump this “gap of six zeroes”? Probably, in due time. Here I point to the example of the Comac C919, an airliner with an uncanny resemblance to the Airbus A320 and Boeing 737, the most intensely competitive segment of the airline market. The C919’s birth was long and painful, backed with the enormous weight of the Chinese state. Despite a bumpy ride, the C919 has survived and is now entering passenger service.

When it comes to eVTOLs, the new technologies of this brave new world are acting as a great leveller, with all sides scrambling to come up with a whole new set of questions that need to be asked, further eroding the West’s historic advantage in aviation. This point stretches further: the West’s history can sometimes act as a hindrance, as there is a well-founded temptation to try and assess these new machines in terms created around familiar designs such as helicopters and light aircraft, and China could use this as another opportunity to get ahead.

The idea of flying cars is great fun, of course, and I’m glad that I live in a world where these machines exist. But I do not think that they represent the future of mass personal air transport beyond a niche slightly larger than that of today’s helicopters for the rich. The industry is probably going to look more like our existing budget air travel, with “sub-regional” airlines operating from quite large public spaces, and looking less glamorous than I would like, with price searches on booking sites, lines, and baggage checks.

Finally, the tribulations of creating eVTOLs can distract from a related wave that exploits the same technologies and plays to China’s existing strengths: unmanned aviation.

The conflict in Ukraine has provided a brutal demonstration of the massive potential here, with attacks being carried out deep into both sides’ territory. Repurposing passenger eVTOLs for cargo-only transport is a sweet spot for electric aviation. I look forward to the flying delivery trucks of the future more than the taxis.

Watch this (air)space…

Everyone on the Internet Will Die. We Need a Plan for Their Data

31 July 2024 at 10:00

The internet is aging. As soon as the 2060s, there may be more dead users than living ones on Facebook. Many of the platforms that are now part of society’s basic infrastructure face a similar prospect. What happens to these platforms—and their users—when they die will become a critical battleground for the internet’s future, with major implications for global power relations. Yet we have done virtually zero preparation for it.

Back in 1996, when John Perry Barlow published his now-legendary “A Declaration of the Independence of Cyberspace,” he boldly stated that the governments of the world—the “giants of flesh and steel,” as he called them—had no dominion over cyberspace. The internet, he declared, was a “new home of Mind” beyond the flesh, where its young and tech-savvy citizens would never age nor decay. We now, of course, know better. But we still tend to see the internet as something that does not age, as if the stuff that happens there is part of a constant flow of novelty, somehow located beyond the material realm. We also tend to think of it as something that has largely to do with youth. In short, we see cyberspace as a space without time.

None of these things are true. We know that everyone using the internet will die, and that hundreds of millions, or even billions, will do so in the next three decades. We also know that this poses a serious threat to an economy based on targeted advertising (the dead don’t click on any ads, but require server space nevertheless). Yet the tech giants appear to have no plan for what to do as their (undeniably material) servers fill up with dead user data. Since dead people generally lack data privacy rights, their data may gain a new commercial value as training data for new AI models, or even be sold back to their descendants in a kind of “heritage as a service” deal. But the ethical aspects are thorny, and the business case shaky.

We also know that whoever seizes control over these data will wield enormous power over our future access to the past. Just consider that one person—Elon Musk, no less—now owns the entirety of the tweets that constitute(d) the #MeToo movement. The same is true for the millions of tweets of BLM, the Arab Spring, and the 2016 U.S. presidential election, to name but a few examples. When future historians seek to understand their past, it is Musk and Mark Zuckerberg who will set the terms.

Experience (and sheer logic) also tells us that the platforms that dominate tech today will sooner or later fail and die. What will happen to the user data? Can it, like other assets, be auctioned to the highest bidder? Will it be used to train new algorithms to trace the users, or their descendants? A hypothetical failure of a DNA testing company that stores our most personal information on its servers is a chilling example. These questions show how the fate of our digital remains is inextricably entangled with the privacy of the living. Yet, as of today, their answers are few and vague, as if the thought of a tech giant dying were beyond our comprehension.

With stakes so high, it is important that we talk about how the internet should age. That we make some kind of plan for what is to become of the past generations with whom we increasingly share it. And that we make sure dying platforms can be dissolved in an orderly manner. Crafting such a plan is not something we can outsource to experts. There is no technological fix. For it is a fundamentally political and even philosophical task. The question(s) we must ask ourselves include how we want to live with the past and its inhabitants (the dead), which principles should guide our stewardship of the digital past, for how long (if at all) should our digital remains be accessible, and for what purpose? 

Today, these questions are almost completely outsourced to the market. The answer to each of them is whatever Big Tech thinks is going to be lucrative. That is not a responsible way of caring for an aging web. Instead, we must begin to think about the internet, and our stewardship over it, as a long-term intergenerational project. Just like globalization forced us all to become cosmopolitans (citizens of the universe) by breaking spatial boundaries, the aging of the internet compels us to become archeopolitans—citizens of an archive—by breaking down temporal boundaries. 

Ever since Barlow’s declaration, we have thought of cyberspace as something fundamentally new and independent of the past. “On behalf of the future, I ask you of the past to leave us alone” as Barlow put it. But by thinking of ourselves, the first digital generation, as archeopolitans, it becomes clear that it is we, the living, who are the newcomers. For the archives have always belonged to the dead. What is new about the online world is just that the living have moved in with them. 

Being a good archeopolitan is to recognize this status, and to take the intergenerational stewardship of the web seriously. The first step in doing so is to make sure there is at least some basic framework to govern how the internet, including its platforms and its users, can age and eventually die with dignity and without threatening the privacy of their descendants. I call upon the governments of the world, these giants of flesh and steel, to begin this task. Otherwise, within a decade, it may be too late.

AI Testing Mostly Uses English Right Now. That’s Risky

24 July 2024 at 11:00
In this photo illustration, the home page of the ChatGPT

Over the last year, governments, academia, and industry have invested considerable resources into investigating the harms of advanced AI. But one massive factor seems to be continuously overlooked: right now, AI’s primary tests and models are confined to English.

Advanced AI could be used in many languages to cause harm, but focusing primarily on English may leave us with only part of the answer. It also ignores those most vulnerable to its harms.

After the release of ChatGPT in November 2022, AI developers expressed surprise at a capability displayed by the model: it could “speak” at least 80 languages, not just English. Over the last year, commentators have pointed out that GPT-4 outperforms Google Translate in dozens of languages. But this focus on English for testing leaves open the possibility that evaluations are neglecting capabilities of AI models that matter more in other languages.

As half the world heads to the ballot box this year, experts have echoed concerns about the capacity of AI systems not only to act as “misinformation superspreaders” but also to threaten the integrity of elections. The threats here range from “deepfakes and voice cloning” to “identity manipulation and AI-produced fake news.” The recent release of multimodal models—AI systems which can also speak, see, and hear everything you do—such as GPT-4o and Gemini Live by tech giants OpenAI and Google seems poised to make this threat even worse. And yet, virtually all discussions on policy, including May’s historic AI Safety Summit in Seoul and the release of the long-anticipated AI Roadmap in the U.S. Senate, neglect non-English languages.

This is not just an issue of leaving some languages out over others. In the U.S., research has consistently demonstrated that English-as-a-Second-Language (ESL) communities, in this context predominantly Spanish-speaking, are more vulnerable to misinformation than English-as-a-Primary-Language (EPL) communities. Such results have been replicated for cases involving migrants generally, both in the United States and in Europe, where refugees have been effective targets—and subjects—of these campaigns. To make matters worse, content moderation guardrails on social media sites—a likely forum where such AI-generated falsehoods would proliferate—are heavily biased towards English. While 90% of Facebook’s users are outside the U.S. and Canada, the company’s content moderators spent just 13% of their working hours focusing on misinformation outside the U.S. The failure of social media platforms to moderate hate speech in Myanmar, Ethiopia, and other countries embroiled in conflict and instability further betrays the language gap in these efforts.

Even as policymakers, corporate executives, and AI experts prepare to combat AI-generated misinformation, their efforts overlook those most likely to be targeted by and vulnerable to such false campaigns, including immigrants and those living in the Global South.

Read More: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic

This discrepancy is even more concerning when it comes to the potential of AI systems to cause mass human casualties, for instance, by being employed to develop and launch a bio-weapon. In 2023, experts expressed fear that large language models (LLMs) could be used to synthesize and deploy pathogens with pandemic potential. Since then, a multitude of research papers investigating this problem have been published both from within and outside industry. A common finding of these reports is that the current generation of AI systems is as good as, but not better than, search engines like Google in providing malevolent actors with hazardous information that could be used to build bio-weapons. Research by leading AI company OpenAI yielded this finding in January 2024, followed by a report by the RAND Corporation which showed a similar result.

What is astonishing about these studies is the near-complete absence of testing in non-English languages. This is especially perplexing as most Western efforts to combat non-state actors are concentrated in regions of the world where English is rarely spoken as a first language. The claim here is not that Pashto, Arabic, Russian, or other languages may yield more dangerous results than English. The claim, instead, is simply that using these languages is a capability jump for non-state actors that are better versed in non-English languages.

Read More: How English’s Global Dominance Fails Us

LLMs are often better translators than traditional services. It is much easier for a terrorist to simply input their query into an LLM in a language of their choice and directly receive an answer in that language. The counterfactual, however, is relying on clunky search engines in their own language, using Google queries that often return only results published on the internet in that language, or going through an arduous process of translation and re-translation, with the risk of meaning being lost along the way. Hence, AI systems are making non-state actors just as capable as if they spoke fluent English. How much better that makes them is something we will find out in the months to come.

This notion—that advanced AI systems may provide results in any language as good as if asked in English—has a wide range of applications. Perhaps the most intuitive example here is “spearphishing,” targeting specific individuals using manipulative techniques to secure information or money from them. Since the popularization of the “Nigerian Prince” scam, experts have posited a basic rule of thumb to protect yourself: if the message seems to be written in broken English with improper grammar, chances are it’s a scam. Now such messages can be crafted by those who have no experience of English, simply by typing their prompt in their native language and receiving a fluent response in English. And this says nothing about how much AI systems may boost scams where the same non-English language is used for both input and output.

It is clear that the “language question” in AI is of paramount importance, and there is much that can be done. This includes new guidelines and requirements from government and academic institutions for testing AI models, and pushing companies to develop new testing benchmarks that remain meaningful in non-English languages. Most importantly, it is vital that immigrants and those in the Global South be better integrated into these efforts. The coalitions working to keep the world safe from AI must start looking more like it.

The Promise and Peril of AI

25 June 2024 at 15:52
A robotic hand reaching out to a butterfly landing on its fingertip

In early 2023, following an international conference that included dialogue with China, the United States released a “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy,” urging states to adopt sensible policies that include ensuring ultimate human control over nuclear weapons. Yet the notion of “human control” itself is hazier than it might seem. If humans authorized a future AI system to “stop an incoming nuclear attack,” how much discretion should it have over how to do so? The challenge is that an AI general enough to successfully thwart such an attack could also be used for offensive purposes.

We need to recognize the fact that AI technologies are inherently dual-use. This is true even of systems already deployed. For instance, the very same drone that delivers medication to a hospital that is inaccessible by road during a rainy season could later carry an explosive to that same hospital. Keep in mind that military operations have for more than a decade been using drones so precise that they can send a missile through a particular window that is literally on the other side of the earth from its operators.

We also have to think through whether we would really want our side to observe a lethal autonomous weapons (LAW) ban if hostile military forces are not doing so. What if an enemy nation sent an AI-controlled contingent of advanced war machines to threaten your security? Wouldn’t you want your side to have an even more intelligent capability to defeat them and keep you safe? This is the primary reason that the “Campaign to Stop Killer Robots” has failed to gain major traction. As of 2024, all major military powers have declined to endorse the campaign, with the notable exception of China, which did so in 2018 but later clarified that it supported a ban on only use, not development—although even this is likely more for strategic and political reasons than moral ones, as autonomous weapons used by the United States and its allies could disadvantage Beijing militarily.

Further, what will “human” even ultimately mean in the context of control when, starting in the 2030s, we introduce a nonbiological addition to our own decision-making with brain–computer interfaces? That nonbiological component will only grow exponentially, while our biological intelligence will stay the same. And as we get to the late 2030s, our thinking itself will be largely nonbiological. Where will the human decision-making be when our own thoughts largely use nonbiological systems?

Instead of pinning our hopes on the unstable distinction between humans and AI, we should focus on how to make AI systems safe and aligned with humanity’s wellbeing. In 2017, I attended the Asilomar Conference on Beneficial AI—a conference inspired by the successful biotechnology safety guidelines established at the 1975 Asilomar Conference on Recombinant DNA—to discuss how the world could safely use artificial intelligence. What resulted from the talks are the Asilomar AI Principles, some of which have already been very influential with AI labs and governments. For example, principle 7 (Failure Transparency: “If an AI system causes harm, it should be possible to ascertain why”) and principle 8 (Judicial Transparency: “Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority”) are closely reflected in both the voluntary commitments from leading tech giants in July 2023, and in President Biden’s executive order several months later.

Efforts to render AI decisions more comprehensible are important, but the basic problem is that, regardless of any explanation they provide, we simply won’t have the capacity to fully understand most of the decisions made by future superintelligent AI. If a Go-playing program far beyond the best human player were able to explain its strategic decisions, for instance, not even the best player in the world (without the assistance of a cybernetic enhancement) would entirely grasp them. One promising line of research aimed at reducing risks from opaque AI systems is “eliciting latent knowledge.” This project is trying to develop techniques that can ensure that if we ask an AI a question, it gives us all the relevant information it knows, instead of just telling us what it thinks we want to hear—which will be a growing risk as machine-learning systems become more powerful.

The Asilomar principles also laudably promote noncompetitive dynamics around AI development, notably principle 18 (AI Arms Race: “An arms race in lethal autonomous weapons should be avoided”) and principle 23 (Common Good: “Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.”). Yet, because superintelligent AI could be a decisive advantage in warfare and bring tremendous economic benefits, military powers will have strong incentives to engage in an arms race for it. Not only does this worsen risks of misuse, but it also increases the chances that safety precautions around AI alignment could be neglected.

Read more: Don’t Fear Artificial Intelligence

It is very difficult to usefully restrict development of any fundamental AI capability, especially since the basic idea behind general intelligence is so broad. Yet there are encouraging signs that major governments are now taking the challenge seriously. Following the international AI Safety Summit 2023 in the UK, the Bletchley Declaration by 28 countries pledged to prioritize safe AI development. And already in 2024, the European Union passed the landmark EU AI Act regulating high-risk systems, and the United Nations adopted a historic resolution “to promote safe, secure and trustworthy artificial intelligence.” Much will depend on how such initiatives are actually implemented. Any early regulation will inevitably make mistakes. The key question is how quickly policymakers can learn and adapt.

One hopeful argument, which is based on the principle of the free market, is that each step toward superintelligence is subject to market acceptance. In other words, artificial general intelligence will be created by humans to solve real human problems, and there are strong incentives to optimize it for beneficial purposes. Since AI is emerging from a deeply integrated economic infrastructure, it will reflect our values, because in an important sense it will be us. We are already a human-machine civilization. Ultimately, the most important approach we can take to keep AI safe is to protect and improve on our human governance and social institutions. The best way to avoid destructive conflict in the future is to continue the advance of our ethical ideals, which has already profoundly reduced violence in recent centuries and decades.

AI is the pivotal technology that will allow us to meet the pressing challenges that confront us, including overcoming disease, poverty, environmental degradation, and all of our human frailties. We have a moral imperative to realize the promise of these new technologies while mitigating the peril. But it won’t be the first time we’ve succeeded in doing so.

When I was growing up, most people around me assumed that nuclear war was almost inevitable. The fact that our species found the wisdom to refrain from using these terrible weapons shines as an example of how we have it in our power to likewise use emerging biotechnology, nanotechnology, and superintelligent AI responsibly. We are not doomed to failure in controlling these perils.

Overall, we should be cautiously optimistic. While AI is creating new technical threats, it will also radically enhance our ability to deal with those threats. As for abuse, since these methods will enhance our intelligence regardless of our values, they can be used for both promise and peril. We should thus work toward a world where the powers of AI are broadly distributed, so that its effects reflect the values of humanity as a whole.

Adapted from The Singularity is Nearer: When We Merge With AI by Ray Kurzweil, published by Viking. Copyright © 2024 by Ray Kurzweil. Reprinted courtesy of Penguin Random House.
