Creative Tool Makers Tout AI Innovation, Sidestep IATSE Job Security Concerns at IBC

20 September 2024 at 16:05
Exhibitors touting AI-driven creative tools at the International Broadcasting Convention, which wrapped earlier this week in Amsterdam with a reported 45,000 attendees from 170 countries, struck a delicate balance in their messaging. Companies such as Adobe and Avid emphasized continued innovation and a desire to empower creatives, while steering clear of anything that might suggest […]

Why Sam Altman Is Leaving OpenAI’s Safety Committee

17 September 2024 at 17:26

OpenAI’s CEO Sam Altman is stepping down from the internal committee that the company created to advise its board on “critical safety and security” decisions amid the race to develop ever more powerful artificial intelligence technology.

The committee, formed in May, had been evaluating OpenAI’s processes and safeguards over a 90-day period. OpenAI published the committee’s recommendations following the assessment on Sept. 16. First on the list: establishing independent governance for safety and security.

As such, Altman, who, in addition to serving on OpenAI’s board, oversees the company’s business operations in his role as CEO, will no longer serve on the safety committee. In line with the committee’s recommendations, OpenAI says the newly independent committee will be chaired by Zico Kolter, Director of the Machine Learning Department at Carnegie Mellon University, who joined OpenAI’s board in August. Other members of the committee will include OpenAI board members Quora co-founder and CEO Adam D’Angelo, retired U.S. Army General Paul Nakasone, and former Sony Entertainment president Nicole Seligman. Along with Altman, OpenAI’s board chair Bret Taylor and several of the company’s technical and policy experts will also step down from the committee.

Read more: The TIME100 Most Influential People in AI 2024

The committee’s other recommendations include enhancing security measures, being transparent about OpenAI’s work, and unifying the company’s safety frameworks. It also said it would explore more opportunities to collaborate with external organizations, like the ones that evaluated OpenAI’s recently released o1 series of reasoning models for dangerous capabilities.

The Safety and Security Committee is not OpenAI’s first stab at creating independent oversight. OpenAI’s for-profit arm, created in 2019, is controlled by a non-profit entity with a “majority independent” board, tasked with ensuring it acts in accordance with its mission of developing safe, broadly beneficial artificial general intelligence (AGI)—a system that surpasses humans in most regards.

In November 2023, OpenAI’s board fired Altman, saying that he had not been “consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.” After employees and investors revolted—and board member and company president Greg Brockman resigned—Altman was swiftly reinstated as CEO, and board members Helen Toner, Tasha McCauley, and Ilya Sutskever resigned. Brockman later returned as president of the company.

Read more: A Timeline of All the Recent Accusations Leveled at OpenAI and Sam Altman

The incident highlighted a key challenge for the rapidly growing company. Critics including Toner and McCauley argue that having a formally independent board isn’t enough of a counterbalance to the strong profit incentives the company faces. Earlier this month, Reuters reported that OpenAI’s ongoing fundraising efforts, which could catapult its valuation to $150 billion, might hinge on changing its corporate structure.

Toner and McCauley say board independence doesn’t go far enough and that governments must play an active role in regulating AI. “Even with the best of intentions, without external oversight, this kind of self-regulation will end up unenforceable,” the former board members wrote in the Economist in May, reflecting on OpenAI’s November boardroom debacle. 

In the past, Altman has urged regulation of AI systems, but OpenAI also lobbied against California’s AI bill, which would mandate safety protocols for developers. Going against the company’s position, more than 30 current and former OpenAI employees have publicly supported the bill.

The Safety and Security Committee’s establishment in late May followed a particularly tumultuous month for OpenAI. Ilya Sutskever and Jan Leike, the two leaders of the company’s “superalignment” team, resigned. The team had focused on ensuring that AI systems remain under human control even if they surpass human-level intelligence, and it was disbanded following their departure. Leike accused OpenAI of prioritizing “shiny products” over safety in a post on X. The same month, OpenAI came under fire for asking departing employees either to sign agreements that prevented them from criticizing the company or to forfeit their vested equity. (OpenAI later said that these provisions had not and would not be enforced and that they would be removed from all exit paperwork going forward.)

Music Producer Accused of Using AI Songs to Scam Streaming Platforms Out of $10 Million in Royalties

By: Gmaddaus
4 September 2024 at 20:12
A music producer was arrested Wednesday and charged with multiple felonies for allegedly scamming more than $10 million in royalties using hundreds of thousands of AI-generated songs. Michael Smith, 52, of Cornelius, N.C., is alleged to have created thousands of bot accounts on platforms like Spotify, Amazon Music and Apple Music. According to the indictment, […]

Exploring Science and Technology Advancements: Transforming Our World

3 September 2024 at 07:41

The field of science and technology is continually evolving, driving significant advancements that shape our world and improve our quality of life. These innovations span various domains, including healthcare, communication, transportation, and environmental sustainability. As an expert in Science and Education, I will delve into some of the most impactful recent advancements in science and…

Exclusive: New Research Finds Stark Global Divide in Ownership of Powerful AI Chips

28 August 2024 at 12:00

When we think of the “cloud,” we often imagine data floating invisibly in the ether. But the reality is far more tangible: the cloud is located in huge buildings called data centers, filled with powerful, energy-hungry computer chips. Those chips, particularly graphics processing units (GPUs), have become a critical piece of infrastructure for the world of AI, as they are required to build and run powerful chatbots like ChatGPT.

As the number of things you can do with AI grows, so does the geopolitical importance of high-end chips—and where they are located in the world. The U.S. and China are competing to amass stockpiles, with Washington enacting sanctions aimed at preventing Beijing from buying the most cutting-edge varieties. But despite the stakes, there is a surprising lack of public data on where exactly the world’s AI chips are located.

A new peer-reviewed paper, shared exclusively with TIME ahead of its publication, aims to fill that gap. “We set out to find: Where is AI?” says Vili Lehdonvirta, the lead author of the paper and a professor at the Oxford Internet Institute. Their findings were stark: GPUs are highly concentrated in only 30 countries, with the U.S. and China far ahead of the rest. Much of the world lies in what the authors call “Compute Deserts”: areas where there are no GPUs for hire at all.

The finding has significant implications not only for the next generation of geopolitical competition, but for AI governance—that is, which governments have the power to regulate how AI is built and deployed. “If the actual infrastructure that runs the AI, or on which the AI is trained, is on your territory, then you can enforce compliance,” says Lehdonvirta, who is also a professor of technology policy at Aalto University. Countries without jurisdiction over AI infrastructure have fewer legislative choices, he argues, leaving them subject to a world shaped by others. “This has implications for which countries shape AI development as well as norms around what is good, safe, and beneficial AI,” says Boxi Wu, one of the paper’s authors.

The paper maps the physical locations of “public cloud GPU compute”—essentially, GPU clusters that are accessible for hire via the cloud businesses of major tech companies. But the research has some big limitations: it doesn’t count GPUs that are held by governments, for example, or in the private hands of tech companies for their use alone. And it doesn’t factor in non-GPU varieties of chips that are increasingly being used to train and run advanced AI. Lastly, it doesn’t count individual chips, but rather the number of compute “regions” (or groups of data centers containing those chips) that cloud businesses make available in each country.
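
To make that methodology concrete, here is a minimal, hypothetical sketch of the kind of tally the paper describes: counting the GPU-enabled cloud regions publicly listed per country and bucketing countries into “Compute North,” “Compute South,” or “Compute Desert” according to the most advanced GPU on offer. The region list, GPU tiers, and country list below are invented placeholders for illustration, not the paper’s data or code.

```python
# Hypothetical sketch: tally publicly advertised GPU-enabled cloud "regions"
# per country and bucket countries by the most advanced GPU available there.
from collections import defaultdict

# Each record: (country, cloud region name, most advanced GPU advertised there)
REGIONS = [
    ("United States", "us-east-1", "H100"),
    ("United States", "us-west-2", "A100"),
    ("China", "cn-north-1", "V100"),
    ("Brazil", "sa-east-1", "T4"),
    # ... a real dataset would cover every provider and every region
]

FRONTIER_GPUS = {"H100"}             # assumed "training-class" tier
OLDER_GPUS = {"A100", "V100", "T4"}  # assumed "inference-class" tier

def tally(regions, all_countries):
    """Return a Compute North / South / Desert label for each country."""
    per_country = defaultdict(set)
    for country, _region, gpu in regions:
        per_country[country].add(gpu)

    buckets = {}
    for country in all_countries:
        gpus = per_country.get(country, set())
        if gpus & FRONTIER_GPUS:
            buckets[country] = "Compute North"
        elif gpus & OLDER_GPUS:
            buckets[country] = "Compute South"
        else:
            buckets[country] = "Compute Desert"
    return buckets

if __name__ == "__main__":
    world = ["United States", "China", "Brazil", "Kenya"]
    for country, bucket in tally(REGIONS, world).items():
        print(f"{country}: {bucket}")
```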

Read More: How ‘Friendshoring’ Made Southeast Asia Pivotal to the AI Revolution

That’s not for want of trying. “GPU quantities and especially how they are distributed across [cloud] providers’ regions,” the paper notes, “are treated as highly confidential information.” Even with the paper’s limitations, its authors argue, the research is the closest up-to-date public estimate of where in the world the most advanced AI chips are located—and a good proxy for the elusive bigger picture.

The paper finds that the U.S. and China have by far the most public GPU clusters in the world. China leads the U.S. in the number of GPU-enabled regions overall; however, the most advanced GPUs are highly concentrated in the United States. The U.S. has eight “regions” where H100 GPUs—the kind that are the subject of U.S. government sanctions on China—are available to hire. China has none. This does not mean that China has no H100s; it only means that cloud companies say they do not have any H100 GPUs located in China. There is a burgeoning black market in China for the restricted chips, the New York Times reported in August, citing intelligence officials and vendors who said that many millions of dollars’ worth of chips had been smuggled into China despite the sanctions.

The paper’s authors argue that the world can be divided into three categories: “Compute North,” where the most advanced chips are located; the “Compute South,” which has some older chips suited for running, but not training, AI systems; and “Compute Deserts,” where no chips are available for hire at all. The terms—which overlap to an extent with the fuzzy “Global North” and “Global South” concepts used by some development economists—are just an analogy intended to draw attention to the “global divisions” in AI compute, Lehdonvirta says. 

The risk of chips being so concentrated in rich economies, says Wu, is that countries in the global south may become reliant on AIs developed in the global north without having a say in how they work. 

It “mirrors existing patterns of global inequalities across the so-called Global North and South,” Wu says, and threatens to “entrench the economic, political and technological power of Compute North countries, with implications for Compute South countries’ agency in shaping AI research and development.”

Exclusive: Workers at Google DeepMind Push Company to Drop Military Contracts

22 August 2024 at 12:00

Nearly 200 workers inside Google DeepMind, the company’s AI division, signed a letter calling on the tech giant to drop its contracts with military organizations earlier this year, according to a copy of the document reviewed by TIME and five people with knowledge of the matter. The letter circulated amid growing concerns inside the AI lab that its technology is being sold to militaries engaged in warfare, in what the workers say is a violation of Google’s own AI rules.

The letter is a sign of a growing dispute within Google between at least some workers in its AI division—which has pledged to never work on military technology—and its Cloud business, which has contracts to sell Google services, including AI developed inside DeepMind, to several governments and militaries including those of Israel and the United States. The signatures represent some 5% of DeepMind’s overall headcount—a small portion to be sure, but a significant level of worker unease for an industry where top machine learning talent is in high demand.

The DeepMind letter, dated May 16 of this year, begins by stating that workers are “concerned by recent reports of Google’s contracts with military organizations.” It does not refer to any specific militaries by name—saying “we emphasize that this letter is not about the geopolitics of any particular conflict.” But it links out to an April report in TIME which revealed that Google has a direct contract to supply cloud computing and AI services to the Israeli Ministry of Defense, under a wider contract with Israel called Project Nimbus. The letter also links to other stories alleging that the Israeli military uses AI to carry out mass surveillance and target selection for its bombing campaign in Gaza, and that Israeli weapons firms are required by the government to buy cloud services from Google and Amazon.

Read More: Exclusive: Google Contract Shows Deal With Israel Defense Ministry

“Any involvement with military and weapon manufacturing impacts our position as leaders in ethical and responsible AI, and goes against our mission statement and stated AI Principles,” the letter that circulated inside Google DeepMind says. (Those principles state the company will not pursue applications of AI that are likely to cause “overall harm,” contribute to weapons or other technologies whose “principal purpose or implementation” is to cause injury, or build technologies “whose purpose contravenes widely accepted principles of international law and human rights.”) The letter says its signatories are concerned with “ensuring that Google’s AI Principles are upheld,” and adds: “We believe [DeepMind’s] leadership shares our concerns.”

A Google spokesperson told TIME: “When developing AI technologies and making them available to customers, we comply with our AI Principles, which outline our commitment to developing technology responsibly. We have been very clear that the Nimbus contract is for workloads running on our commercial cloud by Israeli government ministries, who agree to comply with our Terms of Service and Acceptable Use Policy. This work is not directed at highly sensitive, classified, or military workloads relevant to weapons or intelligence services.”

The letter calls on DeepMind’s leaders to investigate allegations that militaries and weapons manufacturers are Google Cloud users; terminate access to DeepMind technology for military users; and set up a new governance body responsible for preventing DeepMind technology from being used by military clients in the future. Three months on from the letter’s circulation, Google has done none of those things, according to four people with knowledge of the matter. “We have received no meaningful response from leadership,” one said, “and we are growing increasingly frustrated.”

When DeepMind was acquired by Google in 2014, the lab’s leaders extracted a major promise from the search giant: that their AI technology would never be used for military or surveillance purposes. For many years the London-based lab operated with a high degree of independence from Google’s California headquarters. But as the AI race heated up, DeepMind was drawn more tightly into Google proper. A bid by the lab’s leaders in 2021 to secure more autonomy failed, and in 2023 it merged with Google’s other AI team—Google Brain—bringing it closer to the heart of the tech giant. An independent ethics board that DeepMind leaders hoped would govern the uses of the AI lab’s technology ultimately met only once, and was soon replaced by an umbrella Google ethics policy: the AI Principles. While those principles promise that Google will not develop AI that is likely to cause “overall harm,” they explicitly allow the company to develop technologies that may cause harm if it concludes “that the benefits substantially outweigh the risks.” And they do not rule out selling Google’s AI to military clients.

As a result, DeepMind technology has been bundled into Google’s Cloud software and sold to militaries and governments, including Israel and its Ministry of Defense. “While DeepMind may have been unhappy to work on military AI or defense contracts in the past, I do think this isn’t really our decision any more,” one DeepMind employee told TIME in April, asking not to be named because they were not authorized to speak publicly. Several Google workers told TIME in April that for privacy reasons, the company has limited insights into government customers’ use of its infrastructure, meaning that it may be difficult, if not impossible, for Google to check if its acceptable use policy—which forbids users from using its products to engage in “violence that can cause death, serious harm, or injury”—is being broken.

Read More: Google Workers Revolt Over $1.2 Billion Contract With Israel

Google says that Project Nimbus, its contract with Israel, is not “directed at highly sensitive, classified, or military workloads relevant to weapons or intelligence services.” But that response “does not deny the allegations that its technology enables any form of violence or enables surveillance violating internationally accepted norms,” according to the letter that circulated within DeepMind in May. Google’s statement on Project Nimbus “is so specifically unspecific that we are all none the wiser on what it actually means,” one of the letter’s signatories told TIME. 

At a DeepMind town hall event in June, executives were asked to respond to the letter, according to three people with knowledge of the matter. DeepMind’s chief operating officer Lila Ibrahim answered the question. She told employees that DeepMind would not design or deploy any AI applications for weaponry or mass surveillance, and that Google Cloud customers were legally bound by the company’s terms of service and acceptable use policy, according to a set of notes taken during the meeting that were reviewed by TIME. Ibrahim added that she was proud of Google’s track record of advancing safe and responsible AI, and that it was the reason she chose to join, and stay at, the company.

How Will.i.am Is Trying to Reinvent Radio With AI

21 August 2024 at 17:06

Will.i.am has been embracing innovative technology for years. Now he is using artificial intelligence in an effort to transform how we listen to the radio.

The musician, entrepreneur and tech investor has launched RAiDiO.FYI, a set of interactive radio stations themed around topics like sport, pop culture, and politics. Each station is fundamentally interactive: tune in and you’ll be welcomed by name by an AI host “live from the ether,” the Black Eyed Peas frontman tells TIME. Hosts talk about their given topic before playing some music. Unlike previous AI-driven musical products, such as Spotify’s AI DJ, RAiDiO.FYI permits two-way communication: At any point you can press a button to speak with the AI persona about whatever comes to mind.

The stations can be accessed on FYI—which stands for Focus Your Ideas—a communication and collaboration app created by FYI.AI, founded by will.i.am in 2020. Each station exists as a “project” within the app. All the relevant content, including the AI host’s script, music, and segments, is loaded in as a “mega prompt” from which the tool—powered by third-party large language models—can draw. AI personas also have limited web browsing capabilities and can pull information from trusted news sources.
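
For readers curious what this “mega prompt” pattern might look like in practice, here is a minimal, hypothetical sketch: the station’s content is folded into one large prompt, and each listener interaction is answered against it by a language model. The station fields, the build_mega_prompt helper, and the call_llm stub are all invented for illustration; this is not FYI.AI’s actual implementation or API.

```python
# Hypothetical "mega prompt" sketch: station content (host persona, script,
# segments, playlist) is concatenated into one large prompt, and listener
# questions are answered against it. call_llm() is a stand-in, not a real API.

STATION = {
    "persona": "An upbeat AI host for a politics-themed station.",
    "script": "Welcome the listener by name, then introduce today's topics.",
    "segments": ["Top headlines", "Listener Q&A", "Music break"],
    "playlist": ["Track A", "Track B"],
    "trusted_sources": ["https://example-news.org"],  # hypothetical source
}

def build_mega_prompt(station: dict, listener_name: str) -> str:
    """Fold all station content into a single prompt the model can draw from."""
    return "\n".join([
        f"Persona: {station['persona']}",
        f"Script: {station['script']}",
        f"Segments: {', '.join(station['segments'])}",
        f"Playlist: {', '.join(station['playlist'])}",
        f"Trusted sources: {', '.join(station['trusted_sources'])}",
        f"Listener name: {listener_name}",
    ])

def call_llm(system_prompt: str, user_message: str) -> str:
    """Stand-in for a call to a third-party large language model."""
    return f"[model reply grounded in station prompt] You asked: {user_message}"

if __name__ == "__main__":
    prompt = build_mega_prompt(STATION, listener_name="Alex")
    # Pressing the talk button corresponds to sending a user message.
    print(call_llm(prompt, "What's coming up in the next segment?"))
```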

“This is Act One,” will.i.am told TIME while demonstrating RAiDiO.FYI on Aug. 20, National Radio Day in the U.S. While most of the nine currently available stations have been created by the FYI.AI team, “Act Two,” he says, involves partnerships with creators across the entertainment and media industries.

One such partnership has already been struck with the media platform Earn Your Leisure, to promote the organization’s upcoming “Invest Fest” conference in Atlanta. At the event, a “hyper-curated” station is intended to replace the traditional pamphlet or email that might contain all relevant conference information, like speaker profiles and the event’s lineup. Instead, will.i.am explains, the conference organizers can feed all those details—as well as further information, such as the content of speeches—to the AI station, where it can be interacted with directly.

Will.i.am envisions this idea of an interactive text-to-station applying broadly, beyond just radio and conferencing. “It could be learning for tutors and teachers. It could be books for authors. It could be podcast segments for podcasters. It can be whatever it is the owner of that project [wants] when we partner with them to create that station,” he says, emphasizing that the platform creates fresh possibilities for how people engage with content. “It’s liberating for the content maker, because if you had your sponsorships and your advertiser partners, you’re the one who’s deciding who’s on your broadcast.”

This is not will.i.am’s first foray into the world of AI. The artist, who has been using technology to make music for decades, started thinking seriously about the subject in 2004, when he was introduced to its possibilities by pioneering professor and AI expert Patrick Winston. In January, he launched a radio show on SiriusXM that he co-hosts with an AI called Qd.pi.

He also sits on the World Economic Forum’s Fourth Industrial Revolution Advisory Committee and regularly attends the organization’s annual meetings in Davos to discuss how technology shapes society. He previously served as chip manufacturer Intel’s director of creative innovation, and in 2009 launched the i.am Angel Foundation to support young people studying computer science and robotics in the Los Angeles neighborhood where he was raised.

The Black Eyed Peas’ 2010 music video for the song “Imma Be Rocking That Body” begins with a skit in which will.i.am shows off futuristic technology that can replicate any artist’s voice, to his bandmates’ dismay. That technology exists today. FYI’s AI personas may still have the distinctive sound of an AI voice and—like most large language models—may be susceptible to malicious prompting, yet they offer a glimpse of a future that is already taking shape. And it won’t be long before it is not just the station hosts, but the music itself, that is AI-generated, will.i.am says.

Harnessing the Power of Educational Technology

19 August 2024 at 08:14

Educational technology has revolutionized the way we learn and teach, providing innovative tools and resources that enhance the educational experience. From interactive apps to virtual reality, educational technology is transforming traditional classrooms and making learning more accessible and engaging. As an expert in Technology and Gadgets, I will explore the various facets of educational…

Mark Zuckerberg Just Intensified the Battle for AI’s Future

24 July 2024 at 15:45

The tech industry is currently embroiled in a heated debate over the future of AI: should powerful systems be open-source and freely accessible, or closed and tightly monitored for dangers?

On Tuesday, Meta CEO Mark Zuckerberg fired a salvo into this ongoing battle, publishing not just a new series of powerful AI models, but also a manifesto forcefully advocating for the open-source approach. The document, which was widely praised by venture capitalists and tech leaders like Elon Musk and Jack Dorsey, serves as both a philosophical treatise and a rallying cry for proponents of open-source AI development. It arrives as intensifying global efforts to regulate AI have galvanized resistance from open-source advocates, who see some of those potential laws as threats to innovation and accessibility.

At the heart of Meta’s announcement on Tuesday was the release of its latest generation of Llama large language models, the company’s answer to ChatGPT. The biggest of these new models, Meta claims, is the first open-source large language model to reach the so-called “frontier” of AI capabilities.

Meta has taken a very different approach to AI than its competitors OpenAI, Google DeepMind, and Anthropic. Those companies sell access to their AIs through web browsers or interfaces known as APIs, a strategy that allows them to protect their intellectual property, monitor the use of their models, and bar bad actors from using them. By contrast, Meta has chosen to open-source the “weights,” or the underlying neural networks, of its Llama models—meaning they can be freely downloaded by anybody and run on their own machines. That strategy has put Meta’s competitors under financial pressure, and has won it many fans in the software world. But Meta’s approach has also been criticized by many in the field of AI safety, who warn that open-sourcing powerful AI models has already led to societal harms like deepfakes, and could in the future open a Pandora’s box of worse dangers.

In his manifesto, Zuckerberg argues most of those concerns are unfounded and frames Meta’s strategy as a democratizing force in AI development. “Open-source will ensure that more people around the world have access to the benefits and opportunities of AI, that power isn’t concentrated in the hands of a small number of companies, and that the technology can be deployed more evenly and safely across society,” he writes. “It will make the world more prosperous and safer.” 

But while Zuckerberg’s letter presents Meta as on the side of progress, it is also a deft political move. Recent polling suggests that the American public would welcome laws that restrict the development of potentially-dangerous AI, even if it means hampering some innovation. And several pieces of AI legislation around the world, including the SB1047 bill in California, and the ENFORCE Act in Washington, D.C., would place limits on the kinds of systems that companies like Meta can open-source, due to safety concerns. Many of the venture capitalists and tech CEOs who celebrated Zuckerberg’s letter after its publication have in recent weeks mounted a growing campaign to shape public opinion against regulations that would constrain open-source AI releases. “This letter is part of a broader trend of some Silicon Valley CEOs and venture capitalists refusing to take responsibility for damages their AI technology may cause,” says Andrea Miotti, the executive director of AI safety group Control AI. “Including catastrophic outcomes.”


The philosophical underpinnings for Zuckerberg’s commitment to open-source, he writes, stem from his company’s long struggle against Apple, which via its iPhone operating system constrains what Meta can build, and which via its App Store takes a cut of Meta’s revenue. He argues that building an open ecosystem—in which Meta’s models become the industry standard due to their customizability and lack of constraints—will benefit both Meta and those who rely on its models, harming only rent-seeking companies who aim to lock in users. (Critics point out, however, that the Llama models, while more accessible than their competitors, still come with usage restrictions that fall short of true open-source principles.) Zuckerberg also argues that closed AI providers have a business model that relies on selling access to their systems—and suggests that their concerns about the dangers of open-source, including lobbying governments against it, may stem from this conflict of interest.

Addressing worries about safety, Zuckerberg writes that open-source AI will be better at addressing “unintentional” types of harm than the closed alternative, due to the nature of transparent systems being more open to scrutiny and improvement. “Historically, open-source software has been more secure for this reason,” he writes. As for intentional harm, like misuse by bad actors, Zuckerberg argues that “large-scale actors” with high compute resources, like companies and governments, will be able to use their own AI to police “less sophisticated actors” misusing open-source systems. “As long as everyone has access to similar generations of models—which open-source promotes—then governments and institutions with more compute resources will be able to check bad actors with less compute,” he writes.

But “not all ‘large actors’ are benevolent,” says Hamza Tariq Chaudhry, a U.S. policy specialist at the Future of Life Institute, a nonprofit focused on AI risk. “The most authoritarian states will likely repurpose models like Llama to perpetuate their power and commit injustices.” Chaudhry, who is originally from Pakistan, adds: “Coming from the Global South, I am acutely aware that AI-powered cyberattacks, disinformation campaigns and other harms pose a much greater danger to countries with nascent institutions and severe resource constraints, far away from Silicon Valley.”

Zuckerberg’s argument also doesn’t address a central worry held by many people concerned with AI safety: the risk that AI could create an “offense-defense asymmetry,” or in other words strengthen attackers while doing little to strengthen defenders. “Zuckerberg’s statements showcase a concerning disregard for basic security in Meta’s approach to AI,” says Miotti, the director of Control AI. “When dealing with catastrophic dangers, it’s a simple fact that offense needs only to get lucky once, but defense needs to get lucky every time. A virus can spread and kill in days, while deploying a treatment can take years.”

Later in his letter, Zuckerberg addresses other worries that open-source AI will allow China to gain access to the most powerful AI models, potentially harming U.S. national security interests. He says he believes that closing off models “will not work and will only disadvantage the U.S. and its allies.” China is good at espionage, he argues, adding that “most tech companies are far from” the level of security that would prevent China from being able to steal advanced AI model weights. “It seems most likely that a world of only closed models results in a small number of big companies plus our geopolitical adversaries having access to leading models, while startups, universities, and small businesses miss out on opportunities,” he writes. “Plus, constraining American innovation to closed development increases the chance that we don’t lead at all.”

Miotti is unimpressed by the argument. “Zuckerberg admits that advanced AI technology is easily stolen by hostile actors,” he says, “but his solution is to just give it to them for free.”

Could AIs become conscious? Right now, we have no way to tell.

10 July 2024 at 11:00

Advances in artificial intelligence are making it increasingly difficult to distinguish between uniquely human behaviors and those that can be replicated by machines. Should artificial general intelligence (AGI) arrive in full force—artificial intelligence that surpasses human intelligence—the boundary between human and computer capabilities would all but disappear.

In recent months, a significant swath of journalistic bandwidth has been devoted to this potentially dystopian topic. If AGI machines develop the ability to consciously experience life, the moral and legal considerations we’ll need to give them will rapidly become unwieldy. They will have feelings to consider, thoughts to share, intrinsic desires, and perhaps fundamental rights as newly minted beings. On the other hand, if AI does not develop consciousness—and instead simply the capacity to out-think us in every conceivable situation—we might find ourselves subservient to a vastly superior yet sociopathic entity.

Neither potential future feels all that cozy, and both require an answer to exceptionally mind-bending questions: What exactly is consciousness? And will it remain a biological trait, or could it ultimately be shared by the AGI devices we’ve created?

Exclusive: U.S. Voters Value Safe AI Development Over Racing Against China, Poll Shows

8 July 2024 at 16:46

A large majority of American voters are skeptical of the argument that the U.S. should race ahead to build ever more powerful artificial intelligence, unconstrained by domestic regulations, in an effort to compete with China, according to new polling shared exclusively with TIME.

The findings indicate that American voters disagree with a common narrative advanced by the tech industry, in which CEOs and lobbyists have repeatedly argued that the U.S. must tread carefully with AI regulation in order not to hand the advantage to its geopolitical rival. And they reveal a startling level of bipartisan consensus on AI policy, with both Republicans and Democrats supporting the government placing some limits on AI development in the interest of safety and national security.

According to the poll, 75% of Democrats and 75% of Republicans believe that “taking a careful controlled approach” to AI—by preventing the release of tools that terrorists and foreign adversaries could use against the U.S.—is preferable to “moving forward on AI as fast as possible to be the first country to get extremely powerful AI.” A majority of voters support more stringent security practices at AI companies, and are worried about the risk of China stealing their most powerful models, the poll shows. 

The poll was carried out in late June by the AI Policy Institute (AIPI), a U.S. nonprofit that advocates for “a more cautious path” in AI development. The findings show that 50% of voters believe the U.S. should use its advantage in the AI race to prevent any country from building a powerful AI system, by enforcing “safety restrictions and aggressive testing requirements.” That’s compared to just 23% who believe the U.S. should try to build powerful AI as fast as possible to outpace China and achieve a decisive advantage over Beijing.

The polling also suggests that voters may be broadly skeptical of “open-source” AI, or the view that tech companies should be allowed to release the source code of their powerful AI models. Some technologists argue that open-source AI encourages innovation and reduces the monopoly power of the biggest tech companies. But others say it is a recipe for danger as AI systems grow more powerful and unpredictable. 

“What I perceive from the polling is that stopping AI development is not seen as an option,” says Daniel Colson, the executive director of the AIPI. “But giving industry free rein is also seen as risky. And so there’s the desire for some third way. And when we present that in the polling—that third path, mitigated AI development with guardrails—is the one that people overwhelmingly want.” 

The survey also shows that 63% of American voters think it should be illegal to export powerful AI models to potential U.S. adversaries like China, including 73% of Republicans and 59% of Democrats. Just 14% of voters disagree. 

A sample of 1,040 Americans was interviewed for the survey, which was representative by education level, gender, race, and respondents’ vote in the 2020 presidential election. The reported margin of error is plus or minus 3.4 percentage points.
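
As a rough, hypothetical check on where a figure like that comes from: the margin of error for a simple random sample of 1,040 people at 95% confidence works out to about plus or minus 3.0 points, and the variance inflation introduced by weighting a sample can plausibly bring it to the reported 3.4. The design-effect value in the sketch below is an assumption for illustration; the article does not describe AIPI’s exact methodology.

```python
# Back-of-the-envelope margin-of-error check, assuming a simple random sample.
# The design effect used here is an assumption, not AIPI's stated methodology.
import math

n = 1_040  # respondents
p = 0.5    # worst-case proportion (maximizes the margin)
z = 1.96   # 95% confidence level

simple_moe = z * math.sqrt(p * (1 - p) / n)
print(f"Unweighted margin of error: +/-{simple_moe:.1%}")   # about +/-3.0%

# Weighting to match education, gender, race, and past vote inflates variance;
# a modest assumed design effect (~1.25) pushes the margin to roughly +/-3.4%.
design_effect = 1.25
adjusted_moe = simple_moe * math.sqrt(design_effect)
print(f"With assumed design effect: +/-{adjusted_moe:.1%}") # about +/-3.4%
```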

So far there has been no comprehensive AI regulation in the U.S., with the White House encouraging different government agencies to regulate the technology themselves where it falls under their existing remits. That strategy appears to have been put in jeopardy, however, by a recent Supreme Court ruling that limits the ability of federal agencies to apply broad-brushstroke rules set by Congress to specific, or new, circumstances.

“Congress is so slow to act that there’s a lot of interest in being able to delegate authorities to existing agencies or a new agency, to increase the responsiveness of government” when it comes to AI policy, Colson says. “This [ruling] definitely makes that harder.”

Even if federal AI legislation seems unlikely any time soon, let alone before the 2024 election, recent polling by the AIPI and others suggests that voters aren’t as polarized on AI as they are on other issues facing the nation. Earlier polling by the AIPI found that 75% of Democrats and 80% of Republicans believe that U.S. AI policy should seek to prevent AI from quickly reaching superhuman capabilities. The polls also showed that 83% of Americans believe AI could accidentally cause a catastrophic event, and that 82% prefer slowing down AI development to account for that risk, compared to just 8% who would like to see it accelerated.
