The Gap Between Open and Closed AI Models Might Be Shrinking. Here’s Why That Matters

Today’s best AI models, like OpenAI’s ChatGPT and Anthropic’s Claude, come with conditions: their creators control the terms on which they are accessed to prevent them being used in harmful ways. This is in contrast with ‘open’ models, which can be downloaded, modified, and used by anyone for almost any purpose. A new report by non-profit research organization Epoch AI found that open models available today are about a year behind the top closed models.

“The best open model today is on par with closed models in performance, but with a lag of about one year,” says Ben Cottier, lead researcher on the report.

Meta’s Llama 3.1 405B, an open model released in July, took about 16 months to match the capabilities of the first version of GPT-4. If Meta’s next-generation AI, Llama 4, is released as an open model, as it is widely expected to be, this gap could shrink even further. The findings come as policymakers grapple with how to deal with increasingly powerful AI systems, which have already been reshaping information environments ahead of elections across the world, and which some experts worry could one day be capable of engineering pandemics, executing sophisticated cyberattacks, and causing other harms to humans.

Researchers at Epoch AI analyzed hundreds of notable models released since 2018. To arrive at their results, they measured the performance of top models on technical benchmarks—standardized tests that measure an AI’s ability to handle tasks like solving math problems, answering general knowledge questions, and demonstrating logical reasoning. They also looked at how much computing power, or compute, was used to train them, since that has historically been a good proxy for capabilities, though open models can sometimes perform as well as closed models while using less compute, thanks to advancements in the efficiency of AI algorithms. “The lag between open and closed models provides a window for policymakers and AI labs to assess frontier capabilities before they become available in open models,” Epoch researchers write in the report.
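
Epoch AI’s full methodology is more involved, but the core of a lag estimate like this can be sketched with a simple, hedged calculation: for each open model, find the earliest closed model that had already matched its benchmark score, and measure the time between the two release dates. The records and numbers below are invented for illustration; they are not Epoch AI’s data or code.

```python
from datetime import date

# Hypothetical (release_date, benchmark_score, is_open) records.
# The dates and scores are invented for illustration, not Epoch AI's data.
models = [
    (date(2023, 3, 14), 86.4, False),  # closed frontier model
    (date(2023, 11, 6), 88.0, False),  # later closed model
    (date(2024, 4, 18), 82.0, True),   # open model
    (date(2024, 7, 23), 87.3, True),   # stronger open model
]

def open_model_lags(records):
    """For each open model, count the days since a closed model first
    reached the same (or a higher) benchmark score."""
    closed = sorted((d, s) for d, s, is_open in records if not is_open)
    lags = []
    for d_open, s_open, is_open in records:
        if not is_open:
            continue
        earlier_matches = [d for d, s in closed if s >= s_open and d <= d_open]
        if earlier_matches:
            lags.append((d_open - min(earlier_matches)).days)
    return lags

print(open_model_lags(models))  # [401, 260] with these made-up numbers
```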

Read More: The Researcher Trying to Glimpse the Future of AI

But the distinction between ‘open’ and ‘closed’ AI models is not as simple as it might appear. While Meta describes its Llama models as open-source, they don’t meet the new definition published last month by the Open Source Initiative, which has historically set the industry standard for what constitutes open source. The new definition requires companies to share not just the model itself, but also the data and code used to train it. Meta releases its model “weights”—long lists of numbers that allow users to download and modify the model—but it doesn’t release either the training data or the code used to train the models. Before downloading a model, users must agree to an Acceptable Use Policy that prohibits military use and other harmful or illegal activities, although once models are downloaded, these restrictions are difficult to enforce in practice.
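
For readers unfamiliar with what “releasing the weights” means in practice, here is a minimal sketch of downloading and running an openly released model, assuming the Hugging Face transformers library, sufficient hardware, and that the user has already accepted the license and Acceptable Use Policy online; the repository name is illustrative.

```python
# Minimal sketch: downloading and running openly released model weights.
# Assumes the Hugging Face `transformers` library is installed, the machine
# can hold the weights, and the user has already accepted the model's
# license and Acceptable Use Policy; the repository name is illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "meta-llama/Llama-3.1-8B-Instruct"  # a smaller Llama variant, for illustration

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)  # fetches the weight files

prompt = "In one sentence, what are model weights?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Once the files are on disk, the same weights can be fine-tuned or modified locally, which is exactly why the restrictions in an Acceptable Use Policy are hard to enforce after download.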

Meta says it disagrees with the Open Source Initiative’s new definition. “There is no single open source AI definition, and defining it is a challenge because previous open source definitions do not encompass the complexities of today’s rapidly advancing AI models,” a Meta spokesperson told TIME in an emailed statement. “We make Llama free and openly available, and our license and Acceptable Use Policy help keep people safe by having some restrictions in place. We will continue working with OSI and other industry groups to make AI more accessible and free responsibly, regardless of technical definitions.”

Making AI models open is widely seen to be beneficial because it democratizes access to technology and drives innovation and competition. “One of the key things that open communities do is they get a wider, geographically more-dispersed, and more diverse community involved in AI development,” says Elizabeth Seger, director of digital policy at Demos, a U.K.-based think tank. Open communities, which include academic researchers, independent developers, and non-profit AI labs, also drive innovation through collaboration, particularly in making technical processes more efficient. “They don’t have the same resources to play with as Big Tech companies, so being able to do a lot more with a lot less is really important,” says Seger. In India, for example, “AI that’s built into public service delivery is almost completely built off of open source models,” she says. 

Open models also enable greater transparency and accountability. “There needs to be an open version of any model that becomes basic infrastructure for society, because we do need to know where the problems are coming from,” says Yacine Jernite, machine learning and society lead at Hugging Face, a company that maintains the digital infrastructure where many open models are hosted. He points to the example of Stable Diffusion 2, an open image generation model that allowed researchers and critics to examine its training data and push back against potential biases or copyright infringements—something impossible with closed models like OpenAI’s DALL-E. “You can do that much more easily when you have the receipts and the traces,” he says.

Read More: The Heated Debate Over Who Should Control Access to AI

However, the fact that open models can be used by anyone creates inherent risks, as people with malicious intentions can use them for harm, such as producing child sexual abuse material, or they could even be used by rival states. Last week, Reuters reported that Chinese research institutions linked to the People’s Liberation Army had used an old version of Meta’s Llama model to develop an AI tool for military use, underscoring the fact that, once a model has been publicly released, it cannot be recalled. Chinese companies such as Alibaba have also developed their own open models, which are reportedly competitive with their American counterparts.

On Monday, Meta announced it would make its Llama models available to U.S. government agencies, including those working on defense and national security applications, and to private companies supporting government work, such as Lockheed Martin, Anduril, and Palantir. The company argues that American leadership in open-source AI is both economically advantageous and crucial for global security.

Closed proprietary models present their own challenges. While they are more secure, because access is controlled by their developers, they are also more opaque. Third parties cannot inspect the data on which the models are trained to search for bias, copyrighted material, and other issues. Organizations using AI to process sensitive data may choose to avoid closed models due to privacy concerns. And while these models have stronger guardrails built in to prevent misuse, many people have found ways to ‘jailbreak’ them, effectively circumventing these guardrails.

Governance challenges

At present, the safety of closed models is primarily in the hands of private companies, although government institutions such as the U.S. AI Safety Institute (AISI) are increasingly playing a role in safety-testing models ahead of their release. In August, the U.S. AISI signed formal agreements with Anthropic to enable “formal collaboration on AI safety research, testing and evaluation”.

Because of the lack of centralized control, open models present distinct governance challenges—particularly in relation to the most extreme risks that future AI systems could pose, such as empowering bioterrorists or enhancing cyberattacks. How policymakers should respond depends on whether the capabilities gap between open and closed models is shrinking or widening. “If the gap keeps getting wider, then when we talk about frontier AI safety, we don’t have to worry so much about open ecosystems, because anything we see is going to be happening with closed models first, and those are easier to regulate,” says Seger. “However, if that gap is going to get narrower, then we need to think a lot harder about if and how and when to regulate open model development, which is an entire other can of worms, because there’s no central, regulatable entity.”

For companies such as OpenAI and Anthropic, selling access to their models is central to their business model. “A key difference between Meta and closed model providers is that selling access to AI models isn’t our business model,” Meta CEO Mark Zuckerberg wrote in an open letter in July. “We expect future Llama models to become the most advanced in the industry. But even before that, Llama is already leading on openness, modifiability, and cost efficiency.”

Measuring the abilities of AI systems is not straightforward. “Capabilities is not a term that’s defined in any way, shape or form, which makes it a terrible thing to discuss without common vocabulary,” says Jernite. “There are many things you can do with open models that you can’t do with closed models,” he says, emphasizing that open models can be adapted to a range of use-cases, and that they may outperform closed models when trained for specific tasks.

Ethan Mollick, a Wharton professor and popular commentator on the technology, argues that even if there were no further progress in AI, it would likely take years before these systems are fully integrated into our world. With new capabilities being added to AI systems at a steady rate—in October, frontier AI lab Anthropic introduced the ability for its model to directly control a computer, still in beta—the complexity of governing this technology will only increase.

In response, Seger says that it is vital to tease out exactly what risks are at stake. “We need to establish very clear threat models outlining what the harm is and how we expect openness to lead to the realization of that harm, and then figure out the best point along those individual threat models for intervention.”

What Teenagers Really Think About AI

American teenagers believe addressing the potential risks of artificial intelligence should be a top priority for lawmakers, according to a new poll that provides the first in-depth look into young people’s concerns about the technology.

The poll, carried out by youth-led advocacy group the Center for Youth and AI and polling organization YouGov, and shared exclusively with TIME, reveals a level of concern that rivals longstanding issues like social inequality and climate change.

The poll of 1,017 U.S. teens aged 13 to 18 was carried out in late July and early August, and found that 80% of respondents believed it was “extremely” or “somewhat” important for lawmakers to address the risks posed by AI, ranking just below healthcare access and affordability among the issues they said were a top priority. That surpassed social inequality (78%) and climate change (77%).

Although the sample size is fairly small, it gives an insight into how young people are thinking about technology, which has often been embedded in their lives from an early age. “I think our generation has a unique perspective,” says Saheb Gulati, 17, who co-founded the Center for Youth and AI with Jason Hausenloy, 19. “That’s not in spite of our age, but specifically because of it.” Because today’s teens have grown up using digital technology, Gulati says, they have confronted questions of its societal impacts more than older generations.

Read More: 5 Steps Parents Should Take to Help Kids Use AI Safely

There has been more research about how young people are using AI, for example to help or cheat with schoolwork, says Rachel Hanebutt, an assistant professor at Georgetown University’s Thrive Center who helped advise on the poll’s analysis. “Some of those can feel a little superficial and not as focused on what teens and young people think about AI and its role in their future, which I think is where this brings a lot of value.”

The findings show that nearly half of the respondents use ChatGPT or similar tools several times per week, aligning with another recent poll that suggests teens have embraced AI faster than their parents. But being early adopters hasn’t translated into “full-throated optimism,” Hausenloy says.

Teens are at the heart of many debates over artificial intelligence, from the impact of social media algorithms to deepfake nudes. This week it emerged that a mother is suing Character.ai and Google after her son allegedly became obsessed with the chatbot before taking his own life. Yet, “ages 13 to 18 are not always represented in full political polls,” says Hanebutt. This research gives adults a better understanding of “what teens and young people think about AI and its role in their future,” rather than just how they’re using it, Hanebutt says. She notes the need for future polling that explores how teenagers expect lawmakers to act on the issue.

Read More: Column: How AI-Powered Tech Can Harm Children

While the poll didn’t ask about specific policies, it does offer insight into the AI risks of concern to the greatest number of teens, with immediate threats topping the list. AI-generated misinformation worried the largest proportion of respondents at 59%, closely followed by deepfakes at 58%. However, the poll reveals that many young people are also concerned about the technology’s longer-term trajectory, with 47% saying they are concerned about the potential for advanced autonomous AI to escape human control. Nearly two-thirds said they consider the implications of AI when planning their career.

Hausenloy says that the poll is just the first step in the Center for Youth and AI’s ambitions to ensure young people are “represented, prepared and protected” when it comes to AI.

The poll suggests that, despite concerns in other areas, young people are generally supportive of AI-generated creative works. More than half of respondents (57%) were in favor of AI-generated art, film, and music, while only 26% opposed it. Less than a third of teens were concerned about AI copyright violations.

On the question of befriending AI, respondents were divided, with 46% saying AI companionship is acceptable compared with 44% saying it’s unacceptable. On the other hand, most teens (68%) opposed romantic relationships with AI, compared to only 24% who find them acceptable.

Read more: AI-Human Romances Are Flourishing—And This Is Just the Beginning

“This is the first and most comprehensive view on youth attitudes on AI I have ever seen,” says Sneha Revanur, founder and president of Encode Justice, a youth-led, AI-focused civil-society group, which helped advise on the survey questions. Revanur was the youngest participant at a White House roundtable about AI back in July 2023, and more recently the youngest to participate in the 2024 World Economic Forum in Davos.

In the past, Revanur says, Encode Justice was speaking on behalf of its generation without hard numbers to back it up, but “we’ll be coming into future meetings with policymakers armed with this data, and armed with the fact that we do actually have a fair amount of young people who are thinking about these risks.”

Read more: U.S. Voters Value Safe AI Development Over Racing Against China, Poll Shows

She points to the California Senate Bill 1047—which would have required AI companies to implement safety measures to protect the public from potential harms from their technology—as a case where public concerns about the technology were overlooked. “In California, we just saw Governor Gavin Newsom veto a sweeping AI safety bill that was supported by a broad coalition, including our organization, Anthropic, Elon Musk, actors in Hollywood and labor unions,” Revanur says. “That was the first time that we saw this splintering in the narrative that the public doesn’t care about AI policy. And I think that this poll is actually just one more crack in that narrative.”

Why Sam Altman Is Leaving OpenAI’s Safety Committee

OpenAI’s CEO Sam Altman is stepping down from the internal committee that the company created to advise its board on “critical safety and security” decisions amid the race to develop ever more powerful artificial intelligence technology.

The committee, formed in May, had been evaluating OpenAI’s processes and safeguards over a 90-day period. OpenAI published the committee’s recommendations following the assessment on Sept. 16. First on the list: establishing independent governance for safety and security.

As such, Altman, who, in addition to serving on OpenAI’s board, oversees the company’s business operations in his role as CEO, will no longer serve on the safety committee. In line with the committee’s recommendations, OpenAI says the newly independent committee will be chaired by Zico Kolter, Director of the Machine Learning Department at Carnegie Mellon University, who joined OpenAI’s board in August. Other members of the committee will include OpenAI board members Quora co-founder and CEO Adam D’Angelo, retired U.S. Army General Paul Nakasone, and former Sony Entertainment president Nicole Seligman. Along with Altman, OpenAI’s board chair Bret Taylor and several of the company’s technical and policy experts will also step down from the committee.

Read more: The TIME100 Most Influential People in AI 2024

The committee’s other recommendations include enhancing security measures, being transparent about OpenAI’s work, and unifying the company’s safety frameworks. It also said it would explore more opportunities to collaborate with external organizations, like those that evaluated OpenAI’s recently released o1 series of reasoning models for dangerous capabilities.

The Safety and Security Committee is not OpenAI’s first stab at creating independent oversight. OpenAI’s for-profit arm, created in 2019, is controlled by a non-profit entity with a “majority independent” board, tasked with ensuring it acts in accordance with its mission of developing safe, broadly beneficial artificial general intelligence (AGI)—a system that surpasses humans in most regards.

In November, OpenAI’s board fired Altman, saying that he had not been “consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.” After employees and investors revolted—and board member and company president Greg Brockman resigned in protest—Altman was swiftly reinstated as CEO, and board members Helen Toner, Tasha McCauley, and Ilya Sutskever stepped down. Brockman later returned as president of the company.

Read more: A Timeline of All the Recent Accusations Leveled at OpenAI and Sam Altman

The incident highlighted a key challenge for the rapidly growing company. Critics including Toner and McCauley argue that having a formally independent board isn’t enough of a counterbalance to the strong profit incentives the company faces. Earlier this month, Reuters reported that OpenAI’s ongoing fundraising efforts, which could catapult its valuation to $150 billion, might hinge on changing its corporate structure.

Toner and McCauley say board independence doesn’t go far enough and that governments must play an active role in regulating AI. “Even with the best of intentions, without external oversight, this kind of self-regulation will end up unenforceable,” the former board members wrote in the Economist in May, reflecting on OpenAI’s November boardroom debacle. 

In the past, Altman has urged regulation of AI systems, but OpenAI also lobbied against California’s AI bill, which would mandate safety protocols for developers. Going against the company’s position, more than 30 current and former OpenAI employees have publicly supported the bill.

The Safety and Security Committee’s establishment in late May followed a particularly tumultuous month for OpenAI. Ilya Sutskever and Jan Leike, the two leaders of the company’s “superalignment” team, which focused on ensuring that AI systems remain under human control even if they surpass human-level intelligence, both resigned. Leike accused OpenAI of prioritizing “shiny products” over safety in a post on X. The team was disbanded following their departure. The same month, OpenAI came under fire for asking departing employees to sign agreements that prevented them from criticizing the company, or else forfeit their vested equity. (OpenAI later said that these provisions had not and would not be enforced, and that they would be removed from all exit paperwork going forward.)

Exclusive: New Research Finds Stark Global Divide in Ownership of Powerful AI Chips

When we think of the “cloud,” we often imagine data floating invisibly in the ether. But the reality is far more tangible: the cloud is located in huge buildings called data centers, filled with powerful, energy-hungry computer chips. Those chips, particularly graphics processing units (GPUs), have become a critical piece of infrastructure for the world of AI, as they are required to build and run powerful chatbots like ChatGPT.

As the number of things you can do with AI grows, so does the geopolitical importance of high-end chips—and where they are located in the world. The U.S. and China are competing to amass stockpiles, with Washington enacting sanctions aimed at preventing Beijing from buying the most cutting-edge varieties. But despite the stakes, there is a surprising lack of public data on where exactly the world’s AI chips are located.

A new peer-reviewed paper, shared exclusively with TIME ahead of its publication, aims to fill that gap. “We set out to find: Where is AI?” says Vili Lehdonvirta, the lead author of the paper and a professor at Oxford University’s Internet Institute. Their findings were stark: GPUs are highly concentrated in only 30 countries in the world, with the U.S. and China far out ahead. Much of the world lies in what the authors call “Compute Deserts”: areas where there are no GPUs for hire at all.

The finding has significant implications not only for the next generation of geopolitical competition, but for AI governance—that is, which governments have the power to regulate how AI is built and deployed. “If the actual infrastructure that runs the AI, or on which the AI is trained, is on your territory, then you can enforce compliance,” says Lehdonvirta, who is also a professor of technology policy at Aalto University. Countries without jurisdiction over AI infrastructure have fewer legislative choices, he argues, leaving them subject to a world shaped by others. “This has implications for which countries shape AI development as well as norms around what is good, safe, and beneficial AI,” says Boxi Wu, one of the paper’s authors.

The paper maps the physical locations of “public cloud GPU compute”—essentially, GPU clusters that are accessible for hire via the cloud businesses of major tech companies. But the research has some big limitations: it doesn’t count GPUs that are held by governments, for example, or in the private hands of tech companies for their use alone. And it doesn’t factor in non-GPU varieties of chips that are increasingly being used to train and run advanced AI. Lastly, it doesn’t count individual chips, but rather the number of compute “regions” (or groups of data centers containing those chips) that cloud businesses make available in each country.
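
To make the unit of analysis concrete, here is a hedged sketch of the kind of tally the paper describes: counting GPU-enabled cloud regions per country, rather than individual chips. The provider names and records are made up for illustration and are not the paper’s dataset or code.

```python
from collections import Counter

# Hypothetical (provider, country, most_advanced_gpu) records for cloud
# regions that rent out GPUs. Invented for illustration, not the paper's data.
regions = [
    ("ProviderA", "United States", "H100"),
    ("ProviderA", "United States", "A100"),
    ("ProviderA", "China", "A100"),
    ("ProviderB", "United States", "H100"),
    ("ProviderB", "Ireland", "A100"),
]

# Count GPU-enabled regions per country (the paper's unit of analysis),
# then count regions offering the most advanced chips separately.
all_gpu_regions = Counter(country for _, country, _ in regions)
h100_regions = Counter(country for _, country, gpu in regions if gpu == "H100")

print("GPU-enabled regions:", dict(all_gpu_regions))
print("H100 regions:", dict(h100_regions))
# Countries that appear in neither tally would fall into what the authors
# call "Compute Deserts": no GPUs available for hire at all.
```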

Read More: How ‘Friendshoring’ Made Southeast Asia Pivotal to the AI Revolution

That’s not for want of trying. “GPU quantities and especially how they are distributed across [cloud] providers’ regions,” the paper notes, “are treated as highly confidential information.” Even with the paper’s limitations, its authors argue, the research is the closest up-to-date public estimate of where in the world the most advanced AI chips are located—and a good proxy for the elusive bigger picture.

The paper finds that the U.S. and China have by far the most public GPU clusters in the world. China leads the U.S. on the number of GPU-enabled regions overall; however, the most advanced GPUs are highly concentrated in the United States. The U.S. has eight “regions” where H100 GPUs—the kind that are the subject of U.S. government sanctions on China—are available to hire. China has none. This does not mean that China has no H100s; it only means that cloud companies say they do not have any H100 GPUs located in China. There is a burgeoning black market in China for the restricted chips, the New York Times reported in August, citing intelligence officials and vendors who said that many millions of dollars’ worth of chips had been smuggled into China despite the sanctions.

The paper’s authors argue that the world can be divided into three categories: “Compute North,” where the most advanced chips are located; the “Compute South,” which has some older chips suited for running, but not training, AI systems; and “Compute Deserts,” where no chips are available for hire at all. The terms—which overlap to an extent with the fuzzy “Global North” and “Global South” concepts used by some development economists—are just an analogy intended to draw attention to the “global divisions” in AI compute, Lehdonvirta says. 

The risk of chips being so concentrated in rich economies, says Wu, is that countries in the global south may become reliant on AIs developed in the global north without having a say in how they work. 

It “mirrors existing patterns of global inequalities across the so-called Global North and South,” Wu says, and threatens to “entrench the economic, political and technological power of Compute North countries, with implications for Compute South countries’ agency in shaping AI research and development.”

Exclusive: Workers at Google DeepMind Push Company to Drop Military Contracts

Nearly 200 workers inside Google DeepMind, the company’s AI division, signed a letter calling on the tech giant to drop its contracts with military organizations earlier this year, according to a copy of the document reviewed by TIME and five people with knowledge of the matter. The letter circulated amid growing concerns inside the AI lab that its technology is being sold to militaries engaged in warfare, in what the workers say is a violation of Google’s own AI rules.

The letter is a sign of a growing dispute within Google between at least some workers in its AI division—which has pledged to never work on military technology—and its Cloud business, which has contracts to sell Google services, including AI developed inside DeepMind, to several governments and militaries including those of Israel and the United States. The signatures represent some 5% of DeepMind’s overall headcount—a small portion to be sure, but a significant level of worker unease for an industry where top machine learning talent is in high demand.

The DeepMind letter, dated May 16 of this year, begins by stating that workers are “concerned by recent reports of Google’s contracts with military organizations.” It does not refer to any specific militaries by name—saying “we emphasize that this letter is not about the geopolitics of any particular conflict.” But it links out to an April report in TIME which revealed that Google has a direct contract to supply cloud computing and AI services to the Israeli Ministry of Defense, under a wider contract with Israel called Project Nimbus. The letter also links to other stories alleging that the Israeli military uses AI to carry out mass surveillance and target selection for its bombing campaign in Gaza, and that Israeli weapons firms are required by the government to buy cloud services from Google and Amazon.

Read More: Exclusive: Google Contract Shows Deal With Israel Defense Ministry

“Any involvement with military and weapon manufacturing impacts our position as leaders in ethical and responsible AI, and goes against our mission statement and stated AI Principles,” the letter that circulated inside Google DeepMind says. (Those principles state the company will not pursue applications of AI that are likely to cause “overall harm,” contribute to weapons or other technologies whose “principal purpose or implementation” is to cause injury, or build technologies “whose purpose contravenes widely accepted principles of international law and human rights.”) The letter says its signatories are concerned with “ensuring that Google’s AI Principles are upheld,” and adds: “We believe [DeepMind’s] leadership shares our concerns.”

A Google spokesperson told TIME: “When developing AI technologies and making them available to customers, we comply with our AI Principles, which outline our commitment to developing technology responsibly. We have been very clear that the Nimbus contract is for workloads running on our commercial cloud by Israeli government ministries, who agree to comply with our Terms of Service and Acceptable Use Policy. This work is not directed at highly sensitive, classified, or military workloads relevant to weapons or intelligence services.”

The letter calls on DeepMind’s leaders to investigate allegations that militaries and weapons manufacturers are Google Cloud users; terminate access to DeepMind technology for military users; and set up a new governance body responsible for preventing DeepMind technology from being used by military clients in the future. Three months on from the letter’s circulation, Google has done none of those things, according to four people with knowledge of the matter. “We have received no meaningful response from leadership,” one said, “and we are growing increasingly frustrated.”

When DeepMind was acquired by Google in 2014, the lab’s leaders extracted a major promise from the search giant: that their AI technology would never be used for military or surveillance purposes. For many years the London-based lab operated with a high degree of independence from Google’s California headquarters. But as the AI race heated up, DeepMind was drawn more tightly into Google proper. A bid by the lab’s leaders in 2021 to secure more autonomy failed, and in 2023 it merged with Google’s other AI team—Google Brain—bringing it closer to the heart of the tech giant. An independent ethics board that DeepMind leaders hoped would govern the uses of the AI lab’s technology ultimately met only once, and was soon replaced by an umbrella Google ethics policy: the AI Principles. While those principles promise that Google will not develop AI that is likely to cause “overall harm,” they explicitly allow the company to develop technologies that may cause harm if it concludes “that the benefits substantially outweigh the risks.” And they do not rule out selling Google’s AI to military clients.

As a result, DeepMind technology has been bundled into Google’s Cloud software and sold to militaries and governments, including Israel and its Ministry of Defense. “While DeepMind may have been unhappy to work on military AI or defense contracts in the past, I do think this isn’t really our decision any more,” one DeepMind employee told TIME in April, asking not to be named because they were not authorized to speak publicly. Several Google workers told TIME in April that for privacy reasons, the company has limited insights into government customers’ use of its infrastructure, meaning that it may be difficult, if not impossible, for Google to check if its acceptable use policy—which forbids users from using its products to engage in “violence that can cause death, serious harm, or injury”—is being broken.

Read More: Google Workers Revolt Over $1.2 Billion Contract With Israel

Google says that Project Nimbus, its contract with Israel, is not “directed at highly sensitive, classified, or military workloads relevant to weapons or intelligence services.” But that response “does not deny the allegations that its technology enables any form of violence or enables surveillance violating internationally accepted norms,” according to the letter that circulated within DeepMind in May. Google’s statement on Project Nimbus “is so specifically unspecific that we are all none the wiser on what it actually means,” one of the letter’s signatories told TIME. 

At a DeepMind town hall event in June, executives were asked to respond to the letter, according to three people with knowledge of the matter. DeepMind’s chief operating officer Lila Ibrahim answered the question. She told employees that DeepMind would not design or deploy any AI applications for weaponry or mass surveillance, and that Google Cloud customers were legally bound by the company’s terms of service and acceptable use policy, according to a set of notes taken during the meeting that were reviewed by TIME. Ibrahim added that she was proud of Google’s track record of advancing safe and responsible AI, and that it was the reason she chose to join, and stay at, the company.

How Will.i.am Is Trying to Reinvent Radio With AI

Will.i.am has been embracing innovative technology for years. Now he is using artificial intelligence in an effort to transform how we listen to the radio.

The musician, entrepreneur and tech investor has launched RAiDiO.FYI, a set of interactive radio stations themed around topics like sport, pop culture, and politics. Each station is fundamentally interactive: tune in and you’ll be welcomed by name by an AI host “live from the ether,” the Black Eyed Peas frontman tells TIME. Hosts talk about their given topic before playing some music. Unlike previous AI-driven musical products, such as Spotify’s AI DJ, RAiDiO.FYI permits two-way communication: At any point you can press a button to speak with the AI persona about whatever comes to mind.

The stations can be accessed on FYI—which stands for Focus Your Ideas—a communication and collaboration app created by FYI.AI, founded by will.i.am in 2020. Each station exists as a “project” within the app. All the relevant content, including the AI host’s script, music, and segments, is loaded in as a “mega prompt” from which the tool—powered by third-party large language models—can draw. AI personas also have limited web browsing capabilities and can pull information from trusted news sources.
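
FYI.AI has not published its implementation, but the “mega prompt” idea can be sketched in a hedged way: the station’s persona, script notes, playlist, and trusted sources are concatenated into one large system prompt that a third-party language model then works from. Everything in this sketch, including the field names and the call_llm placeholder, is an assumption for illustration rather than the app’s actual code.

```python
# Hedged sketch of a "mega prompt": bundle a station's content into one large
# system prompt for a third-party language model. The field names and the
# `call_llm` placeholder are illustrative assumptions, not FYI.AI's code.

STATION = {
    "name": "Politics FYI",
    "host_persona": "An upbeat host who greets each listener by name.",
    "script_notes": "Open with today's top three stories, then invite questions.",
    "playlist": ["Track A", "Track B", "Track C"],
    "trusted_sources": ["https://news.example.com"],
}

def build_mega_prompt(station: dict, listener_name: str) -> str:
    """Concatenate all of the station's content into a single system prompt."""
    return "\n".join([
        f"You are the AI host of the radio station '{station['name']}'.",
        f"Persona: {station['host_persona']}",
        f"Script notes: {station['script_notes']}",
        f"Playlist (announce tracks before playing them): {', '.join(station['playlist'])}",
        f"Only cite information from these sources: {', '.join(station['trusted_sources'])}",
        f"Greet the listener, {listener_name}, by name before starting the show.",
    ])

def call_llm(system_prompt: str, user_message: str) -> str:
    """Placeholder for whichever third-party large language model API is used."""
    raise NotImplementedError

prompt = build_mega_prompt(STATION, "Alex")
# When a listener presses the talk button, their question would be sent along
# with the mega prompt, e.g.: call_llm(prompt, "What's the latest on the election?")
print(prompt)
```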

“This is Act One,” will.i.am told TIME while demonstrating RAiDiO.FYI on Aug. 20, National Radio Day in the U.S. While most of the nine currently available stations have been created by the FYI.AI team, “Act Two,” he says, involves partnerships with creators across the entertainment and media industries.

One such partnership has already been struck with the media platform Earn Your Leisure, to promote the organization’s upcoming “Invest Fest” conference in Atlanta. At the event, a “hyper-curated” station is intended to replace the traditional pamphlet or email that might contain all relevant conference information, like speaker profiles and the event’s lineup. Instead, will.i.am explains, the conference organizers can feed all those details—as well as further information, such as the content of speeches—to the AI station, where it can be interacted with directly.

Will.i.am envisions this idea of an interactive text-to-station applying broadly, beyond just radio and conferencing. “It could be learning for tutors and teachers. It could be books for authors. It could be podcast segments for podcasters. It can be whatever it is the owner of that project [wants] when we partner with them to create that station,” he says, emphasizing that the platform creates fresh possibilities for how people engage with content. “It’s liberating for the content maker, because if you had your sponsorships and your advertiser partners, you’re the one who’s deciding who’s on your broadcast.”

This is not will.i.am’s first foray into the world of AI. The artist, who has been using technology to make music for decades, started thinking seriously about the subject in 2004, when he was introduced to its possibilities by pioneering professor and AI expert Patrick Winston. In January, he launched a radio show on SiriusXM that he co-hosts with an AI called Qd.pi.

He also sits on the World Economic Forum’s Fourth Industrial Revolution Advisory Committee and regularly attends the organization’s annual meetings in Davos to discuss how technology shapes society. He previously served as chip manufacturer Intel’s director of creative innovation, and in 2009 launched the i.am Angel Foundation to support young people studying computer science and robotics in the Los Angeles neighborhood where he was raised.

The Black Eyed Peas’ 2010 music video for the song “Imma Be Rocking That Body” begins with a skit where will.i.am shows off futuristic technology that can replicate any artist’s voice, to his bandmates’ dismay. That technology now exists. FYI’s AI personas may still have the distinctive sound of an AI voice and—like most large language models—have the potential to be influenced by malicious prompting, yet they offer a glimpse of a future that is already taking shape. And it won’t be long before it is not just the station hosts, but the music itself that is AI-generated, will.i.am says.

Mark Zuckerberg Just Intensified the Battle for AI’s Future

The tech industry is currently embroiled in a heated debate over the future of AI: should powerful systems be open-source and freely accessible, or closed and tightly monitored for dangers?

On Tuesday, Meta CEO Mark Zuckerberg fired a salvo into this ongoing battle, releasing not just a new series of powerful AI models but also a manifesto forcefully advocating for the open-source approach. The document, which was widely praised by venture capitalists and tech leaders like Elon Musk and Jack Dorsey, serves as both a philosophical treatise and a rallying cry for proponents of open-source AI development. It arrives as intensifying global efforts to regulate AI have galvanized resistance from open-source advocates, who see some of those potential laws as threats to innovation and accessibility.

At the heart of Meta’s announcement on Tuesday was the release of its latest generation of Llama large language models, the company’s answer to ChatGPT. The biggest of these new models, Meta claims, is the first open-source large language model to reach the so-called “frontier” of AI capabilities.

Meta has taken a very different approach to AI than its competitors OpenAI, Google DeepMind and Anthropic. Those companies sell access to their AIs through web browsers or interfaces known as APIs, a strategy that allows them to protect their intellectual property, monitor the use of their models, and bar bad actors from using them. By contrast, Meta has chosen to open-source the “weights,” or the underlying neural networks, of its Llama models—meaning they can be freely downloaded by anybody and run on their own machines. That strategy has put Meta’s competitors under financial pressure, and has won it many fans in the software world. But Meta’s strategy has also been criticized by many in the field of AI safety, who warn that open-sourcing powerful AI models has already led to societal harms like deepfakes, and could in the future open a Pandora’s box of worse dangers.

In his manifesto, Zuckerberg argues most of those concerns are unfounded and frames Meta’s strategy as a democratizing force in AI development. “Open-source will ensure that more people around the world have access to the benefits and opportunities of AI, that power isn’t concentrated in the hands of a small number of companies, and that the technology can be deployed more evenly and safely across society,” he writes. “It will make the world more prosperous and safer.” 

But while Zuckerberg’s letter presents Meta as on the side of progress, it is also a deft political move. Recent polling suggests that the American public would welcome laws that restrict the development of potentially dangerous AI, even if it means hampering some innovation. And several pieces of AI legislation around the world, including the SB1047 bill in California, and the ENFORCE Act in Washington, D.C., would place limits on the kinds of systems that companies like Meta can open-source, due to safety concerns. Many of the venture capitalists and tech CEOs who celebrated Zuckerberg’s letter after its publication have in recent weeks mounted a growing campaign to shape public opinion against regulations that would constrain open-source AI releases. “This letter is part of a broader trend of some Silicon Valley CEOs and venture capitalists refusing to take responsibility for damages their AI technology may cause,” says Andrea Miotti, the executive director of AI safety group Control AI. “Including catastrophic outcomes.”


The philosophical underpinnings for Zuckerberg’s commitment to open-source, he writes, stem from his company’s long struggle against Apple, which via its iPhone operating system constrains what Meta can build, and which via its App Store takes a cut of Meta’s revenue. He argues that building an open ecosystem—in which Meta’s models become the industry standard due to their customizability and lack of constraints—will benefit both Meta and those who rely on its models, harming only rent-seeking companies who aim to lock in users. (Critics point out, however, that the Llama models, while more accessible than their competitors, still come with usage restrictions that fall short of true open-source principles.) Zuckerberg also argues that closed AI providers have a business model that relies on selling access to their systems—and suggests that their concerns about the dangers of open-source, including lobbying governments against it, may stem from this conflict of interest.

Addressing worries about safety, Zuckerberg writes that open-source AI will be better at addressing “unintentional” types of harm than the closed alternative, due to the nature of transparent systems being more open to scrutiny and improvement. “Historically, open-source software has been more secure for this reason,” he writes. As for intentional harm, like misuse by bad actors, Zuckerberg argues that “large-scale actors” with high compute resources, like companies and governments, will be able to use their own AI to police “less sophisticated actors” misusing open-source systems. “As long as everyone has access to similar generations of models—which open-source promotes—then governments and institutions with more compute resources will be able to check bad actors with less compute,” he writes.

But “not all ‘large actors’ are benevolent,” says Hamza Tariq Chaudhry, a U.S. policy specialist at the Future of Life Institute, a nonprofit focused on AI risk. “The most authoritarian states will likely repurpose models like Llama to perpetuate their power and commit injustices.” Chaudhry, who is originally from Pakistan, adds: “Coming from the Global South, I am acutely aware that AI-powered cyberattacks, disinformation campaigns and other harms pose a much greater danger to countries with nascent institutions and severe resource constraints, far away from Silicon Valley.”

Zuckerberg’s argument also doesn’t address a central worry held by many people concerned with AI safety: the risk that AI could create an “offense-defense asymmetry,” or in other words strengthen attackers while doing little to strengthen defenders. “Zuckerberg’s statements showcase a concerning disregard for basic security in Meta’s approach to AI,” says Miotti, the director of Control AI. “When dealing with catastrophic dangers, it’s a simple fact that offense needs only to get lucky once, but defense needs to get lucky every time. A virus can spread and kill in days, while deploying a treatment can take years.”

Later in his letter, Zuckerberg addresses other worries that open-source AI will allow China to gain access to the most powerful AI models, potentially harming U.S. national security interests. He says he believes that closing off models “will not work and will only disadvantage the U.S. and its allies.” China is good at espionage, he argues, adding that “most tech companies are far from” the level of security that would prevent China from being able to steal advanced AI model weights. “It seems most likely that a world of only closed models results in a small number of big companies plus our geopolitical adversaries having access to leading models, while startups, universities, and small businesses miss out on opportunities,” he writes. “Plus, constraining American innovation to closed development increases the chance that we don’t lead at all.”

Miotti is unimpressed by the argument. “Zuckerberg admits that advanced AI technology is easily stolen by hostile actors,” he says, “but his solution is to just give it to them for free.”

Could AIs become conscious? Right now, we have no way to tell.

Advances in artificial intelligence are making it increasingly difficult to distinguish between uniquely human behaviors and those that can be replicated by machines. Should artificial general intelligence (AGI) arrive in full force—artificial intelligence that surpasses human intelligence—the boundary between human and computer capabilities will diminish entirely.

In recent months, a significant swath of journalistic bandwidth has been devoted to this potentially dystopian topic. If AGI machines develop the ability to consciously experience life, the moral and legal considerations we’ll need to give them will rapidly become unwieldy. They will have feelings to consider, thoughts to share, intrinsic desires, and perhaps fundamental rights as newly minted beings. On the other hand, if AI does not develop consciousness—and instead simply the capacity to out-think us in every conceivable situation—we might find ourselves subservient to a vastly superior yet sociopathic entity.

Neither potential future feels all that cozy, and both require an answer to exceptionally mind-bending questions: What exactly is consciousness? And will it remain a biological trait, or could it ultimately be shared by the AGI devices we’ve created?

Exclusive: U.S. Voters Value Safe AI Development Over Racing Against China, Poll Shows

A large majority of American voters are skeptical of the argument that the U.S. should race ahead to build ever more powerful artificial intelligence, unconstrained by domestic regulations, in an effort to compete with China, according to new polling shared exclusively with TIME.

The findings indicate that American voters disagree with a common narrative advanced by the tech industry, in which CEOs and lobbyists have repeatedly argued the U.S. must tread carefully with AI regulation in order to not hand the advantage to their geopolitical rival. And they reveal a startling level of bipartisan consensus on AI policy, with both Republicans and Democrats in support of the government placing some limits on AI development in favor of safety and national security.

According to the poll, 75% of Democrats and 75% of Republicans believe that “taking a careful controlled approach” to AI—by preventing the release of tools that terrorists and foreign adversaries could use against the U.S.—is preferable to “moving forward on AI as fast as possible to be the first country to get extremely powerful AI.” A majority of voters support more stringent security practices at AI companies, and are worried about the risk of China stealing their most powerful models, the poll shows. 

The poll was carried out in late June by the AI Policy Institute (AIPI), a U.S. nonprofit that advocates for “a more cautious path” in AI development. The findings show that 50% of voters believe the U.S. should use its advantage in the AI race to prevent any country from building a powerful AI system, by enforcing “safety restrictions and aggressive testing requirements.” That’s compared to just 23% who believe the U.S. should try to build powerful AI as fast as possible to outpace China and achieve a decisive advantage over Beijing.

The polling also suggests that voters may be broadly skeptical of “open-source” AI, or the view that tech companies should be allowed to release the source code of their powerful AI models. Some technologists argue that open-source AI encourages innovation and reduces the monopoly power of the biggest tech companies. But others say it is a recipe for danger as AI systems grow more powerful and unpredictable. 

“What I perceive from the polling is that stopping AI development is not seen as an option,” says Daniel Colson, the executive director of the AIPI. “But giving industry free rein is also seen as risky. And so there’s the desire for some third way. And when we present that in the polling—that third path, mitigated AI development with guardrails—is the one that people overwhelmingly want.” 

The survey also shows that 63% of American voters think it should be illegal to export powerful AI models to potential U.S. adversaries like China, including 73% of Republicans and 59% of Democrats. Just 14% of voters disagree. 

A sample of 1,040 Americans was interviewed for the survey, which was representative by education level, gender, race, and the party for which respondents cast their votes in the 2020 presidential election. The stated margin of error is plus or minus 3.4 percentage points.
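
As a rough, assumption-laden sanity check (not part of the pollster’s documentation), the textbook 95% margin of error for a simple random sample of 1,040 respondents works out to about plus or minus 3 percentage points; the slightly larger reported figure is consistent with the adjustments needed to make the sample representative.

```python
import math

n = 1040   # respondents
p = 0.5    # most conservative proportion for a margin-of-error estimate
z = 1.96   # 95% confidence

moe = z * math.sqrt(p * (1 - p) / n)
print(f"±{moe * 100:.1f} percentage points")  # ≈ ±3.0, before any design effects
```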

So far there has been no comprehensive AI regulation in the U.S., with the White House encouraging different government agencies to regulate the technology themselves where it falls under their existing remits. That strategy appears to have been put in jeopardy, however, by a recent Supreme Court ruling that limits the ability of federal agencies to apply broad-brushstroke rules set by Congress to specific, or new, circumstances.

“Congress is so slow to act that there’s a lot of interest in being able to delegate authorities to existing agencies or a new agency, to increase the responsiveness of government” when it comes to AI policy, Colson says. “This [ruling] definitely makes that harder.”

Even if federal AI legislation seems unlikely any time soon, let alone before the 2024 election, recent polling by the AIPI and others suggests that voters aren’t as polarized on AI as they are on other issues facing the nation. Earlier polling by the AIPI found that 75% of Democrats and 80% of Republicans believe that U.S. AI policy should seek to prevent AI from quickly reaching superhuman capabilities. The polls also showed that 83% of Americans believe AI could accidentally cause a catastrophic event, and that 82% prefer slowing down AI development to account for that risk, compared to just 8% who would like to see it accelerated.
