
U.S. Gathers Global Group to Tackle AI Safety Amid Growing National Security Concerns

21 November 2024 at 06:00

“AI is a technology like no other in human history,” U.S. Commerce Secretary Gina Raimondo said on Wednesday in San Francisco. “Advancing AI is the right thing to do, but advancing as quickly as possible, just because we can, without thinking of the consequences, isn’t the smart thing to do.”

Raimondo’s remarks came during the inaugural convening of the International Network of AI Safety Institutes, a network of artificial intelligence safety institutes (AISIs) from nine nations as well as the European Commission, brought together by the U.S. Departments of Commerce and State. The event gathered technical experts from government, industry, academia, and civil society to discuss how to manage the risks posed by increasingly capable AI systems.


Raimondo suggested participants keep two principles in mind: “We can’t release models that are going to endanger people,” she said. “Second, let’s make sure AI is serving people, not the other way around.”

Read More: How Commerce Secretary Gina Raimondo Became America’s Point Woman on AI

The convening marks a significant step forward in international collaboration on AI governance. The first AISIs emerged last November during the inaugural AI Safety Summit hosted by the U.K. Both the U.K. and the U.S. announced the formation of their respective AISIs as a means of giving their governments the technical capacity to evaluate the safety of cutting-edge AI models. Other countries followed suit; by May, at another AI Summit in Seoul, Raimondo had announced the creation of the network.

In a joint statement, the members of the International Network of AI Safety Institutes—which includes AISIs from the U.S., U.K., Australia, Canada, France, Japan, Kenya, South Korea, and Singapore—laid out their mission: “to be a forum that brings together technical expertise from around the world,” “…to facilitate a common technical understanding of AI safety risks and mitigations based upon the work of our institutes and of the broader scientific community,” and “…to encourage a general understanding of and approach to AI safety globally, that will enable the benefits of AI innovation to be shared amongst countries at all stages of development.”

In the lead-up to the convening, the U.S. AISI, which serves as the network’s inaugural chair, also announced a new government taskforce focused on the technology’s national security risks. The Testing Risks of AI for National Security (TRAINS) Taskforce brings together representatives from the Departments of Defense, Energy, Homeland Security, and Health and Human Services. It will be chaired by the U.S. AISI, and aim to “identify, measure, and manage the emerging national security and public safety implications of rapidly evolving AI technology,” with a particular focus on radiological and nuclear security, chemical and biological security, cybersecurity, critical infrastructure, and conventional military capabilities.

The push for international cooperation comes at a time of increasing tension around AI development between the U.S. and China, whose absence from the network is notable. In remarks pre-recorded for the convening, Senate Majority Leader Chuck Schumer emphasized the importance of ensuring that the Chinese Communist Party does not get to “write the rules of the road.” Earlier Wednesday, Chinese lab DeepSeek announced a new “reasoning” model thought to be the first to rival OpenAI’s own reasoning model, o1, which the company says is “designed to spend more time thinking” before it responds.

On Tuesday, the U.S.-China Economic and Security Review Commission, which has provided annual recommendations to Congress since 2000, recommended that Congress establish and fund a “Manhattan Project-like program dedicated to racing to and acquiring an Artificial General Intelligence (AGI) capability,” which the commission defined as “systems as good as or better than human capabilities across all cognitive domains” that “would surpass the sharpest human minds at every task.”

Many experts in the field, such as Geoffrey Hinton, who earlier this year won a Nobel Prize in physics for his work on artificial intelligence, have expressed concerns that, should AGI be developed, humanity may not be able to control it, which could lead to catastrophic harm. In a panel discussion at Wednesday’s event, Anthropic CEO Dario Amodei—who believes AGI-like systems could arrive as soon as 2026—cited “loss of control” risks as a serious concern, alongside the risks that future, more capable models are misused by malicious actors to perpetrate bioterrorism or undermine cybersecurity. Responding to a question, Amodei expressed unequivocal support for making the testing of advanced AI systems mandatory, noting “we also need to be really careful about how we do it.”

Meanwhile, practical international collaboration on AI safety is advancing. Earlier in the week, the U.S. and U.K. AISIs shared preliminary findings from their pre-deployment evaluation of an advanced AI model—the upgraded version of Anthropic’s Claude 3.5 Sonnet. The evaluation focused on assessing the model’s biological and cyber capabilities, as well as its performance on software and development tasks, and the efficacy of the safeguards built into it to prevent the model from responding to harmful requests. Both the U.K. and U.S. AISIs found that these safeguards could be “routinely circumvented,” which they noted is “consistent with prior research on the vulnerability of other AI systems’ safeguards.”

The San Francisco convening set out three priority topics that stand to “urgently benefit from international collaboration”: managing risks from synthetic content, testing foundation models, and conducting risk assessments for advanced AI systems. Ahead of the convening, $11 million of funding was announced to support research into how best to mitigate risks from synthetic content (such as the generation and distribution of child sexual abuse material, and the facilitation of fraud and impersonation). The funding was provided by a mix of government agencies and philanthropic organizations, including the Republic of Korea and the Knight Foundation.

While it is unclear how the election victory of Donald Trump will impact the future of the U.S. AISI and American AI policy more broadly, international collaboration on the topic of AI safety is set to continue. The U.K. AISI is hosting another San Francisco-based conference this week, in partnership with the Centre for the Governance of AI, “to accelerate the design and implementation of frontier AI safety frameworks.” And in February, France will host its “AI Action Summit,” following the Summits held in Seoul in May and in the U.K. last November. The 2025 AI Action Summit will gather leaders from the public and private sectors, academia, and civil society, as actors across the world seek to find ways to govern the technology as its capabilities accelerate.

Raimondo on Wednesday emphasized the importance of integrating safety with innovation when it comes to something as rapidly advancing and as powerful as AI. “It has the potential to replace the human mind,” she said. “Safety is good for innovation. Safety breeds trust. Trust speeds adoption. Adoption leads to more innovation. We need that virtuous cycle.”

U.S. Antitrust Regulators Seek to Break Up Google, Force Sale of Chrome Browser

21 November 2024 at 05:05
The Google Chrome logo on a laptop arranged in the Queens borough of New York, U.S. on Nov. 19, 2024

U.S. regulators want a federal judge to break up Google to prevent the company from continuing to squash competition through its dominant search engine after a court found it had maintained an abusive monopoly over the past decade.

The proposed breakup floated in a 23-page document filed late Wednesday by the U.S. Department of Justice calls for sweeping punishments that would include a sale of Google’s industry-leading Chrome web browser and impose restrictions to prevent Android from favoring its own search engine.


A sale of Chrome “will permanently stop Google’s control of this critical search access point and allow rival search engines the ability to access the browser that for many users is a gateway to the internet,” Justice Department lawyers argued in their filing.

Although regulators stopped short of demanding Google sell Android too, they asserted the judge should make it clear the company could still be required to divest its smartphone operating system if its oversight committee continues to see evidence of misconduct.

The broad scope of the recommended penalties underscores how severely regulators operating under President Joe Biden’s administration believe Google should be punished following an August ruling by U.S. District Judge Amit Mehta that branded the company as a monopolist.

The Justice Department decision-makers who will inherit the case after President-elect Donald Trump takes office next year might not be as strident. The Washington, D.C. court hearings on Google’s punishment are scheduled to begin in April and Mehta is aiming to issue his final decision before Labor Day.

If Mehta embraces the government’s recommendations, Google would be forced to sell its 16-year-old Chrome browser within six months of the final ruling. But the company certainly would appeal any punishment, potentially prolonging a legal tussle that has dragged on for more than four years.

Google didn’t have an immediate comment about the filing, but has previously asserted the Justice Department is pushing penalties that extend far beyond the issues addressed in its case.

Besides seeking a Chrome spinoff and a corralling of the Android software, the Justice Department wants the judge to ban Google from forging multibillion-dollar deals to lock in its dominant search engine as the default option on Apple’s iPhone and other devices. It would also ban Google from favoring its own services, such as YouTube or its recently launched artificial intelligence platform, Gemini.

Regulators also want Google to license the search index data it collects from people’s queries to its rivals, giving them a better chance at competing with the tech giant. On the commercial side of its search engine, Google would be required to provide more transparency into how it sets the prices that advertisers pay to be listed near the top of some targeted search results.

Wary of Google’s increasing use of artificial intelligence in its search results, regulators also advised Mehta to ensure websites will be able to shield their content from Google’s AI training techniques.

The measures, if they are ordered, threaten to upend a business expected to generate more than $300 billion in revenue this year.

“The playing field is not level because of Google’s conduct, and Google’s quality reflects the ill-gotten gains of an advantage illegally acquired,” the Justice Department asserted in its recommendations. “The remedy must close this gap and deprive Google of these advantages.”

It’s still possible that the Justice Department could ease off attempts to break up Google, especially if Trump takes the widely expected step of replacing Assistant Attorney General Jonathan Kanter, who was appointed by Biden to oversee the agency’s antitrust division.

Read More: How a Second Trump Administration Will Change the Domestic and World Order

Although the case targeting Google was originally filed during the final months of Trump’s first term in office, Kanter oversaw the high-profile trial that culminated in Mehta’s ruling against Google. Working in tandem with Federal Trade Commission Chair Lina Khan, Kanter took a get-tough stance against Big Tech that triggered other attempted crackdowns on industry powerhouses such as Apple and discouraged many business deals from getting done during the past four years.

Trump recently expressed concerns that a breakup might destroy Google but didn’t elaborate on alternative penalties he might have in mind. “What you can do without breaking it up is make sure it’s more fair,” Trump said last month. Matt Gaetz, the former Republican congressman whom Trump nominated to be the next U.S. Attorney General, has previously called for the breakup of Big Tech companies.

Gaetz faces a tough confirmation hearing.

Read More: Here Are the New Members of Trump’s Administration So Far

This latest filing gave Kanter and his team a final chance to spell out measures that they believe are needed to restore competition in search. It comes six weeks after Justice first floated the idea of a breakup in a preliminary outline of potential penalties.

But Kanter’s proposal is already raising questions about whether regulators seek to impose controls that extend beyond the issues covered in last year’s trial, and—by extension—Mehta’s ruling.

Banning the default search deals that Google now pays more than $26 billion annually to maintain was one of the main practices that troubled Mehta in his ruling.

It’s less clear whether the judge will embrace the Justice Department’s contention that Chrome needs to be spun out of Google, or that Android should be completely walled off from its search engine.

“It is probably going a little beyond,” Syracuse University law professor Shubha Ghosh said of the Chrome breakup. “The remedies should match the harm, it should match the transgression. This does seem a little beyond that pale.”

Google rival DuckDuckGo, whose executives testified during last year’s trial, asserted the Justice Department is simply doing what needs to be done to rein in a brazen monopolist.

“Undoing Google’s overlapping and widespread illegal conduct over more than a decade requires more than contract restrictions: it requires a range of remedies to create enduring competition,” Kamyl Bazbaz, DuckDuckGo’s senior vice president of public affairs, said in a statement.

Trying to break up Google harks back to a similar punishment initially imposed on Microsoft a quarter century ago following another major antitrust trial that culminated in a federal judge deciding the software maker had illegally used its Windows operating system for PCs to stifle competition.

However, an appeals court overturned an order that would have broken up Microsoft, a precedent many experts believe will make Mehta reluctant to go down a similar road with the Google case.


There Is a Solution to AI’s Existential Risk Problem

15 November 2024 at 12:11
AGI Artificial General Intelligence concept image

Technological progress can excite us, politics can infuriate us, and wars can mobilize us. But faced with the risk of human extinction posed by the rise of artificial intelligence, we have remained surprisingly passive. Perhaps this is partly because there did not seem to be a solution. This is an idea I would like to challenge.

AI’s capabilities are ever-improving. Since the release of ChatGPT two years ago, hundreds of billions of dollars have poured into AI. These combined efforts will likely lead to Artificial General Intelligence (AGI), where machines have human-like cognition, perhaps within just a few years.


Hundreds of AI scientists think we might lose control over AI once it gets too capable, which could result in human extinction. So what can we do?

Read More: What Donald Trump’s Win Means For AI

The existential risk of AI has often been presented as extremely complex. A 2018 paper, for example, called the development of safe human-level AI a “super wicked problem.” This perceived difficulty had much to do with the proposed solution of AI alignment, which entails making superhuman AI act according to humanity’s values. AI alignment, however, was a problematic solution from the start.

First, scientific progress in alignment has been much slower than progress in AI itself. Second, the philosophical question of which values to align a superintelligence to is incredibly fraught. Third, it is not at all obvious that alignment, even if successful, would be a solution to AI’s existential risk. Having one friendly AI does not necessarily stop other unfriendly ones.

Because of these issues, many have urged technology companies not to build any AI that humanity could lose control over. Some have gone further: activist groups such as PauseAI have proposed an international treaty that would pause development globally.

That is not seen as politically palatable by many, since it may still take a long time before the missing pieces to AGI are filled in. And do we have to pause already, when this technology can also do a lot of good? Yann LeCun, AI chief at Meta and a prominent existential risk skeptic, says that the existential risk debate is like “worrying about turbojet safety in 1920.”

On the other hand, technology can leapfrog. If we get another breakthrough such as the transformer, a 2017 innovation which helped launch modern Large Language Models, perhaps we could reach AGI in a few months’ training time. That’s why a regulatory framework needs to be in place before then.

Fortunately, Nobel Laureate Geoffrey Hinton, Turing Award winner Yoshua Bengio, and many others have provided a piece of the solution. In a policy paper published in Science earlier this year, they recommended “if-then commitments”: commitments to be activated if and when red-line capabilities are found in frontier AI systems.

Building upon their work, we at the nonprofit Existential Risk Observatory propose a Conditional AI Safety Treaty. Signatory countries of this treaty, which should include at least the U.S. and China, would agree that once we get too close to loss of control they will halt any potentially unsafe training within their borders. Once the most powerful nations have signed this treaty, it is in their interest to verify each other’s compliance, and to make sure uncontrollable AI is not built elsewhere, either.

One outstanding question is at what point AI capabilities are too close to loss of control. We propose to delegate this question to the AI Safety Institutes set up in the U.K., U.S., China, and other countries. They have specialized model evaluation know-how, which can be developed further to answer this crucial question. Also, these institutes are public, making them independent from the mostly private AI development labs. The question of how close is too close to losing control will remain difficult, but someone will need to answer it, and the AI Safety Institutes are best positioned to do so.

We can mostly still get the benefits of AI under the Conditional AI Safety Treaty. All current AI is far below loss of control level, and will therefore be unaffected. Narrow AIs in the future that are suitable for a single task—such as climate modeling or finding new medicines—will be unaffected as well. Even more general AIs can still be developed, if labs can demonstrate to a regulator that their model has loss of control risk less than, say, 0.002% per year (the safety threshold we accept for nuclear reactors). Other AI thinkers, such as MIT professor Max Tegmark, Conjecture CEO Connor Leahy, and ControlAI director Andrea Miotti, are thinking in similar directions.

Fortunately, the existential risks posed by AI are recognized by many close to President-elect Donald Trump. His daughter Ivanka seems to see the urgency of the problem. Elon Musk, a critical Trump backer, has been outspoken about the civilizational risks for many years, and recently supported California’s legislative push to safety-test AI. Even right-wing pundit Tucker Carlson provided common-sense commentary when he said: “So I don’t know why we’re sitting back and allowing this to happen, if we really believe it will extinguish the human race or enslave the human race. Like, how can that be good?” For his part, Trump has expressed concern about the risks posed by AI, too.

The Conditional AI Safety Treaty could provide a solution to AI’s existential risk, while not unnecessarily obstructing AI development right now. Getting China and other countries to accept and enforce the treaty will no doubt be a major geopolitical challenge, but perhaps a Trump government is exactly what is needed to overcome it.

A solution to one of the toughest problems we face—the existential risk of AI—does exist. It is up to us whether we make it happen, or continue to go down the path toward possible human extinction.
