
How AI Is Being Used to Respond to Natural Disasters in Cities

4 November 2024 at 16:01

The number of people living in urban areas has tripled in the last 50 years, meaning that when a major natural disaster such as an earthquake strikes a city, more lives are in danger. Meanwhile, the strength and frequency of extreme weather events have increased—a trend set to continue as the climate warms. That is spurring efforts around the world to develop a new generation of earthquake-monitoring and climate-forecasting systems to make detecting and responding to disasters quicker, cheaper, and more accurate than ever.


On Nov. 6, at the Barcelona Supercomputing Center in Spain, the Global Initiative on Resilience to Natural Hazards through AI Solutions will meet for the first time. The new United Nations initiative aims to guide governments, organizations, and communities in using AI for disaster management.

The initiative builds on nearly four years of groundwork laid by the International Telecommunication Union, the World Meteorological Organization (WMO), and the U.N. Environment Programme, which in early 2021 collectively convened a focus group to begin developing best practices for AI use in disaster management. These include enhancing data collection, improving forecasting, and streamlining communications.

Read more: Cities Are on the Front Line of the ‘Climate-Health Crisis.’ A New Report Provides a Framework for Tackling Its Effects

“What I find exciting is, for one type of hazard, there are so many different ways that AI can be applied and this creates a lot of opportunities,” says Monique Kuglitsch, who chaired the focus group. Take hurricanes, for example: in 2023, researchers showed AI could help policymakers identify the best places to put traffic sensors to detect road blockages after tropical storms in Tallahassee, Fla. And in October, meteorologists used AI weather-forecasting models to accurately predict that Hurricane Milton would make landfall near Siesta Key, Fla. AI is also being used to alert members of the public more efficiently. Last year, the National Weather Service announced a partnership with AI translation company Lilt to help deliver forecasts in Spanish and simplified Chinese, which it says can reduce the time to translate a hurricane warning from an hour to 10 minutes.

Besides helping communities prepare for disasters, AI is also being used to coordinate response efforts. Following both Hurricane Milton and Hurricane Ian, non-profit GiveDirectly used Google’s machine learning models to analyze pre- and post-disaster satellite images to identify the worst-affected areas and prioritize cash grants accordingly. Last year, AI analysis of aerial images was deployed to aid response efforts in cities like Quelimane, Mozambique, after Cyclone Freddy, and Adıyaman, Turkey, after a 7.8-magnitude earthquake.

Read more: How Meteorologists Are Using AI to Forecast Hurricane Milton and Other Storms

Operating early warning systems is primarily a governmental responsibility, but AI climate modeling—and, to a lesser extent, earthquake detection—has become a burgeoning private industry. Start-up SeismicAI says it’s working with the civil protection agencies in the Mexican states of Guerrero and Jalisco to deploy an AI-enhanced network of sensors that would detect earthquakes in real time. Tech giants Google, Nvidia, and Huawei are partnering with European forecasters and say their AI-driven models can generate accurate medium-term forecasts thousands of times more quickly than traditional models, while being less computationally intensive. And in September, IBM partnered with NASA to release a general-purpose open-source model that can be used for various climate-modeling tasks and that runs on a desktop.

AI advances

While machine learning techniques have been incorporated into weather forecasting models for many years, recent advances have allowed many new models to be built with AI from the ground up, improving the accuracy and speed of forecasting. Traditional models, which rely on complex physics-based equations to simulate interactions between water and air in the atmosphere and require supercomputers to run, can take hours to generate a single forecast. In contrast, AI weather models learn to spot patterns by training on decades of climate data, most of which was collected via satellites and ground-based sensors and shared through intergovernmental collaboration.

Both AI and physics-based forecasts work by dividing the world into a three-dimensional grid of boxes and then determining variables like temperature and wind speed for each box. But because AI models are more computationally efficient, they can work with much finer-grained grids. For example, the European Centre for Medium-Range Weather Forecasts’ highest-resolution model breaks the world into 5.5-mile boxes, whereas forecasting startup Atmo offers models with a resolution finer than one square mile. This bump in resolution can allow for more efficient allocation of resources during extreme weather events, which is particularly important for cities, says Johan Mathe, co-founder and CTO of the company, which earlier this year inked deals with the Philippines and the island nation of Tuvalu.
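For a rough sense of what that jump in resolution means computationally, the back-of-the-envelope sketch below (in Python) estimates how many surface grid boxes each resolution implies. It assumes square boxes and uses an approximate figure for Earth's surface area; the numbers are illustrative only, not figures published by ECMWF or Atmo.

# Illustrative approximation: how many surface grid boxes a model must cover
# at a 5.5-mile resolution versus a roughly 1-mile resolution.
EARTH_SURFACE_SQ_MILES = 197_000_000  # approximate surface area of Earth

def surface_boxes(box_edge_miles: float) -> int:
    """Approximate number of square surface boxes with the given edge length."""
    return round(EARTH_SURFACE_SQ_MILES / box_edge_miles ** 2)

coarse = surface_boxes(5.5)  # ~6.5 million boxes at a 5.5-mile grid
fine = surface_boxes(1.0)    # ~197 million boxes at a 1-mile grid
print(f"{coarse:,} vs {fine:,} boxes -> ~{fine / coarse:.0f}x more cells")

Under those assumptions, shrinking the box edge from 5.5 miles to one mile multiplies the number of surface cells by roughly 30, which is why the efficiency of AI models matters for fine-grained forecasts.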

Limitations

AI-driven models are typically only as good as the data they are trained on, which can be a limiting factor in some places. “When you’re in a really high stakes situation, like a disaster, you need to be able to rely on the model output,” says Kuglitsch. Poorer regions—often on the front lines of climate-related disasters—typically have fewer and worse-maintained weather sensors, for example, creating gaps in meteorological data. AI systems trained on this skewed data can be less accurate in the places most vulnerable to disasters. And unlike physics-based models, which follow set rules, AI models increasingly operate as sophisticated ‘black boxes’ as they grow more complex, with the path from input to output becoming less transparent. The U.N. initiative’s focus is on developing guidelines for using AI responsibly. Kuglitsch says standards could, for example, encourage developers to disclose a model’s limitations or ensure systems work across regional boundaries.

The initiative will test its recommendations in the field by collaborating with the Mediterranean and pan-European forecast and Early Warning System Against natural hazards (MedEWSa), a project that spun out of the focus group. “We’re going to be applying the best practices from the focus group and getting a feedback loop going, to figure out which of the best practices are easiest to follow,” Kuglitsch says. One MedEWSa pilot project will explore machine learning to predict the occurrence of wildfires in an area around Athens, Greece. Another will use AI to improve flooding and landslide warnings in the area surrounding Tbilisi, Georgia.

Read more: How the Cement Industry Is Creating Carbon-Negative Building Materials

Meanwhile, private companies like Tomorrow.io are seeking to plug these gaps by collecting their own data. The AI weather forecasting start-up has launched satellites with radar and other meteorological sensors to collect data from regions that lack ground-based sensors, which it combines with historical data to train its models. Tomorrow.io’s technology is being used by New England cities, including Boston, to help officials decide when to salt the roads ahead of snowfall. It’s also used by Uber and Delta Air Lines.

Another U.N. initiative, the Systematic Observations Financing Facility (SOFF), also aims to close the weather data gap by providing financing and technical assistance to poorer countries. Johan Stander, director of services for the WMO, one of SOFF’s partners, says the WMO is working with private AI developers including Google and Microsoft, but stresses the importance of not handing off too much responsibility to AI systems.

“You can’t go to a machine and say, ‘OK, you were wrong. Answer me, what’s going on?’ You still need somebody to take that ownership,” he says. He sees private companies’ role as “supporting the national met services, instead of trying to take them over.”

TIME100 Impact Dinner London: AI Leaders Discuss Responsibility, Regulation, and Text as a ‘Relic of the Past’

17 October 2024 at 01:03

On Wednesday, luminaries in the field of AI gathered at Serpentine North, a former gunpowder store turned exhibition space, for the inaugural TIME100 Impact Dinner London. Following a similar event held in San Francisco last month, the dinner convened influential leaders, experts, and honorees of TIME’s 2023 and 2024 lists of the 100 most influential people in AI—all of whom are playing a role in shaping the future of the technology.


Following a discussion between TIME’s CEO Jessica Sibley and executives from the event’s sponsors—Rosanne Kincaid-Smith, group chief operating officer at Northern Data Group, and Jaap Zuiderveld, Nvidia’s VP of Europe, the Middle East, and Africa—and after the main course had been served, attention turned to a panel discussion.

The panel featured TIME100 AI honorees Jade Leung, CTO at the U.K. AI Safety Institute, an institution established last year to evaluate the capabilities of cutting-edge AI models; Victor Riparbelli, CEO and co-founder of the U.K.-based AI video communications company Synthesia; and Abeba Birhane, a cognitive scientist and adjunct assistant professor at the School of Computer Science and Statistics at Trinity College Dublin, whose research focuses on auditing AI models to uncover empirical harms. Moderated by TIME senior editor Ayesha Javed, the discussion focused on the current state of AI and its associated challenges, the question of who bears responsibility for AI’s impacts, and the potential of AI-generated videos to transform how we communicate.

The panelists’ views on the risks posed by AI reflected their various focus areas. For Leung, whose work involves assessing whether cutting-edge AI models could be used to facilitate cyber, biological, or chemical attacks, and evaluating models for any other harmful capabilities more broadly, the focus was on the need to “get our heads around the empirical data that will tell us much more about what’s coming down the pike and what kind of risks are associated with it.”

Birhane, meanwhile, emphasized what she sees as the “massive hype” around AI’s capabilities and potential to pose existential risk. “These models don’t actually live up to their claims,” she said. Birhane argued that “AI is not just computational calculations. It’s the entire pipeline that makes it possible to build and to sustain systems,” citing the importance of paying attention to where data comes from, the environmental impacts of AI systems (particularly in relation to their energy and water use), and the underpaid labor of data-labellers as examples. “There has to be an incentive for both big companies and for startups to do thorough evaluations on not just the models themselves, but the entire AI pipeline,” she said. Riparbelli suggested that both “fixing the problems already in society today” and thinking about “Terminator-style scenarios” are important and worth paying attention to.

Panelists agreed on the vital importance of evaluations for AI systems, both to understand their capabilities and to discern their shortfalls on issues such as the perpetuation of prejudice. Because of the complexity of the technology and the speed at which the field is moving, “best practices for how you deal with different safety challenges change very quickly,” Leung said, pointing to a “big asymmetry between what is known publicly to academics and to civil society, and what is known within these companies themselves.”

The panelists further agreed that both companies and governments have a role to play in minimizing the risks posed by AI. “There’s a huge onus on companies to continue to innovate on safety practices,” said Leung. Riparbelli agreed, suggesting companies may have a “moral imperative” to ensure their systems are safe. At the same time, “governments have to play a role here. That’s completely non-negotiable,” said Leung.

Equally, Birhane was clear that “effective regulation” based on “empirical evidence” is necessary. “A lot of governments and policy makers see AI as an opportunity, a way to develop the economy for financial gain,” she said, pointing to tensions between economic incentives and the interests of disadvantaged groups. “Governments need to see evaluations and regulation as a mechanism to create better AI systems, to benefit the general public and people at the bottom of society.”

When it comes to global governance, Leung emphasized the need for clarity on what kinds of guardrails would be most desirable, from both a technical and policy perspective. “What are the best practices, standards, and protocols that we want to harmonize across jurisdictions?” she asked. “It’s not a sufficiently-resourced question.” Still, Leung pointed to the fact that China was party to last year’s AI Safety Summit hosted by the U.K. as cause for optimism. “It’s very important to make sure that they’re around the table,” she said. 

One concrete area where we can observe the advance of AI capabilities in real-time is AI-generated video. In a synthetic video created by his company’s technology, Riparbelli’s AI double declared “text as a technology is ultimately transitory and will become a relic of the past.” Expanding on the thought, the real Riparbelli said: “We’ve always strived towards more intuitive, direct ways of communication. Text was the original way we could store and encode information and share time and space. Now we live in a world where for most consumers, at least, they prefer to watch and listen to their content.” 

He envisions a world where AI bridges the gap between text, which is quick to create, and video, which is more labor-intensive but also more engaging. AI will “enable anyone to create a Hollywood film from their bedroom without needing more than their imagination,” he said. The technology poses obvious challenges in terms of its potential for abuse, for example by creating deepfakes or spreading misinformation, but Riparbelli emphasized that his company takes steps to prevent this, noting that “every video, before it gets generated, goes through a content moderation process where we make sure it fits within our content policies.”

Riparbelli suggested that rather than a “technology-centric” approach to AI regulation, the focus should be on designing policies that reduce harmful outcomes. “Let’s focus on the things we don’t want to happen and regulate around those.”

The TIME100 Impact Dinner London: Leaders Shaping the Future of AI was presented by Northern Data Group and Nvidia Europe.

Why Vinod Khosla Is All In on AI

22 September 2024 at 11:00
Vinod Khosla, Founder, Khosla Ventures

(To receive weekly emails of conversations with the world’s top CEOs and decisionmakers, click here.)

When Vinod Khosla had a skiing accident in 2011 that led to an ACL injury in his knee, doctors gave conflicting opinions over his treatment. Frustrated with the healthcare system, the leading venture capitalist argued, in a hotly debated article, that AI algorithms could do the job better than doctors. Since then, Khosla’s firm has invested in a number of robotics and medtech companies, including Rad AI, a radiology tech company. The self-professed techno-optimist still stands by his assertions more than a decade later. “Almost all expertise will be free in an AI model, and we’ll have plenty of these for the benefit of humanity,” he told TIME in an interview in August.


One of Silicon Valley’s most prominent figures, Khosla, 69, co-founded the influential computing company Sun Microsystems in the 1980s; the company was eventually sold to Oracle in 2010. His venture capital firm Khosla Ventures has subsequently placed big bets on green tech, healthcare, and AI startups around the world—including an early investment of $50 million in 2019 in OpenAI. When OpenAI’s CEO, Sam Altman, was briefly fired last year, Khosla was one of the investors who spoke out about wanting Altman back in the top job. “I was very vocal that we needed to get rid of those, frankly, EA [Effective Altruism] nuts, who were really just religious bigots,” he said, referring to the company’s board members who orchestrated the ousting. He pushes back on their concerns: “Humanity faces risks and we have to manage them,” he said, “but that doesn’t mean we completely forgo the benefits of especially powerful technologies like AI.”

Khosla, one of the TIME100 Most Influential People in AI in 2024, is a firm believer that AI can replace jobs, including those performed by teachers and doctors, and enable a future where humans are free from servitude. “Because of AI, we will have enough abundance to choose what to do and what not to do,” he said.

This interview has been condensed and edited for clarity.

Khosla Ventures has been at the forefront of investing in AI and tech. How do you decide what to put your bets on, and what’s your approach to innovation?

I first mentioned AI publicly in 2000, when I said that AI would redefine what it means to be human. Ten years later, I wrote a blog post called “Do we need doctors?” In that post, I focused on how almost all expertise will be free through AI, for the benefit of humanity. In 2014, we made our first deep learning investment around AI for images, and soon after, we invested in AI radiology. In late 2018, we decided to commit to investing in OpenAI. That was a big, big bet for us, and I normally don’t make bets that large. But we want to invest in high-risk technical breakthroughs and science experiments. Our focus here is on what’s bold, early, and impactful. OpenAI was very bold, very early. Nobody was talking about investing in AI and it was obviously very impactful.

You were one of the early investors in OpenAI. What role did you play in bringing Sam Altman back into his role as CEO last year?

I don’t want to go into too much detail as I don’t think I was the pivotal person doing that, but I was definitely very supportive [of Altman]. I wrote a public blog post that Thanksgiving weekend, and I was very vocal that we needed to get rid of those, frankly, EA [Effective Altruism] nuts, who were really just religious bigots. Humanity faces risks and we have to manage them, but that doesn’t mean we completely forgo the benefits of especially powerful technologies like AI.

What risks do you think AI poses now and in 10 years? And how do you propose to manage those risks?

There was a paper from Anthropic that looked at the issue of explainability of these models. We’re nowhere near where we need to be, but that is still making progress. Some researchers are dedicated full-time to this issue of ‘how do you characterize models and how do you get them to behave in the way we want them to behave?’ It’s a complex question, but we will have the technical tools if we put the effort in to ensure safety. In fact, I believe the principal area where national funding in universities should go is researchers doing safety research. I do think explainability will get better and better progressively over the next decade. But to demand it be fully developed before it is deployed would be going too far. For example, KV [Khosla Ventures] is one of the few not assuming that only large language models will work for AI, or that you don’t need other types of AI models. And we are doing that by investing in a U.K. startup called Symbolica AI that’s using a completely different approach to AI. They’ll work in conjunction with language models, but fundamentally, explainability comes for free with those models. Because these will be explainable models, they’ll also be computationally much more efficient if they work. Now there’s a big ‘if’ in whether they work, but that doesn’t mean we shouldn’t try. I’d rather try and fail than fail to try. That is my general philosophy.

You’re saying that explainability can help mitigate the risk. But what onus does it put on the makers of this technology—the Sam Altmans of the world—to ensure that they are listening to this research and integrating that thinking into the technology itself?

I don’t believe any of the major model makers are ignoring it. Obviously, they don’t want to share all the proprietary work they’re doing, and each one has a slightly different approach. And so sharing everything they’re doing after spending billions of dollars is just not a good capitalistic approach, but that does not mean they’re not paying attention. I believe everybody is. And frankly, safety becomes more of an issue when you get to things like robotics. 

You’ve spoken of a future where labor is free and humans are free of servitude. I’m wondering about the flip side of that. When we’re talking about replacing things like primary healthcare with AI, how does that shift the labor market, and how do we reimagine jobs in the future?

It’s very hard to predict everything, and we like to predict everything before we let it happen. But society evolves in a way that’s evolutionary, and these technologies will be evolutionary. I’m very optimistic that every professional will get an AI intern for the next 10 years. We saw that with self-driving cars. Think of it as every software programmer can have a software intern programmer, every physician can have a physician intern, every structural engineer can have a structural engineer intern, and much more care or use of this expertise will be possible with that human oversight that will happen for the next decade. And in fact, the impact of that on the economy should be deflationary, because expertise starts to become cheaper or hugely multiplied. One teacher can do the job of five teachers because five AI interns help them. 

That’s interesting because you’re suggesting almost a coexistence with AI that complements or optimizes the work. But do you see it eventually replacing those jobs?

I think these will be society’s choices, right? It’s too early to tell what’s there, and we know the next decade will be about this internship of AI expertise idea, in conjunction with humans. The average primary care doctor in America sees the average patient once a year. In Australia, it’s four or five times a year because they have a different doctor-patient ratio. Well, America could become like Australia without producing five times more doctors. All these effects are hard to predict, but it’s very clear what the next decade will be like. We’ve seen it in self-driving cars. Apply that model to everything, and then you can let them go and do more and more, and society gets to choose. I do think in the long term, in 30, 40, 50 years, the need to work will disappear. The majority of jobs in this country, in most parts of the world, are not desirable jobs, and I think we will have enough abundance because of AI to choose what to do, and what not to do. Maybe there will be many more kids becoming like Simone Biles or striving to be the next basketball star. I do think society will make most of these choices, not technology, of what is permitted and what isn’t.

You’ve publicly disagreed with Lina Khan’s approach to the FTC. What role can regulators play in this need to strike a balance between investing in radical, untested new technologies at scale, and enforcement and regulation to make sure they are safe to use?

I think regulation has a role to play. How much, and when, are critical nuances. We can’t slow down this development and fall behind China. I’ve been very, very clear and hawkish on China because we are in the race for technology dominance with them. This is not in isolation. The Europeans have sort of regulated themselves out of any technology developments, frankly, around all the major areas, including AI. That’s going too far. But I thought the executive order that President Biden issued was a reasonably balanced one. Many, many people had input into that process, and I think that’s the right balanced hand.

Can you expand on where you see dominance within the global AI race? Do you think countries like Japan and India can become global AI leaders?

In the West, it’s pretty clear there will be a couple of dominant models. Places like Google, OpenAI, Meta, and Anthropic will have state-of-the-art models. So there won’t be 50 players in the West, but there will be a few, a handful, as it currently appears. Now, that doesn’t mean the world has to depend on the American models. In Japan, for example, even the Kanji script is very different, as are their national defense needs. They want to be independent. If AI is going to play a role in national defense, they will have to rely on a Japanese model. The same thing in India. If China has its own model, India will have its own model. And so national models will exist. There’s Mistral in the E.U., and that’s a trend we recognized very early, and we were the first to invest in this idea that countries and regions with large populations will want their own models.

In thinking about these nation models, how do you ensure greater equitable distribution of the benefits of AI around the world?

I do think we have to pay attention to ensuring it, but I’m relatively optimistic it will happen automatically. In India, for example, the government’s Aadhaar payment system has essentially eliminated Visa and MasterCard in their [fee] of 3% on all transactions. I’ve argued that if that same system is the key to providing AI services, a primary care doctor and an AI tutor for everybody should be included in the same service. It wouldn’t cost very much to do it. I actually think many of these will become free government services and much more accessible generally. We’ve seen that happen with other technologies, like the internet. It was expensive in 1996, and now the smartphone has become pretty pervasive in the West and is slowly becoming pervasive in the developing world too.

At TIME100 Impact Dinner, AI Leaders Discuss the Technology’s Transformative Potential

17 September 2024 at 04:55

Inventor and futurist Ray Kurzweil, researcher and Brookings Institution fellow Chinasa T. Okolo, director of the U.S. Artificial Intelligence Safety Institute (AISI) Elizabeth Kelly, and Cognizant CEO Ravi Kumar S discussed the transformative power of AI during a panel at a TIME100 Impact Dinner in San Francisco on Monday. During the discussion, which was moderated by TIME’s editor-in-chief Sam Jacobs, Kurzweil predicted that we will achieve Artificial General Intelligence (AGI), a type of AI that might be smarter than humans, by 2029.


“Nobody really took it seriously until now,” Kurzweil said about AI. “People are convinced it’s going to either endow us with things we’d never had before, or it’s going to kill us.”

Cognizant sponsored Monday’s event, which celebrated the 100 most influential people leading change in AI. The TIME100 AI spotlights computer scientists, business leaders, policymakers, advocates, and others at the forefront of big changes in the industry. Jacobs probed the four panelists—three of whom were named to the 2024 list—about the opportunities and challenges presented by AI’s rapid advancement.

Kumar discussed the potential economic impact of generative AI, citing a new report from Cognizant that says generative AI could add more than a trillion dollars annually to the U.S. economy by 2032. He identified key constraints holding back widespread adoption, including the need for improved accuracy, cost-performance, responsible AI practices, and explainable outputs. “If you don’t get productivity,” he said, “task automation is not going to lead to a business case stacking up behind it.”

Okolo highlighted the growth of AI initiatives in Africa and the Global South, citing the work of professor Vukosi Marivate from the University of Pretoria in South Africa, who has inspired a new generation of researchers within and outside the continent. However, Okolo acknowledged the mixed progress in improving the diversity of languages informing AI models, with grassroots communities in Africa leading the charge despite limited support and funding.

Kurzweil said that he was excited about the potential of simulated biology to revolutionize drug discovery and development. By simulating billions of interactions in a matter of days, he noted, researchers can accelerate the process of finding treatments for diseases like cancer and Alzheimer’s. He also provided a long-term perspective on the exponential growth of computational power, predicting a sharper so-called S-curve (a slow start, then rapid growth before leveling off) for AI disruption compared to previous technological revolutions.

Read more: The TIME100 Most Influential People in AI 2024

Kelly addressed concerns about AI’s potential for content manipulation in the context of the 2024 elections and beyond. “It’s going to matter this year, but it’s going to matter every year more and more as we move forward,” she noted. She added that AISI is working to advance the science to detect synthetically created content and authenticate genuine information.

Kelly also noted that lawmakers have been focusing on AI’s risks and benefits for some time, with initiatives like the AI Bill of Rights and the AI Risk Management Framework. “The president likes to use the phrase ‘promise and peril,’ which I think pretty well captures it, because we are incredibly excited about simulated biology and drug discovery and development while being aware of the flip-side risks,” she said.

As the panel drew to a close, Okolo urged attendees, who included nearly 50 other past and present TIME100 AI honorees, to think critically about how they develop and apply AI and to try to ensure that it reaches people in underrepresented regions in a positive way.

“A lot of times you talk about the benefits that AI has brought, you know, to people. And a lot of these people are honestly concentrated in one region of the world,” she said. “We really have to look back, or maybe, like, step back and think broader,” she implored, asking leaders in the industry to think about people from Africa to South America to South Asia and Southeast Asia. “How can they benefit from these technologies, without necessarily exploiting them in the process?”

The TIME100 Impact Dinner: Leaders Shaping the Future of AI was presented by Cognizant and Northern Data Group.

At TIME100 Impact Dinner, AI Leaders Talk Reshaping the Future of AI

17 September 2024 at 04:55

TIME hosted its inaugural TIME100 Impact Dinner: Leaders Shaping the Future of AI, in San Francisco on Monday evening. The event kicked off a weeklong celebration of the TIME100 AI, a list that recognizes the 100 most influential individuals in artificial intelligence across industries and geographies and showcases the technology’s rapid evolution and far-reaching impact. 

TIME CEO Jessica Sibley set the tone for the evening, highlighting the diversity and dynamism of the 2024 TIME100 AI list. With 91 newcomers relative to last year’s inaugural list and honorees ranging from 15 to 77 years old, the group reflects the field’s explosive growth and its ability to attract talent from all walks of life.


Read More: At TIME100 Impact Dinner, AI Leaders Discuss the Technology’s Transformative Potential

The heart of the evening was three powerful toasts delivered by distinguished AI leaders, each offering a unique perspective on the transformative potential of AI and the responsibilities that come with it.

Reimagining power structures

Amba Kak, co-executive director of the AI Now Institute, delivered a toast that challenged attendees to look beyond the technical aspects of AI and consider its broader societal implications. Kak emphasized the “mirror to the world” quality of AI, reflecting existing power structures and norms through data and design choices.

“The question of ‘what kind of AI we want’ is really an opening to revisit the more fundamental question of ‘what is the kind of world we want, and how can AI get us there?’” Kak said. She highlighted the importance of democratizing AI decision-making, ensuring that those affected by AI systems have a say in their deployment.

Kak said she drew inspiration from frontline workers and advocates pushing back against the misuse of AI, including nurses’ unions staking their claim in clinical AI deployment and artists defending human creativity. Her toast served as a rallying cry for a more inclusive and equitable AI future.


Amplifying creativity and breaking barriers

Comedian, filmmaker, and AI storyteller King Willonius emphasized AI’s role in lowering the barriers to creativity and giving voice to underrepresented communities. Willonius shared his personal journey of discovery with AI-assisted music composition, illustrating how AI can unlock new realms of creative expression.

“AI doesn’t just automate—it amplifies,” he said. “It breaks down barriers, giving voices to those who were too often left unheard.” He highlighted the work of his company, Blerd Factory, in leveraging AI to empower creators from diverse backgrounds.

Willonius’ toast struck a balance between enthusiasm for AI’s creative potential and a call for responsible development. He emphasized the need to guide AI technology in ways that unite rather than divide, envisioning a future where AI fosters empathy and global connection.


Accelerating scientific progress

AMD CEO Lisa Su delivered a toast that underscored AI’s potential to address major global challenges. Su likened the current AI revolution to the dawn of the industrial era or the birth of the internet, emphasizing the unprecedented pace of innovation in the field.

She painted a picture of AI’s transformative potential across various domains, from materials science to climate change research, and said that she was inspired by AI’s applications in healthcare, envisioning a future where AI accelerates disease identification, drug development, and personalized medicine.

“I can see the day when we accelerate our ability to identify diseases, develop therapeutics, and ultimately find cures for the most important illnesses in the world,” Su said. Her toast was a call to action for leaders to seize the moment and work collaboratively to realize AI’s full potential while adhering to principles of transparency, fairness, and inclusion.


The TIME100 Impact Dinner: Leaders Shaping the Future of AI was presented by Cognizant and Northern Data Group.
