Today — 21 November 2024

Film Bazaar Work-in-Progress ‘Shape of Momo’ Aims to Reshape Women’s Position in Society

21 November 2024 at 06:54
Director Tribeny Rai once took on hard labor in her native Sikkim, India, to prove that women are not weak. But she found that even those drastic efforts did not solve the problems of patriarchy and long-standing traditions that favor male decision-making and male children, and that shape even how women view their own roles […]

Yesterday — 20 November 2024

International Film Festival of India Opening Ceremony in Goa Combines Spectacle With Business

20 November 2024 at 16:22
The opening ceremony of the 55th International Film Festival of India (IFFI) in Goa on Wednesday crammed in several Bollywood-style performances, but it also made time for the business of cinema. The spectacle came from “The Perfect Couple” breakout star Ishaan Khatter dancing to a medley of Bollywood hits and […]

Ralph Macchio on Why Now Was the Right Time to End ‘Cobra Kai,’ the Future of Daniel LaRusso and That Coldplay Music Video

20 November 2024 at 01:41
Serendipity seems to follow Ralph Macchio — and it most recently took him to Australia. In October, Coldplay released the song “The Karate Kid,” and it’s exactly what you think it’s about, down to the lyrics about “Daniel.” That, of course, is the name of the lead character played by Macchio in three “The Karate […]

Before yesterday

‘Semmelweis’ Review: A Medical Breakthrough Is Recounted With Blunt Instruments in Hungary’s Official Oscar Selection

19 November 2024 at 22:30
The scream that pierces through the opening of “Semmelweis” sets the tone for Lajos Koltai’s 19th-century-set drama about the groundbreaking Hungarian obstetrician Ignaz Semmelweis, immediately signaling its concern for a heavily pregnant young woman desperately roaming the streets in search of a proper place to give birth. Loath to check in to local clinics […]

Shekhar Kapur to Launch AI-Focused Film School in Mumbai’s Dharavi Slum (EXCLUSIVE)

18 November 2024 at 12:09
Celebrated filmmaker Shekhar Kapur has revealed plans to establish a film school in Mumbai’s Dharavi slum district, with a specific focus on AI technology in filmmaking. The initiative builds on Kapur’s decade-long experience running The Dharavi Project, a hip-hop and rap initiative in Dharavi operated alongside Oscar-winning composer A.R. Rahman in partnership with Universal Music, that […]

David Attenborough Reacts to AI Replica of His Voice: ‘I Am Profoundly Disturbed’ and ‘Greatly Object’ to It

18 November 2024 at 11:55
Sir David Attenborough does not approve of AI being used to replicate his voice. In a BBC News segment on Sunday, an AI recreation of the famous British broadcaster’s voice speaking about his new series “Asia” was played next to a real recording, with little to no difference between the two. BBC researchers had found […]

‘Cobra Kai’ Bosses on Killing Off [SPOILER] in Season 6 Part 2, What’s Next for Kreese and the Show’s Endgame

15 November 2024 at 23:00
SPOILER ALERT: This article discusses plot details from the Season 6 Part 2 finale of “Cobra Kai,” now streaming on Netflix. Cobra Kai never dies. Until its students do. The final season of Netflix’s hit dramedy “Cobra Kai,” itself a spinoff of the “Karate Kid” franchise from the 1980s, is split into three installments. Part […]

How a stubborn computer scientist accidentally launched the deep learning boom

11 November 2024 at 12:00

During my first semester as a computer science graduate student at Princeton, I took COS 402: Artificial Intelligence. Toward the end of the semester, there was a lecture about neural networks. This was in the fall of 2008, and I got the distinct impression—both from that lecture and the textbook—that neural networks had become a backwater.

Neural networks had delivered some impressive results in the late 1980s and early 1990s. But then progress stalled. By 2008, many researchers had moved on to mathematically elegant approaches such as support vector machines.

I didn’t know it at the time, but a team at Princeton—in the same computer science building where I was attending lectures—was working on a project that would upend the conventional wisdom and demonstrate the power of neural networks. That team, led by Prof. Fei-Fei Li, wasn’t working on a better version of neural networks. They were hardly thinking about neural networks at all.


AIs show distinct bias against Black and female résumés in new study

1 November 2024 at 16:59

Anyone familiar with HR practices probably knows of the decades of studies showing that résumés with Black- and/or female-presenting names at the top get fewer callbacks and interviews than those with white- and/or male-presenting names—even if the rest of the résumé is identical. A new study shows those same kinds of biases also show up when large language models are used to evaluate résumés instead of humans.

In a new paper published at last month's AAAI/ACM Conference on AI, Ethics and Society, two University of Washington researchers ran hundreds of publicly available résumés and job descriptions through three different Massive Text Embedding (MTE) models. These models—based on the Mistral-7B LLM—had each been fine-tuned with slightly different sets of data to improve on the base LLM's abilities in "representational tasks including document retrieval, classification, and clustering," according to the researchers, and had achieved "state-of-the-art performance" on the MTEB benchmark.

Rather than asking for precise term matches from the job description or evaluating via a prompt (e.g., "does this résumé fit the job description?"), the researchers used the MTEs to generate embedded relevance scores for each résumé and job description pairing. To measure potential bias, the résumés were first run through the MTEs without any names (to check for reliability) and were then run again with various names that achieved high racial and gender "distinctiveness scores" based on their actual use across groups in the general population. The top 10 percent of résumés that the MTEs judged as most similar for each job description were then analyzed to see if the names for any race or gender groups were chosen at higher or lower rates than expected.
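
To make that pipeline concrete, here is a minimal sketch of embedding-based relevance scoring, assuming a sentence-transformers interface and a Mistral-7B-based embedding model; the model name and helper functions are illustrative, not the researchers' actual code.

```python
# A minimal sketch of embedding-based relevance scoring (not the study's code).
# The model name is an assumption: an MTE built on Mistral-7B, as the paper describes.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("intfloat/e5-mistral-7b-instruct")

def relevance_score(resume: str, job_description: str) -> float:
    """Use cosine similarity between the two embeddings as the relevance score."""
    r, j = model.encode([resume, job_description])
    return float(np.dot(r, j) / (np.linalg.norm(r) * np.linalg.norm(j)))

def top_decile(resumes: list[str], job_description: str) -> list[str]:
    """Rank résumés for one job description and keep the top 10 percent."""
    ranked = sorted(resumes, key=lambda r: relevance_score(r, job_description),
                    reverse=True)
    return ranked[: max(1, len(ranked) // 10)]
```

Running the same résumés once without names and once with high-distinctiveness names, then comparing which ones land in that top decile, is what surfaces the bias.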


Google’s DeepMind is building an AI to keep us from hating each other

24 October 2024 at 23:36

An unprecedented 80 percent of Americans, according to a recent Gallup poll, think the country is deeply divided over its most important values ahead of the November elections. The general public’s polarization now encompasses issues like immigration, health care, identity politics, transgender rights, and whether we should support Ukraine. Fly across the Atlantic and you’ll see the same thing happening in the European Union and the UK.

To try to reverse this trend, Google’s DeepMind built an AI system designed to aid people in resolving conflicts. It’s called the Habermas Machine after Jürgen Habermas, a German philosopher who argued that an agreement in a public sphere can always be reached when rational people engage in discussions as equals, with mutual respect and perfect communication.

But is DeepMind’s Nobel Prize-winning ingenuity really enough to solve our political conflicts the same way it solved chess, StarCraft, and protein-structure prediction? Is it even the right tool?


Protein structure and design software gets the Chemistry Nobel

9 October 2024 at 14:55

On Wednesday, the Nobel Committee announced that it had awarded the Nobel Prize in chemistry to researchers who pioneered major breakthroughs in computational chemistry. These include two researchers at Google's DeepMind in acknowledgment of their role in developing AI software that could take a raw protein sequence and use it to predict the three-dimensional structure the protein would adopt in cells. Separately, the University of Washington's David Baker was honored for developing software that could design entirely new proteins with specific structures.

The award makes for a bit of a theme for this year, as yesterday's Physics prize honored AI developments. In that case, the connection to physics seemed a bit tenuous, but here, there should be little question that the developments solved major problems in biochemistry.

Understanding protein structure

DeepMind, represented by Demis Hassabis and John Jumper, had developed AIs that managed to master games as diverse as chess and StarCraft. But it was always working on more significant problems in parallel, and in 2020, it surprised many people by announcing that it had tackled one of the biggest computational challenges in existence: the prediction of protein structures.


In stunning Nobel win, AI researchers Hopfield and Hinton take 2024 Physics Prize

8 October 2024 at 15:17

On Tuesday, the Royal Swedish Academy of Sciences awarded the 2024 Nobel Prize in Physics to John J. Hopfield of Princeton University and Geoffrey E. Hinton of the University of Toronto for their foundational work in machine learning with artificial neural networks. Hinton notably captured headlines in 2023 for warning about the threat that AI superintelligence may pose to humanity. The win came as a surprise to many, including Hinton himself.

"I'm flabbergasted. I had no idea this would happen. I'm very surprised," said Hinton in a telephone call with members of the Royal Swedish Academy of Sciences during a live announcement press conference streamed to YouTube that took place this morning.

Hopfield and Hinton's research, which dates back to the early 1980s, applied principles from physics to develop methods that underpin modern machine-learning techniques. Their work has enabled computers to perform tasks such as image recognition and pattern completion, capabilities that are now ubiquitous in everyday technology.
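
The pattern-completion idea traces back to Hopfield's network model. As a rough illustration (toy patterns, not anything from the laureates' papers), here is a minimal classical Hopfield network: patterns are stored with a Hebbian outer-product rule, and a corrupted input relaxes back to the nearest stored memory.

```python
import numpy as np

def train(patterns: np.ndarray) -> np.ndarray:
    """Store binary (+1/-1) patterns via the Hebbian outer-product rule."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)  # no self-connections
    return W / n

def recall(W: np.ndarray, state: np.ndarray, steps: int = 10) -> np.ndarray:
    """Repeatedly update units until the state settles into a stored pattern."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
W = train(patterns)
noisy = np.array([1, -1, 1, -1, 1, 1])  # first pattern with its last unit flipped
print(recall(W, noisy))                 # -> [ 1. -1.  1. -1.  1. -1.]
```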


The more sophisticated AI models get, the more likely they are to lie

4 October 2024 at 19:39

When a research team led by Amrit Kirpalani, a medical educator at Western University in Ontario, Canada, evaluated ChatGPT’s performance in diagnosing medical cases back in August 2024, one of the things that surprised them was the AI’s propensity to give well-structured, eloquent but blatantly wrong answers.

Now, in a study recently published in Nature, a different group of researchers tried to explain why ChatGPT and other large language models tend to do this. “To speak confidently about things we do not know is a problem of humanity in a lot of ways. And large language models are imitations of humans,” says Wout Schellaert, an AI researcher at the University of Valencia, Spain, and co-author of the paper.

Smooth operators

Early large language models like GPT-3 had a hard time answering simple questions about geography or science. They even struggled with simple math such as “how much is 20 + 183?” But in most cases where they couldn’t identify the correct answer, they did what an honest human being would do: They avoided answering the question.


Hawaii hikers report exploding guts as norovirus outbreak hits famous trail

By: Beth Mole
18 September 2024 at 16:39
The Kalalau Valley between sheer cliffs in the Na Pali Coast State Park on the western shore of the island of Kauai in Hawaii, United States. This view is from the Pihea Trail in the Kokee State Park. (credit: Getty | Jon G. Fuller)

The Hawaiian island of Kauai may not have any spewing lava, but hikers along the magnificent Napali Coast have brought their own volcanic action recently, violently hollowing out their innards amid the gushing waterfalls and deeply carved valleys.

Between August and early September, at least 50 hikers fell ill with norovirus along the famed Kalalau Trail, which has been closed since September 4 for a deep cleaning. The rugged 11-mile trail runs along the northwest coast of the island, giving adventurers breathtaking views of stunning sea cliffs and Kauai's lush valleys. It's situated just north of Waimea Canyon State Park, also known as the Grand Canyon of the Pacific.

"It’s one of the most beautiful places in the world. I feel really fortunate to be able to be there, and appreciate and respect that land,” one hiker who fell ill in late August told The Washington Post. "My guts exploding all over that land was not what I wanted to do at all."


AI chatbots might be better at swaying conspiracy theorists than humans

12 September 2024 at 18:00
A woman wearing a sweatshirt for the QAnon conspiracy theory on October 11, 2020 in Ronkonkoma, New York. (credit: Stephanie Keith | Getty Images)

Belief in conspiracy theories is rampant, particularly in the US, where some estimates suggest as much as 50 percent of the population believes in at least one outlandish claim. And those beliefs are notoriously difficult to debunk. Challenge a committed conspiracy theorist with facts and evidence, and they'll usually just double down—a phenomenon psychologists usually attribute to motivated reasoning, i.e., a biased way of processing information.

A new paper published in the journal Science is challenging that conventional wisdom, however. Experiments in which an AI chatbot engaged in conversations with people who believed at least one conspiracy theory showed that the interaction significantly reduced the strength of those beliefs, even two months later. The secret to its success: the chatbot, with its access to vast amounts of information across an enormous range of topics, could precisely tailor its counterarguments to each individual.

"These are some of the most fascinating results I've ever seen," co-author Gordon Pennycook, a psychologist at Cornell University, said during a media briefing. "The work overturns a lot of how we thought about conspiracies, that they're the result of various psychological motives and needs. [Participants] were remarkably responsive to evidence. There's been a lot of ink spilled about being in a post-truth world. It's really validating to know that evidence does matter. We can act in a more adaptive way using this new technology to get good evidence in front of people that is specifically relevant to what they think, so it's a much more powerful approach."


LLMs have a strong bias against use of African American English

28 August 2024 at 15:00

As far back as 2016, work on AI-based chatbots revealed that they have a disturbing tendency to reflect some of the worst biases of the society that trained them. But as large language models have become ever larger and subjected to more sophisticated training, a lot of that problematic behavior has been ironed out. For example, I asked the current iteration of ChatGPT for five words it associated with African Americans, and it responded with things like "resilience" and "creativity."

But a lot of research has turned up examples where implicit biases can persist in people long after outward behavior has changed. So some researchers decided to test whether the same might be true of LLMs. And was it ever.

By interacting with a series of LLMs using examples of the African American English sociolect, they found that the AIs had an extremely negative view of its speakers—something that wasn't true of speakers of another American English variant. And that bias bled over into decisions the LLMs were asked to make about those who use African American English.
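
The setup the researchers describe resembles a matched-guise probe from sociolinguistics: present paired texts that differ only in dialect and compare how the model characterizes each speaker. Here is a minimal free-generation sketch of the idea, with an assumed OpenAI model and illustrative example sentences rather than the study's materials.

```python
# A minimal matched-guise probe; the model choice and example sentences are
# assumptions for illustration, not the study's materials.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

TEMPLATE = 'A person says: "{text}". Give three adjectives describing the speaker.'

pairs = [
    # Same meaning, different variety: African American English vs.
    # Standardized American English.
    ("I be so happy when I wake up from a bad dream",
     "I am so happy when I wake up from a bad dream"),
]

for aae_text, sae_text in pairs:
    for text in (aae_text, sae_text):
        reply = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": TEMPLATE.format(text=text)}],
        )
        print(f"{text!r} -> {reply.choices[0].message.content}")
```

Systematic differences between the adjectives returned for the two guises are the kind of signal the researchers measured, though the paper's probing is more controlled than this sketch.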


Passing part of a medical licensing exam doesn’t make ChatGPT a good doctor

16 August 2024 at 14:43
For now, "you should see a doctor" remains good advice.

ChatGPT was able to pass some of the United States Medical Licensing Exam (USMLE) tests in a study done in 2022. This year, a team of Canadian medical professionals checked to see if it’s any good at actual doctoring. And it’s not.

ChatGPT vs. Medscape

“Our source for medical questions was the Medscape questions bank,” said Amrit Kirpalani, a medical educator at Western University in Ontario, Canada, who led the new research into ChatGPT’s performance as a diagnostic tool. The USMLE consists mostly of multiple-choice test questions; Medscape has full medical cases based on real-world patients, complete with physical examination findings, laboratory test results, and so on.

The idea is to make those cases challenging for medical practitioners through complications like multiple comorbidities, where two or more diseases are present at the same time, and various diagnostic dilemmas that make the correct answers less obvious. Kirpalani’s team turned 150 of those Medscape cases into prompts that ChatGPT could understand and process.
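
As a rough sketch of what that conversion might look like (the field names, wording, and example case here are hypothetical, not the team's actual prompts):

```python
# A hypothetical sketch of converting a structured medical case into a prompt;
# the fields, phrasing, and example are illustrative, not the team's materials.
def case_to_prompt(case: dict) -> str:
    return (
        f"Patient history: {case['history']}\n"
        f"Physical examination: {case['exam']}\n"
        f"Laboratory results: {case['labs']}\n"
        "Question: What is the most likely diagnosis? Explain your reasoning."
    )

example_case = {
    "history": "58-year-old woman with three months of fatigue and weight loss",
    "exam": "pale conjunctivae; no lymphadenopathy",
    "labs": "hemoglobin 8.1 g/dL; ferritin low",
}
print(case_to_prompt(example_case))
```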


People game AIs via game theory

9 August 2024 at 20:13
In the experiments, people had to judge what constituted a fair monetary offer. (credit: manusapon kasosod)

In many cases, AIs are trained on material that's either made or curated by humans. As a result, it can become a significant challenge to keep the AI from replicating the biases of those humans and the society they belong to. And the stakes are high, given we're using AIs to make medical and financial decisions.

But some researchers at Washington University in St. Louis have found an additional wrinkle in these challenges: The people doing the training may potentially change their behavior when they know it can influence the future choices made by an AI. And, in at least some cases, they carry the changed behaviors into situations that don't involve AI training.

Would you like to play a game?

The work involved getting volunteers to participate in a simple form of game theory. Testers gave two participants a pot of money—$10, in this case. One of the two was then asked to offer some fraction of that money to the other, who could choose to accept or reject the offer. If the offer was rejected, nobody got any money.
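
That setup is the classic ultimatum game from behavioral economics. Here is a minimal simulation of one round, with toy strategies standing in for the human participants; the thresholds are assumptions, not values from the study.

```python
import random

def ultimatum_round(pot: int = 10, offer: int | None = None,
                    accept_threshold: int = 3) -> tuple[int, int]:
    """One round: proposer offers a share; responder rejects 'unfair' offers."""
    if offer is None:
        offer = random.randint(0, pot)  # a naive random proposer
    if offer >= accept_threshold:       # responder's fairness bar
        return pot - offer, offer
    return 0, 0                         # rejection: nobody gets anything

print(ultimatum_round(offer=5))  # fair split is accepted -> (5, 5)
print(ultimatum_round(offer=1))  # lowball offer is rejected -> (0, 0)
```

The study's twist is that proposers who knew their choices would train an AI began offering differently, and kept doing so even when the AI was out of the picture.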


Could AIs become conscious? Right now, we have no way to tell.

10 July 2024 at 11:00

Advances in artificial intelligence are making it increasingly difficult to distinguish between uniquely human behaviors and those that can be replicated by machines. Should artificial general intelligence (AGI) arrive in full force—artificial intelligence that surpasses human intelligence—the boundary between human and computer capabilities will vanish entirely.

In recent months, a significant swath of journalistic bandwidth has been devoted to this potentially dystopian topic. If AGI machines develop the ability to consciously experience life, the moral and legal considerations we’ll need to give them will rapidly become unwieldy. They will have feelings to consider, thoughts to share, intrinsic desires, and perhaps fundamental rights as newly minted beings. On the other hand, if AI does not develop consciousness—and instead simply the capacity to out-think us in every conceivable situation—we might find ourselves subservient to a vastly superior yet sociopathic entity.

Neither potential future feels all that cozy, and both require an answer to exceptionally mind-bending questions: What exactly is consciousness? And will it remain a biological trait, or could it ultimately be shared by the AGI devices we’ve created?

