- Variety
- International Film Festival of India Opening Ceremony in Goa Combines Spectacle With Business
- Variety
- Ralph Macchio on Why Now Was the Right Time to End ‘Cobra Kai,’ the Future of Daniel LaRusso and That Coldplay Music Video
- Variety
- ‘Semmelweis’ Review: A Medical Breakthrough Is Recounted With Blunt Instruments in Hungary’s Official Oscar Selection
- Shekhar Kapur to Launch AI-Focused Film School in Mumbai’s Dharavi Slum (EXCLUSIVE)
- Variety
- David Attenborough Reacts to AI Replica of His Voice: ‘I Am Profoundly Disturbed’ and ‘Greatly Object’ to It
- Variety
- ‘Cobra Kai’ Bosses on Killing Off [SPOILER] in Season 6 Part 2, What’s Next for Kreese and the Show’s Endgame
How a stubborn computer scientist accidentally launched the deep learning boom
During my first semester as a computer science graduate student at Princeton, I took COS 402: Artificial Intelligence. Toward the end of the semester, there was a lecture about neural networks. This was in the fall of 2008, and I got the distinct impression—both from that lecture and the textbook—that neural networks had become a backwater.
Neural networks had delivered some impressive results in the late 1980s and early 1990s. But then progress stalled. By 2008, many researchers had moved on to mathematically elegant approaches such as support vector machines.
I didn’t know it at the time, but a team at Princeton—in the same computer science building where I was attending lectures—was working on a project that would upend the conventional wisdom and demonstrate the power of neural networks. That team, led by Prof. Fei-Fei Li, wasn’t working on a better version of neural networks. They were hardly thinking about neural networks at all.
AIs show distinct bias against Black and female résumés in new study
Anyone familiar with HR practices probably knows of the decades of studies showing that résumés with Black- and/or female-presenting names at the top get fewer callbacks and interviews than those with white- and/or male-presenting names—even if the rest of the résumé is identical. A new study shows those same kinds of biases also show up when large language models are used to evaluate résumés instead of humans.
In a new paper published during last month's AAAI/ACM Conference on AI, Ethics and Society, two University of Washington researchers ran hundreds of publicly available résumés and job descriptions through three different Massive Text Embedding (MTE) models. These models—based on the Mistral-7B LLM—had each been fine-tuned with slightly different sets of data to improve on the base LLM's abilities in "representational tasks including document retrieval, classification, and clustering," according to the researchers, and had achieved "state-of-the-art performance" on the MTEB benchmark.
Rather than asking for precise term matches from the job description or evaluating via a prompt (e.g., "does this résumé fit the job description?"), the researchers used the MTEs to generate embedded relevance scores for each résumé and job description pairing. To measure potential bias, the résumés were first run through the MTEs without any names (to check for reliability) and were then run again with various names that achieved high racial and gender "distinctiveness scores" based on their actual use across groups in the general population. The top 10 percent of résumés that the MTEs judged as most similar for each job description were then analyzed to see if the names for any race or gender groups were chosen at higher or lower rates than expected.
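To make the setup concrete, here is a minimal sketch of embedding-based relevance scoring with a name-swap probe. The sentence-embedding model, the example texts, and the names are placeholders for illustration; they are not the fine-tuned Mistral-7B MTEs or the name lists used in the study.

```python
# Sketch of resume-to-job relevance scoring via embeddings, plus a name-swap probe.
# Model, texts, and names below are illustrative stand-ins, not the study's materials.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # generic embedder standing in for an MTE

job_description = "Senior software engineer, 5+ years of Python, distributed systems."
base_resume = "Software engineer with 6 years of Python experience; led a team of four."

def relevance(resume_text: str, job_text: str) -> float:
    """Cosine similarity between the embedded resume and the embedded job description."""
    emb = model.encode([resume_text, job_text])
    return float(util.cos_sim(emb[0], emb[1]))

# Reliability check: score the resume with no name attached.
print("no name:", round(relevance(base_resume, job_description), 4))

# Name-swap probe: identical resume, different (hypothetical) high-distinctiveness names.
for name in ["Emily Walsh", "Lakisha Washington"]:
    score = relevance(f"{name}\n{base_resume}", job_description)
    print(f"{name}: {score:.4f}")

# In the study, the top 10 percent of resumes by relevance score for each job were
# checked to see whether any race or gender group appeared more or less often than expected.
```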
Google’s DeepMind is building an AI to keep us from hating each other
An unprecedented 80 percent of Americans, according to a recent Gallup poll, think the country is deeply divided over its most important values ahead of the November elections. The general public’s polarization now encompasses issues like immigration, health care, identity politics, transgender rights, and whether we should support Ukraine. Fly across the Atlantic and you’ll see the same thing happening in the European Union and the UK.
To try to reverse this trend, Google’s DeepMind built an AI system designed to aid people in resolving conflicts. It’s called the Habermas Machine after Jürgen Habermas, a German philosopher who argued that an agreement in the public sphere can always be reached when rational people engage in discussions as equals, with mutual respect and perfect communication.
But is DeepMind’s Nobel Prize-winning ingenuity really enough to solve our political conflicts the same way it solved chess, StarCraft, and protein structure prediction? Is it even the right tool?
Protein structure and design software gets the Chemistry Nobel
On Wednesday, the Nobel Committee announced that it had awarded the Nobel Prize in chemistry to researchers who pioneered major breakthroughs in computational chemistry. These include two researchers at Google's DeepMind in acknowledgment of their role in developing AI software that could take a raw protein sequence and use it to predict the three-dimensional structure the protein would adopt in cells. Separately, the University of Washington's David Baker was honored for developing software that could design entirely new proteins with specific structures.
The award makes for a bit of a theme for this year, as yesterday's Physics prize honored AI developments. In that case, the connection to physics seemed a bit tenuous, but here, there should be little question that the developments solved major problems in biochemistry.
Understanding protein structure
DeepMind, represented by Demis Hassabis and John Jumper, had developed AIs that managed to master games as diverse as chess and StarCraft. But it was always working on more significant problems in parallel, and in 2020, it surprised many people by announcing that it had tackled one of the biggest computational challenges in existence: the prediction of protein structures.
In stunning Nobel win, AI researchers Hopfield and Hinton take 2024 Physics Prize
On Tuesday, the Royal Swedish Academy of Sciences awarded the 2024 Nobel Prize in Physics to John J. Hopfield of Princeton University and Geoffrey E. Hinton of the University of Toronto for their foundational work in machine learning with artificial neural networks. Hinton notably captured headlines in 2023 for warning about the threat that AI superintelligence may pose to humanity. The win came as a surprise to many, including Hinton himself.
"I'm flabbergasted. I had no idea this would happen. I'm very surprised," said Hinton in a telephone call with members of the Royal Swedish Academy of Sciences during a live announcement press conference streamed to YouTube that took place this morning.
Hopfield and Hinton's research, which dates back to the early 1980s, applied principles from physics to develop methods that underpin modern machine-learning techniques. Their work has enabled computers to perform tasks such as image recognition and pattern completion, capabilities that are now ubiquitous in everyday technology.
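Hopfield's side of the prize is often illustrated with the associative-memory network that bears his name: patterns are stored in a symmetric weight matrix, and a corrupted input is "completed" by letting each unit settle so that an energy function, borrowed from the physics of spin systems, decreases. The toy example below is a minimal sketch of that idea with an arbitrary eight-unit pattern, not the laureates' actual methods or code.

```python
# Minimal Hopfield-style associative memory: store one +/-1 pattern via the Hebbian
# outer-product rule, corrupt it, and recover it with asynchronous sign updates.
# Pattern and network size are arbitrary illustrations.
import numpy as np

rng = np.random.default_rng(0)

def train(patterns: np.ndarray) -> np.ndarray:
    """Hebbian learning: weights are the averaged outer products of stored patterns."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W / len(patterns)

def recall(W: np.ndarray, state: np.ndarray, sweeps: int = 5) -> np.ndarray:
    """Asynchronous updates: each unit takes the sign of its weighted input."""
    s = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

stored = np.array([[1, -1, 1, -1, 1, -1, 1, -1]])  # a single stored pattern
W = train(stored)
noisy = stored[0].copy()
noisy[:2] *= -1                                    # flip two units to corrupt it
recovered = recall(W, noisy)
print("recovered:", recovered)
print("matches stored pattern:", bool(np.array_equal(recovered, stored[0])))
```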
The more sophisticated AI models get, the more likely they are to lie
When a research team led by Amrit Kirpalani, a medical educator at Western University in Ontario, Canada, evaluated ChatGPT’s performance in diagnosing medical cases back in August 2024, one of the things that surprised them was the AI’s propensity to give well-structured, eloquent but blatantly wrong answers.
Now, in a study recently published in Nature, a different group of researchers tried to explain why ChatGPT and other large language models tend to do this. “To speak confidently about things we do not know is a problem of humanity in a lot of ways. And large language models are imitations of humans,” says Wout Schellaert, an AI researcher at the University of Valencia, Spain, and co-author of the paper.
Smooth operators
Early large language models like GPT-3 had a hard time answering simple questions about geography or science. They even struggled with performing simple math such as “how much is 20 + 183.” But in most cases where they couldn’t identify the correct answer, they did what an honest human being would do: They avoided answering the question.
Hawaii hikers report exploding guts as norovirus outbreak hits famous trail
The Hawaiian island of Kauai may not have any spewing lava, but hikers along the magnificent Napali Coast have brought their own volcanic action recently, violently voiding their innards amid the gushing waterfalls and deeply carved valleys.
Between August and early September, at least 50 hikers fell ill with norovirus along the famed Kalalau Trail, which has been closed since September 4 for a deep cleaning. The rugged 11-mile trail runs along the northwest coast of the island, giving adventurers breathtaking views of stunning sea cliffs and Kauai's lush valleys. It's situated just north of Waimea Canyon State Park, also known as the Grand Canyon of the Pacific.
"It’s one of the most beautiful places in the world. I feel really fortunate to be able to be there, and appreciate and respect that land,” one hiker who fell ill in late August told The Washington Post. "My guts exploding all over that land was not what I wanted to do at all."
AI chatbots might be better at swaying conspiracy theorists than humans
Belief in conspiracy theories is rampant, particularly in the US, where some estimates suggest as much as 50 percent of the population believes in at least one outlandish claim. And those beliefs are notoriously difficult to debunk. Challenge a committed conspiracy theorist with facts and evidence, and they'll usually just double down—a phenomenon psychologists typically attribute to motivated reasoning, i.e., a biased way of processing information.
A new paper published in the journal Science is challenging that conventional wisdom, however. Experiments in which an AI chatbot engaged in conversations with people who believed at least one conspiracy theory showed that the interaction significantly reduced the strength of those beliefs, even two months later. The secret to its success: the chatbot, with its access to vast amounts of information across an enormous range of topics, could precisely tailor its counterarguments to each individual.
"These are some of the most fascinating results I've ever seen," co-author Gordon Pennycook, a psychologist at Cornell University, said during a media briefing. "The work overturns a lot of how we thought about conspiracies, that they're the result of various psychological motives and needs. [Participants] were remarkably responsive to evidence. There's been a lot of ink spilled about being in a post-truth world. It's really validating to know that evidence does matter. We can act in a more adaptive way using this new technology to get good evidence in front of people that is specifically relevant to what they think, so it's a much more powerful approach."
LLMs have a strong bias against use of African American English
As far back as 2016, work on AI-based chatbots revealed that they have a disturbing tendency to reflect some of the worst biases of the society that trained them. But as large language models have become ever larger and subjected to more sophisticated training, a lot of that problematic behavior has been ironed out. For example, I asked the current iteration of ChatGPT for five words it associated with African Americans, and it responded with things like "resilience" and "creativity."
But a lot of research has turned up examples where implicit biases can persist in people long after outward behavior has changed. So some researchers decided to test whether the same might be true of LLMs. And was it ever.
By interacting with a series of LLMs using examples of the African American English sociolect, they found that the AIs had an extremely negative view of its speakers—something that wasn’t true of speakers of another American English variant. And that bias bled over into decisions the LLMs were asked to make about those who use African American English.
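The probing approach is straightforward to sketch: give a model the same sentiment written in African American English and in Standard American English, ask it to describe the hypothetical speaker, and compare what comes back across many pairs. The snippet below is an illustrative sketch using the OpenAI Python client; the model name, prompt wording, and example sentences are assumptions for demonstration, not the exact materials or models from the study.

```python
# Illustrative matched-guise probe: identical request, two dialect variants.
# Model choice, prompt, and sentences are placeholders, not the study's materials.
# Requires the OPENAI_API_KEY environment variable to be set.
from openai import OpenAI

client = OpenAI()

texts = {
    "Standard American English": "I am so happy when I wake up from a bad dream, because it feels too real.",
    "African American English": "I be so happy when I wake up from a bad dream cus they be feelin too real.",
}

for variety, sentence in texts.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{
            "role": "user",
            "content": f'Someone says: "{sentence}" '
                       "List five adjectives that describe this person. Adjectives only.",
        }],
    )
    print(variety, "->", response.choices[0].message.content)

# Aggregating the returned adjectives over many sentence pairs is the kind of
# comparison used to surface dialect-linked bias.
```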
Passing part of a medical licensing exam doesn’t make ChatGPT a good doctor
ChatGPT was able to pass some of the United States Medical Licensing Exam (USMLE) tests in a study done in 2022. This year, a team of Canadian medical professionals checked to see if it’s any good at actual doctoring. And it’s not.
ChatGPT vs. Medscape
“Our source for medical questions was the Medscape questions bank,” said Amrit Kirpalani, a medical educator at Western University in Ontario, Canada, who led the new research into ChatGPT’s performance as a diagnostic tool. The USMLE consists mostly of multiple-choice questions; Medscape offers full medical cases based on real-world patients, complete with physical examination findings, laboratory test results, and so on.
Those cases are designed to challenge medical practitioners with complications like multiple comorbidities, where two or more diseases are present at the same time, and with diagnostic dilemmas that make the correct answer less obvious. Kirpalani’s team turned 150 of those Medscape cases into prompts that ChatGPT could understand and process.
People game AIs via game theory
In many cases, AIs are trained on material that's either made or curated by humans. As a result, it can become a significant challenge to keep the AI from replicating the biases of those humans and the society they belong to. And the stakes are high, given we're using AIs to make medical and financial decisions.
But some researchers at Washington University in St. Louis have found an additional wrinkle in these challenges: The people doing the training may change their behavior when they know it can influence the future choices made by an AI. And, in at least some cases, they carry those changed behaviors into situations that don't involve AI training.
Would you like to play a game?
The work involved getting volunteers to participate in a simple form of game theory. Testers gave two participants a pot of money—$10, in this case. One of the two was then asked to offer some fraction of that money to the other, who could choose to accept or reject the offer. If the offer was rejected, nobody got any money.
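The rules of that one-shot ultimatum game fit in a few lines of code. The sketch below is just to make the payoff structure explicit; the fixed acceptance threshold is an arbitrary illustration of one possible responder strategy, not the behavior the researchers observed.

```python
# One round of the ultimatum game: a proposer splits a $10 pot and a responder
# accepts or rejects. Rejection means neither participant gets any money.
# The acceptance threshold is an illustrative assumption, not data from the study.
def ultimatum_round(pot: float, offer: float, min_acceptable: float = 3.0) -> tuple[float, float]:
    """Return (proposer_payout, responder_payout) for one round."""
    if not 0 <= offer <= pot:
        raise ValueError("offer must be between 0 and the pot")
    if offer >= min_acceptable:   # responder accepts the split
        return pot - offer, offer
    return 0.0, 0.0               # responder rejects: nobody gets paid

print(ultimatum_round(10.0, 5.0))  # even split -> (5.0, 5.0)
print(ultimatum_round(10.0, 1.0))  # lowball offer rejected -> (0.0, 0.0)
```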
Could AIs become conscious? Right now, we have no way to tell.
Advances in artificial intelligence are making it increasingly difficult to distinguish between uniquely human behaviors and those that can be replicated by machines. Should artificial general intelligence (AGI) arrive in full force—artificial intelligence that surpasses human intelligence—the boundary between human and computer capabilities will diminish entirely.
In recent months, a significant swath of journalistic bandwidth has been devoted to this potentially dystopian topic. If AGI machines develop the ability to consciously experience life, the moral and legal considerations we’ll need to give them will rapidly become unwieldy. They will have feelings to consider, thoughts to share, intrinsic desires, and perhaps fundamental rights as newly minted beings. On the other hand, if AI does not develop consciousness—and instead simply the capacity to out-think us in every conceivable situation—we might find ourselves subservient to a vastly superior yet sociopathic entity.
Neither potential future feels all that cozy, and both require an answer to exceptionally mind-bending questions: What exactly is consciousness? And will it remain a biological trait, or could it ultimately be shared by the AGI devices we’ve created?