From Disney’s ‘Andor’ to Netflix’s ‘Cobra Kai’ and Max Original ‘Pubertat,’ Catalonia’s Rise As a Preferential Destination for a Wide Range of Top TV Series (Variety)
U.K. Writers Decry ITV’s Plan to Use AI to Generate Show Ideas: ‘They Would Be Better Off Investing in Screenwriters Rather Than Gimmicks’ (Variety)
Cristian Mungiu-Penned ‘Traffic,’ Directed by Teodora Ana Mihai, Wins at Warsaw Film Festival (Variety)
Maori Dialog Favored in Warner Bros’ New Zealand Series ‘Tangata Pai’
Protein structure and design software gets the Chemistry Nobel
On Wednesday, the Nobel Committee announced that it had awarded the Nobel Prize in chemistry to researchers who pioneered major breakthroughs in computational chemistry. These include two researchers at Google's DeepMind in acknowledgment of their role in developing AI software that could take a raw protein sequence and use it to predict the three-dimensional structure the protein would adopt in cells. Separately, the University of Washington's David Baker was honored for developing software that could design entirely new proteins with specific structures.
The award makes for a bit of a theme for this year, as yesterday's Physics prize honored AI developments. In that case, the connection to physics seemed a bit tenuous, but here, there should be little question that the developments solved major problems in biochemistry.
Understanding protein structure
DeepMind, represented by Demis Hassabis and John Jumper, had developed AIs that managed to master games as diverse as chess and StarCraft. But it was always working on more significant problems in parallel, and in 2020, it surprised many people by announcing that it had tackled one of the biggest computational challenges in existence: the prediction of protein structures.
In stunning Nobel win, AI researchers Hopfield and Hinton take 2024 Physics Prize (Ars Technica)
On Tuesday, the Royal Swedish Academy of Sciences awarded the 2024 Nobel Prize in Physics to John J. Hopfield of Princeton University and Geoffrey E. Hinton of the University of Toronto for their foundational work in machine learning with artificial neural networks. Hinton notably captured headlines in 2023 for warning about the threat that AI superintelligence may pose to humanity. The win came as a surprise to many, including Hinton himself.
"I'm flabbergasted. I had no idea this would happen. I'm very surprised," said Hinton in a telephone call with members of the Royal Swedish Academy of Sciences during the live announcement press conference streamed to YouTube on Tuesday morning.
Hopfield and Hinton's research, which dates back to the early 1980s, applied principles from physics to develop methods that underpin modern machine-learning techniques. Their work has enabled computers to perform tasks such as image recognition and pattern completion, capabilities that are now ubiquitous in everyday technology.
Wong Kar-wai’s ‘Blossoms Shanghai,’ Netflix’s ‘Cigarette Girl’ Win Top Prizes at Busan Streaming Awards (Variety)
Artificial Intelligence Ally, Not Foe, Top Asian Executives Emphasize at Busan AI Conference (Variety)
The more sophisticated AI models get, the more likely they are to lie
When a research team led by Amrit Kirpalani, a medical educator at Western University in Ontario, Canada, evaluated ChatGPT’s performance in diagnosing medical cases back in August 2024, one of the things that surprised them was the AI’s propensity to give well-structured, eloquent but blatantly wrong answers.
Now, in a study recently published in Nature, a different group of researchers tried to explain why ChatGPT and other large language models tend to do this. “To speak confidently about things we do not know is a problem of humanity in a lot of ways. And large language models are imitations of humans,” says Wout Schellaert, an AI researcher at the University of Valencia, Spain, and co-author of the paper.
Smooth operators
Early large language models like GPT-3 had a hard time answering simple questions about geography or science. They even struggled with simple math, such as “how much is 20 + 183?” But in most cases where they couldn’t identify the correct answer, they did what an honest human being would do: they avoided answering the question.
OpenAI Announces $6.6 Billion in Funding, Nearly Doubling Valuation to $157 Billion
Europe’s Top TV Commissioners Explain What They’re Looking for in Projects, Partnerships
Hawaii hikers report exploding guts as norovirus outbreak hits famous trail
The Hawaiian island of Kauai may not have any spewing lava, but hikers along the magnificent Napali coast have brought their own volcanic action recently, violently hollowing their innards amid the gushing waterfalls and deeply carved valleys.
Between August and early September, at least 50 hikers fell ill with norovirus along the famed Kalalau Trail, which has been closed since September 4 for a deep cleaning. The rugged 11-mile trail runs along the northwest coast of the island, giving adventurers breathtaking views of stunning sea cliffs and Kauai's lush valleys. It's situated just north of Waimea Canyon State Park, also known as the Grand Canyon of the Pacific.
"It’s one of the most beautiful places in the world. I feel really fortunate to be able to be there, and appreciate and respect that land,” one hiker who fell ill in late August told The Washington Post. "My guts exploding all over that land was not what I wanted to do at all."
AI chatbots might be better at swaying conspiracy theorists than humans
Belief in conspiracy theories is rampant, particularly in the US, where some estimates suggest as much as 50 percent of the population believes in at least one outlandish claim. And those beliefs are notoriously difficult to debunk. Challenge a committed conspiracy theorist with facts and evidence, and they'll usually just double down—a phenomenon psychologists usually attribute to motivated reasoning, i.e., a biased way of processing information.
A new paper published in the journal Science is challenging that conventional wisdom, however. Experiments in which an AI chatbot engaged in conversations with people who believed at least one conspiracy theory showed that the interaction significantly reduced the strength of those beliefs, even two months later. The secret to its success: the chatbot, with its access to vast amounts of information across an enormous range of topics, could precisely tailor its counterarguments to each individual.
"These are some of the most fascinating results I've ever seen," co-author Gordon Pennycook, a psychologist at Cornell University, said during a media briefing. "The work overturns a lot of how we thought about conspiracies, that they're the result of various psychological motives and needs. [Participants] were remarkably responsive to evidence. There's been a lot of ink spilled about being in a post-truth world. It's really validating to know that evidence does matter. We can act in a more adaptive way using this new technology to get good evidence in front of people that is specifically relevant to what they think, so it's a much more powerful approach."
LLMs have a strong bias against use of African American English
As far back as 2016, work on AI-based chatbots revealed that they have a disturbing tendency to reflect some of the worst biases of the society that trained them. But as large language models have become ever larger and subjected to more sophisticated training, a lot of that problematic behavior has been ironed out. For example, I asked the current iteration of ChatGPT for five words it associated with African Americans, and it responded with things like "resilience" and "creativity."
But a lot of research has turned up examples where implicit biases can persist in people long after outward behavior has changed. So some researchers decided to test whether the same might be true of LLMs. And was it ever.
By interacting with a series of LLMs using examples of the African American English sociolect, they found that the AIs had an extremely negative view of its speakers—something that wasn't true of speakers of another American English variant. And that bias bled over into decisions the LLMs were asked to make about those who use African American English.
Passing part of a medical licensing exam doesn’t make ChatGPT a good doctor
ChatGPT was able to pass some of the United States Medical Licensing Exam (USMLE) tests in a study done in 2022. This year, a team of Canadian medical professionals checked to see if it’s any good at actual doctoring. And it’s not.
ChatGPT vs. Medscape
“Our source for medical questions was the Medscape questions bank,” said Amrit Kirpalani, a medical educator at Western University in Ontario, Canada, who led the new research into ChatGPT’s performance as a diagnostic tool. The USMLE consists mostly of multiple-choice test questions; Medscape offers full medical cases based on real-world patients, complete with physical examination findings, laboratory test results, and so on.
The idea is to make those cases challenging for medical practitioners, with complications like multiple comorbidities (two or more diseases present at the same time) and diagnostic dilemmas that make the correct answer less obvious. Kirpalani’s team turned 150 of those Medscape cases into prompts that ChatGPT could understand and process.
People game AIs via game theory
In many cases, AIs are trained on material that's either made or curated by humans. As a result, it can become a significant challenge to keep the AI from replicating the biases of those humans and the society they belong to. And the stakes are high, given we're using AIs to make medical and financial decisions.
But some researchers at Washington University in St. Louis have found an additional wrinkle in these challenges: The people doing the training may potentially change their behavior when they know it can influence the future choices made by an AI. And, in at least some cases, they carry the changed behaviors into situations that don't involve AI training.
Would you like to play a game?
The work involved getting volunteers to participate in a simple form of game theory. Testers gave two participants a pot of money—$10, in this case. One of the two was then asked to offer some fraction of that money to the other, who could choose to accept or reject the offer. If the offer was rejected, nobody got any money.
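The setup described above is the classic ultimatum game from behavioral economics. A minimal sketch of its payoff rules, using the $10 pot from the study; the responder strategies shown are hypothetical illustrations, not ones reported in the paper:

```python
def ultimatum_round(offer, accept):
    """One round of the ultimatum game: the proposer offers `offer`
    dollars out of a $10 pot; `accept(offer)` is the responder's
    decision rule. Returns (proposer_payout, responder_payout)."""
    pot = 10
    if not 0 <= offer <= pot:
        raise ValueError("offer must be between 0 and the pot")
    if accept(offer):
        return pot - offer, offer
    return 0, 0  # a rejection leaves both players with nothing

# A purely payoff-maximizing responder accepts any positive offer...
rational = lambda offer: offer > 0
# ...while a fairness-minded responder rejects offers below $3.
fairness_minded = lambda offer: offer >= 3

print(ultimatum_round(2, rational))          # (8, 2)
print(ultimatum_round(2, fairness_minded))   # (0, 0)
```

The key dynamic the researchers exploited: a rejection is costly for the responder too, so rejecting a low offer only makes sense as a signal—exactly the kind of behavior people may exaggerate when they believe an AI is learning from their choices.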
Could AIs become conscious? Right now, we have no way to tell.
Advances in artificial intelligence are making it increasingly difficult to distinguish between uniquely human behaviors and those that can be replicated by machines. Should artificial general intelligence (AGI) arrive in full force—artificial intelligence that surpasses human intelligence—the boundary between human and computer capabilities will disappear entirely.
In recent months, a significant swath of journalistic bandwidth has been devoted to this potentially dystopian topic. If AGI machines develop the ability to consciously experience life, the moral and legal considerations we’ll need to give them will rapidly become unwieldy. They will have feelings to consider, thoughts to share, intrinsic desires, and perhaps fundamental rights as newly minted beings. On the other hand, if AI does not develop consciousness—and instead simply the capacity to out-think us in every conceivable situation—we might find ourselves subservient to a vastly superior yet sociopathic entity.
Neither potential future feels all that cozy, and both require an answer to exceptionally mind-bending questions: What exactly is consciousness? And will it remain a biological trait, or could it ultimately be shared by the AGI devices we’ve created?
Lightening the load: AI helps exoskeleton work with different strides
Exoskeletons today look like something straight out of sci-fi. But the reality is that they are nowhere near as robust as their fictional counterparts. They’re quite wobbly, and the software policies that regulate how they work take long hours to handcraft—a process that has to be repeated for each individual user.
To bring the technology a bit closer to Avatar’s Skel Suits or Warhammer 40k power armor, a team at North Carolina State University’s Lab of Biomechatronics and Intelligent Robotics used AI to build the first one-size-fits-all exoskeleton that supports walking, running, and stair climbing. Critically, its software adapts itself to new users with no need for any user-specific adjustments. “You just wear it and it works,” says Hao Su, an associate professor and co-author of the study.
Tailor-made robots
An exoskeleton is a robot you wear to aid your movements—it makes walking, running, and other activities less taxing, the same way an e-bike adds extra watts on top of those you generate yourself, making pedaling easier. “The problem is, exoskeletons have a hard time understanding human intentions, whether you want to run or walk or climb stairs. It’s solved with locomotion recognition: systems that recognize human locomotion intentions,” says Su.
Researchers craft smiling robot face from living human skin cells
In a new study, researchers from the University of Tokyo, Harvard University, and the International Research Center for Neurointelligence have unveiled a technique for creating lifelike robotic skin using living human cells. As a proof of concept, the team engineered a small robotic face capable of smiling, covered entirely with a layer of pink living tissue.
The researchers note that using living skin tissue as a robot covering has benefits, as it's flexible enough to convey emotions and can potentially repair itself. "As the role of robots continues to evolve, the materials used to cover social robots need to exhibit lifelike functions, such as self-healing," wrote the researchers in the study.
Shoji Takeuchi, Michio Kawai, Minghao Nie, and Haruka Oda authored the study, titled "Perforation-type anchors inspired by skin ligament for robotic face covered with living skin," which is due for July publication in Cell Reports Physical Science. We learned of the study from a report published earlier this week by New Scientist.