
Yesterday — 21 October 2024

U.K. Writers Decry ITV’s Plan to Use AI to Generate Show Ideas: ‘They Would Be Better Off Investing in Screenwriters Rather Than Gimmicks’

21 October 2024 at 11:18
U.K. public service broadcaster ITV has come under fire after a job advert for a “head of generative AI innovation” went viral. The advert, which was posted on LinkedIn as well as other job sites, says the role will include spearheading “AI-driven innovations in content creation for TV shows, films, and digital-first content across ITV […]

Before yesterday

Cristian Mungiu-Penned ‘Traffic,’ Directed by Teodora Ana Mihai, Wins at Warsaw Film Festival

19 October 2024 at 16:44
Teodora Ana Mihai’s “Traffic” was named the winner of the 40th Warsaw Film Festival on Saturday. The film was written by Cristian Mungiu, who won the Palme d’Or at Cannes with “4 Months, 3 Weeks and 2 Days,” and stars “Happening” lead actor Anamaria Vartolomei. “Traffic” focuses on Romanian immigrants in Belgium, who go from […]

Maori Dialog Favored in Warner Bros’ New Zealand Series ‘Tangata Pai’

15 October 2024 at 11:52
Production has begun on “Tangata Pai,” a Warner Bros. Discovery-backed drama that claims to be the first primetime series in which 30% of the dialog will be in the Māori language. The eight-part series tells the stories of five people whose worlds collide when a bomb is detonated at a peaceful Māori protest against a […]

Protein structure and design software gets the Chemistry Nobel

9 October 2024 at 14:55

On Wednesday, the Nobel Committee announced that it had awarded the Nobel Prize in chemistry to researchers who pioneered major breakthroughs in computational chemistry. These include two researchers at Google's DeepMind in acknowledgment of their role in developing AI software that could take a raw protein sequence and use it to predict the three-dimensional structure the protein would adopt in cells. Separately, the University of Washington's David Baker was honored for developing software that could design entirely new proteins with specific structures.

The award makes for a bit of a theme for this year, as yesterday's Physics prize honored AI developments. In that case, the connection to physics seemed a bit tenuous, but here, there should be little question that the developments solved major problems in biochemistry.

Understanding protein structure

DeepMind, represented by Demis Hassabis and John Jumper, had developed AIs that managed to master games as diverse as chess and StarCraft. But it was always working on more significant problems in parallel, and in 2020, it surprised many people by announcing that it had tackled one of the biggest computational challenges in existence: the prediction of protein structures.


© Johan Jarnestad/The Royal Swedish Academy of Sciences

In stunning Nobel win, AI researchers Hopfield and Hinton take 2024 Physics Prize

8 October 2024 at 15:17

On Tuesday, the Royal Swedish Academy of Sciences awarded the 2024 Nobel Prize in Physics to John J. Hopfield of Princeton University and Geoffrey E. Hinton of the University of Toronto for their foundational work in machine learning with artificial neural networks. Hinton notably captured headlines in 2023 for warning about the threat that AI superintelligence may pose to humanity. The win came as a surprise to many, including Hinton himself.

"I'm flabbergasted. I had no idea this would happen. I'm very surprised," said Hinton in a telephone call with members of the Royal Swedish Academy of Sciences during the live announcement, which was streamed on YouTube.

Hopfield and Hinton's research, which dates back to the early 1980s, applied principles from physics to develop methods that underpin modern machine-learning techniques. Their work has enabled computers to perform tasks such as image recognition and pattern completion, capabilities that are now ubiquitous in everyday technology.
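The pattern-completion capability that made Hopfield's networks famous can be illustrated with a toy sketch: store a binary pattern via Hebbian learning, then recover it from a corrupted cue. This is a minimal illustration of the idea, not the laureates' actual code; the pattern and network size are arbitrary.

```python
import numpy as np

# Minimal Hopfield-network sketch: store +/-1 patterns with a Hebbian
# weight matrix, then recall a stored pattern from a noisy version of it.
def train(patterns):
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)       # Hebbian rule: strengthen co-active units
    np.fill_diagonal(W, 0)        # no self-connections
    return W / len(patterns)

def recall(W, state, steps=10):
    s = state.copy()
    for _ in range(steps):
        s = np.sign(W @ s)        # each unit aligns with its weighted input
        s[s == 0] = 1             # break ties deterministically
    return s

stored = np.array([[1, 1, -1, -1, 1, -1, 1, 1]])
W = train(stored)
noisy = stored[0].copy()
noisy[0] *= -1                    # corrupt one bit
print(recall(W, noisy))           # converges back to the stored pattern
```

With a single stored pattern, one update step is enough to flip the corrupted bit back, which is exactly the "pattern completion" behavior mentioned above.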


© CHRISTOPH BURGSTEDT/SCIENCE PHOTO LIBRARY via Getty Images

Wong Kar-wai’s ‘Blossoms Shanghai,’ Netflix’s ‘Cigarette Girl’ Win Top Prizes at Busan Streaming Awards

6 October 2024 at 12:00
Wong Kar-wai’s series debut “Blossoms Shanghai” won two of the top awards at the Busan International Film Festival‘s sixth annual Asia Contents Awards and Global OTT (streaming) Awards on Sunday. The Tencent Video Chinese series won best creative and best male lead actor for Hu Ge. Kamila Andini and Ifa Isfansyah won best director for […]

Artificial Intelligence Ally, Not Foe, Top Asian Executives Emphasize at Busan AI Conference

6 October 2024 at 04:41
That artificial intelligence (AI) is beneficial rather than harmful for Asia’s creative industries was the tenor of the opening sessions of the AI conference at the Busan Asian Contents and Film Market on Sunday. Jerry Chi, head of Japan at Stability AI, delivered a keynote address on AI innovation in Asian content. Chi showcased Stability AI’s […]

The more sophisticated AI models get, the more likely they are to lie

4 October 2024 at 19:39

When a research team led by Amrit Kirpalani, a medical educator at Western University in Ontario, Canada, evaluated ChatGPT’s performance in diagnosing medical cases back in August 2024, one of the things that surprised them was the AI’s propensity to give well-structured, eloquent but blatantly wrong answers.

Now, in a study recently published in Nature, a different group of researchers tried to explain why ChatGPT and other large language models tend to do this. “To speak confidently about things we do not know is a problem of humanity in a lot of ways. And large language models are imitations of humans,” says Wout Schellaert, an AI researcher at the University of Valencia, Spain, and co-author of the paper.

Smooth operators

Early large language models like GPT-3 had a hard time answering simple questions about geography or science. They even struggled with simple math, such as “how much is 20 + 183.” But in most cases where they couldn’t identify the correct answer, they did what an honest human being would do: they avoided answering the question.


© malerapaso

OpenAI Announces $6.6 Billion in Funding, Nearly Doubling Valuation to $157 Billion

2 October 2024 at 17:26
Artificial-intelligence tech company OpenAI said it has raised $6.6 billion in new funding, giving it a massive post-money valuation of $157 billion, almost double its previous reported valuation of $80 billion earlier this year. The new round of funding was led by venture-capital firm Thrive Capital, with additional investors including Microsoft, Nvidia, SoftBank, Fidelity, Khosla […]

Europe’s Top TV Commissioners Explain What They’re Looking for in Projects, Partnerships

2 October 2024 at 07:58
Commissioners from four of Europe’s top public broadcasters assembled in Madrid on Tuesday for a roundtable discussion about what they’re looking for in potential scripted projects. Hosted by the Iberseries & Platino Industria forum and emceed by María Valenzuela, the panel included speakers Morad Koufane, France Télévisions’ director of international and young adult series; José […]

Hawaii hikers report exploding guts as norovirus outbreak hits famous trail

By: Beth Mole
18 September 2024 at 16:39
The Kalalau Valley between sheer cliffs in the Na Pali Coast State Park on the western shore of the island of Kauai in Hawaii, United States. This view is from the Pihea Trail in the Kokee State Park. (credit: Getty | Jon G. Fuller)

The Hawaiian island of Kauai may not have any spewing lava, but hikers along the magnificent Napali coast have brought their own volcanic action recently, violently hollowing their innards amid the gushing waterfalls and deeply carved valleys.

Between August and early September, at least 50 hikers fell ill with norovirus along the famed Kalalau Trail, which has been closed since September 4 for a deep cleaning. The rugged 11-mile trail runs along the northwest coast of the island, giving adventurers breathtaking views of stunning sea cliffs and Kauai's lush valleys. It's situated just north of Waimea Canyon State Park, also known as the Grand Canyon of the Pacific.

"It’s one of the most beautiful places in the world. I feel really fortunate to be able to be there, and appreciate and respect that land,” one hiker who fell ill in late August told The Washington Post. "My guts exploding all over that land was not what I wanted to do at all."


AI chatbots might be better at swaying conspiracy theorists than humans

12 September 2024 at 18:00
A woman wearing a sweatshirt for the QAnon conspiracy theory on October 11, 2020 in Ronkonkoma, New York. (credit: Stephanie Keith | Getty Images)

Belief in conspiracy theories is rampant, particularly in the US, where some estimates suggest as much as 50 percent of the population believes in at least one outlandish claim. And those beliefs are notoriously difficult to debunk. Challenge a committed conspiracy theorist with facts and evidence, and they'll usually just double down—a phenomenon psychologists usually attribute to motivated reasoning, i.e., a biased way of processing information.

A new paper published in the journal Science is challenging that conventional wisdom, however. Experiments in which an AI chatbot engaged in conversations with people who believed at least one conspiracy theory showed that the interaction significantly reduced the strength of those beliefs, even two months later. The secret to its success: the chatbot, with its access to vast amounts of information across an enormous range of topics, could precisely tailor its counterarguments to each individual.

"These are some of the most fascinating results I've ever seen," co-author Gordon Pennycook, a psychologist at Cornell University, said during a media briefing. "The work overturns a lot of how we thought about conspiracies, that they're the result of various psychological motives and needs. [Participants] were remarkably responsive to evidence. There's been a lot of ink spilled about being in a post-truth world. It's really validating to know that evidence does matter. We can act in a more adaptive way using this new technology to get good evidence in front of people that is specifically relevant to what they think, so it's a much more powerful approach."


LLMs have a strong bias against use of African American English

28 August 2024 at 15:00
(credit: Aurich Lawson | Getty Images)

As far back as 2016, work on AI-based chatbots revealed that they have a disturbing tendency to reflect some of the worst biases of the society that trained them. But as large language models have become ever larger and subjected to more sophisticated training, a lot of that problematic behavior has been ironed out. For example, I asked the current iteration of ChatGPT for five words it associated with African Americans, and it responded with things like "resilience" and "creativity."

But a lot of research has turned up examples where implicit biases can persist in people long after outward behavior has changed. So some researchers decided to test whether the same might be true of LLMs. And was it ever.

By interacting with a series of LLMs using examples of the African American English sociolect, they found that the AIs had an extremely negative view of its speakers—something that wasn't true of speakers of another American English variant. And that bias bled over into decisions the LLMs were asked to make about those who use African American English.


Passing part of a medical licensing exam doesn’t make ChatGPT a good doctor

16 August 2024 at 14:43
For now, "you should see a doctor" remains good advice.

ChatGPT was able to pass some of the United States Medical Licensing Exam (USMLE) tests in a study done in 2022. This year, a team of Canadian medical professionals checked to see if it’s any good at actual doctoring. And it’s not.

ChatGPT vs. Medscape

“Our source for medical questions was the Medscape questions bank,” said Amrit Kirpalani, a medical educator at Western University in Ontario, Canada, who led the new research into ChatGPT’s performance as a diagnostic tool. The USMLE consists mostly of multiple-choice questions; Medscape has full medical cases based on real-world patients, complete with physical examination findings, laboratory test results, and so on.

The Medscape cases are designed to be challenging for medical practitioners due to complications like multiple comorbidities (two or more diseases present at the same time) and diagnostic dilemmas that make the correct answer less obvious. Kirpalani’s team turned 150 of those Medscape cases into prompts that ChatGPT could understand and process.
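Converting a structured case into a prompt might look something like the sketch below. The field names, case content, and prompt wording here are illustrative assumptions, not the study's actual pipeline.

```python
# Hedged sketch: turn a structured clinical case into a multiple-choice
# prompt. Field names and wording are illustrative, not from the study.
def case_to_prompt(case, options):
    lines = [
        f"Patient history: {case['history']}",
        f"Physical exam: {case['exam']}",
        f"Lab results: {case['labs']}",
        "Which of the following is the most likely diagnosis?",
    ]
    lines += [f"{letter}. {text}" for letter, text in options.items()]
    return "\n".join(lines)

demo_case = {  # invented example data for illustration only
    "history": "34-year-old with fatigue and joint pain",
    "exam": "malar rash, no synovitis",
    "labs": "ANA positive, low complement",
}
print(case_to_prompt(demo_case, {
    "A": "Systemic lupus erythematosus",
    "B": "Rheumatoid arthritis",
}))
```

The point of such a transformation is to preserve the structured findings (history, exam, labs) that make the case diagnosable while presenting them in plain text the model can process.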


People game AIs via game theory

9 August 2024 at 20:13
In the experiments, people had to judge what constituted a fair monetary offer. (credit: manusapon kasosod)

In many cases, AIs are trained on material that's either made or curated by humans. As a result, it can become a significant challenge to keep the AI from replicating the biases of those humans and the society they belong to. And the stakes are high, given we're using AIs to make medical and financial decisions.

But some researchers at Washington University in St. Louis have found an additional wrinkle in these challenges: The people doing the training may potentially change their behavior when they know it can influence the future choices made by an AI. And, in at least some cases, they carry the changed behaviors into situations that don't involve AI training.

Would you like to play a game?

The work involved getting volunteers to participate in a simple form of game theory. Testers gave two participants a pot of money—$10, in this case. One of the two was then asked to offer some fraction of that money to the other, who could choose to accept or reject the offer. If the offer was rejected, nobody got any money.
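The rounds described above can be sketched in a few lines. The $10 pot comes from the article; the threshold model of the responder's behavior is an illustrative assumption.

```python
# Minimal ultimatum-game sketch: a proposer offers part of a $10 pot and a
# responder accepts or rejects. The acceptance threshold is an assumption;
# the rejection rule (nobody gets paid) is from the experiment described.
POT = 10

def play_round(offer, accept_threshold):
    """Return (proposer_payout, responder_payout) for one round."""
    if offer >= accept_threshold:
        return POT - offer, offer
    return 0, 0  # rejected offer: nobody got any money

print(play_round(5, 3))  # fair offer accepted -> (5, 5)
print(play_round(1, 3))  # lowball offer rejected -> (0, 0)
```

The interesting dynamic for AI training is that rejection is costly for both players, so how often responders reject "unfair" offers encodes a norm the model would learn.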


Could AIs become conscious? Right now, we have no way to tell.

10 July 2024 at 11:00
(credit: BlackJack3D/Getty Images)

Advances in artificial intelligence are making it increasingly difficult to distinguish between uniquely human behaviors and those that can be replicated by machines. Should artificial general intelligence (AGI) arrive in full force—artificial intelligence that surpasses human intelligence—the boundary between human and computer capabilities will diminish entirely.

In recent months, a significant swath of journalistic bandwidth has been devoted to this potentially dystopian topic. If AGI machines develop the ability to consciously experience life, the moral and legal considerations we’ll need to give them will rapidly become unwieldy. They will have feelings to consider, thoughts to share, intrinsic desires, and perhaps fundamental rights as newly minted beings. On the other hand, if AI does not develop consciousness—and instead simply the capacity to out-think us in every conceivable situation—we might find ourselves subservient to a vastly superior yet sociopathic entity.

Neither potential future feels all that cozy, and both require an answer to exceptionally mind-bending questions: What exactly is consciousness? And will it remain a biological trait, or could it ultimately be shared by the AGI devices we’ve created?


Lightening the load: AI helps exoskeleton work with different strides

1 July 2024 at 17:31
Right now, the software doesn't do arms, so don't go taking on any aliens with it. (credit: 20th Century Fox)

Exoskeletons today look like something straight out of sci-fi. But the reality is they are nowhere near as robust as their fictional counterparts. They’re quite wobbly, and it takes long hours of handcrafting software policies, which regulate how they work—a process that has to be repeated for each individual user.

To bring the technology a bit closer to Avatar’s Skel Suits or Warhammer 40k power armor, a team at North Carolina State University’s Biomechatronics and Intelligent Robotics Lab used AI to build the first one-size-fits-all exoskeleton that supports walking, running, and stair-climbing. Critically, its software adapts itself to new users with no need for any user-specific adjustments. “You just wear it and it works,” says Hao Su, an associate professor and co-author of the study.

Tailor-made robots

An exoskeleton is a robot you wear to aid your movements—it makes walking, running, and other activities less taxing, the same way an e-bike adds extra watts on top of those you generate yourself, making pedaling easier. “The problem is, exoskeletons have a hard time understanding human intentions, whether you want to run or walk or climb stairs. It’s solved with locomotion recognition: systems that recognize human locomotion intentions,” says Su.
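The locomotion-recognition idea can be sketched as matching current sensor readings against per-activity templates. This is an illustrative toy, not the team's controller; the feature names and values are invented.

```python
import numpy as np

# Toy locomotion-recognition sketch (not the study's method): classify the
# wearer's current activity by nearest-centroid matching of sensor features.
# Feature vectors are [stride frequency (Hz), hip torque (arbitrary units)]
# and all values are made up for illustration.
TEMPLATES = {
    "walk":   np.array([1.0, 0.2]),
    "run":    np.array([2.5, 0.8]),
    "stairs": np.array([0.8, 1.2]),
}

def recognize(features):
    # Pick the activity whose template is closest to the current reading.
    return min(TEMPLATES, key=lambda k: np.linalg.norm(TEMPLATES[k] - features))

print(recognize(np.array([2.4, 0.7])))  # -> run
```

A real controller would use richer sensor streams and a learned model, but the core task is the same: map a window of measurements to a discrete intention the exoskeleton can act on.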


Researchers craft smiling robot face from living human skin cells

28 June 2024 at 15:14
A movable robotic face covered with living human skin cells. (credit: Takeuchi et al.)

In a new study, researchers from the University of Tokyo, Harvard University, and the International Research Center for Neurointelligence have unveiled a technique for creating lifelike robotic skin using living human cells. As a proof of concept, the team engineered a small robotic face capable of smiling, covered entirely with a layer of pink living tissue.

The researchers note that using living skin tissue as a robot covering has benefits, as it's flexible enough to convey emotions and can potentially repair itself. "As the role of robots continues to evolve, the materials used to cover social robots need to exhibit lifelike functions, such as self-healing," wrote the researchers in the study.

Shoji Takeuchi, Michio Kawai, Minghao Nie, and Haruka Oda authored the study, titled "Perforation-type anchors inspired by skin ligament for robotic face covered with living skin," which is due for July publication in Cell Reports Physical Science. We learned of the study from a report published earlier this week by New Scientist.


Researchers describe how to tell if ChatGPT is confabulating

20 June 2024 at 19:32
(credit: Aurich Lawson | Getty Images)

It's one of the world's worst-kept secrets that large language models give blatantly false answers to queries and do so with a confidence that's indistinguishable from when they get things right. There are a number of reasons for this. The AI could have been trained on misinformation; the answer could require some extrapolation from facts that the LLM isn't capable of; or some aspect of the LLM's training might have incentivized a falsehood.

But perhaps the simplest explanation is that an LLM doesn't recognize what constitutes a correct answer but is compelled to provide one. So it simply makes something up, a habit that has been termed confabulation.

Figuring out when an LLM is making something up would obviously have tremendous value, given how quickly people have started relying on them for everything from college essays to job applications. Now, researchers from the University of Oxford say they've found a relatively simple way to determine when LLMs appear to be confabulating that works with all popular models and across a broad range of subjects. And, in doing so, they developed evidence that most of the alternative facts LLMs provide are a product of confabulation.
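The article doesn't spell out the method, but a related uncertainty signal is easy to sketch: sample several answers to the same question and measure how much they disagree. This is an illustrative simplification, not the Oxford paper's exact algorithm (which works at the level of meaning rather than exact strings).

```python
import math
from collections import Counter

# Illustrative confabulation signal: the entropy of repeated answers to the
# same question. Consistent answers give low entropy; scattered answers
# suggest the model is making something up rather than reporting knowledge.
def answer_entropy(answers):
    total = len(answers)
    return -sum(
        (count / total) * math.log2(count / total)
        for count in Counter(answers).values()
    )

print(answer_entropy(["Paris"] * 5))                    # zero: consistent
print(answer_entropy(["1912", "1915", "1908", "1912"])) # high: likely guessing
```

A real system would first cluster paraphrases ("Paris" and "the capital of France" should count as one answer) before computing the entropy, which is the hard part the researchers address.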

