Qubit that makes most errors obvious now available to customers

We're nearing the end of the year, and there is typically a flood of announcements regarding quantum computers around now, in part because some companies want to live up to promised schedules. Most of these involve evolutionary improvements on previous generations of hardware. But this year, we have something new: the first company to market with a new qubit technology.

The technology is called a dual-rail qubit, and it is intended to make the most common form of error trivially easy to detect in hardware, thus making error correction far more efficient. And, while tech giant Amazon has been experimenting with them, a startup called Quantum Circuits is the first to give the public access to dual-rail qubits via a cloud service.
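The core idea can be sketched with a toy model. A dual-rail qubit stores its state in which of two modes holds a single photon, so losing that photon (the dominant error in superconducting hardware) leaves both modes empty, a state outside the codespace that can be flagged directly. The minimal Monte Carlo below illustrates this principle; the loss rate and step count are arbitrary illustrative values, not figures from Quantum Circuits' hardware.

```python
import random

def step(state, p_loss=0.1):
    """One noisy time step on a dual-rail qubit.

    state is a tuple (n_a, n_b): photon number in each of the two modes.
    Codewords are (1, 0) and (0, 1); with probability p_loss the photon
    is lost, leaving (0, 0), a state outside the codespace.
    """
    if sum(state) == 1 and random.random() < p_loss:
        return (0, 0)
    return state

def detect_erasure(state):
    """Photon loss is heralded: total occupation of zero flags the error."""
    return sum(state) == 0

random.seed(0)
trials = 10_000
flagged = 0
for _ in range(trials):
    s = (1, 0)          # encode logical |0>
    for _ in range(5):  # five noisy steps
        s = step(s)
    if detect_erasure(s):
        flagged += 1

# Every loss event is converted into a detectable erasure; none pass silently.
print(f"erasures flagged in {flagged} of {trials} runs")
```

Because loss shows up as a heralded "erasure" at a known location, an error correction code has far less uncertainty to resolve than it would with silent bit flips.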

While the tech is interesting on its own, it also provides us with a window into how the field as a whole is thinking about getting error-corrected quantum computing to work.

© Quantum Circuits

Microsoft and Atom Computing combine for quantum error correction demo

In September, Microsoft made an unusual combination of announcements. It demonstrated progress with quantum error correction, something that will be needed for the technology to move much beyond the interesting demo phase, using hardware from a quantum computing startup called Quantinuum. At the same time, however, the company also announced that it was forming a partnership with a different startup, Atom Computing, which uses a different technology to make qubits available for computations.

Given that, it was probably inevitable that the folks in Redmond, Washington, would want to show that similar error correction techniques would also work with Atom Computing's hardware. It didn't take long, as the two companies are releasing a draft manuscript describing their work on error correction today. The paper serves both as a good summary of where things currently stand in the world of error correction and as a good look at some of the distinct features of computation using neutral atoms.

Atoms and errors

While we have various technologies that provide a way of storing and manipulating bits of quantum information, none of them can be operated error-free. At present, errors make it difficult to perform even the simplest computations that are clearly beyond the capabilities of classical computers. More sophisticated algorithms would inevitably encounter an error before they could be completed, a situation that would remain true even if we could somehow improve the hardware error rates of qubits by a factor of 1,000—something we're unlikely to ever be able to do.
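A rough calculation shows why error rates are so limiting. If each operation fails independently with probability p, an algorithm needs every one of its operations to succeed, which becomes vanishingly unlikely for long programs. The figures below are illustrative round numbers, not measurements from any specific processor.

```python
def p_clean(error_rate, n_ops):
    """Probability that an algorithm of n_ops operations sees no error,
    assuming independent errors at a fixed per-operation rate."""
    return (1 - error_rate) ** n_ops

# Illustrative figures: ~1e-3 is in the ballpark of today's per-gate error rates.
today = p_clean(1e-3, 1_000_000)        # a million-gate algorithm
better = p_clean(1e-6, 1_000_000)       # same algorithm, 1,000x better gates
longer = p_clean(1e-6, 10_000_000_000)  # a ten-billion-gate algorithm

print(f"{today:.3g} {better:.3g} {longer:.3g}")
```

Even with a 1,000-fold improvement, a million-gate run succeeds only about a third of the time, and longer algorithms still fail essentially always, which is why error correction rather than better hardware alone is seen as the path forward.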

© Atom Computing

IBM boosts the amount of computation you can get done on quantum hardware

There's a general consensus that we won't be able to consistently perform sophisticated quantum calculations without the development of error-corrected quantum computing, which is unlikely to arrive until the end of the decade. It's still an open question, however, whether we could perform limited but useful calculations at an earlier point. IBM is one of the companies that's betting the answer is yes, and on Wednesday, it announced a series of developments aimed at making that possible.

On their own, none of the changes being announced are revolutionary. But collectively, changes across the hardware and software stacks have produced much more efficient and less error-prone operations. The net result is a system that supports the most complicated calculations yet on IBM's hardware, leaving the company optimistic that its users will find some calculations where quantum hardware provides an advantage.

Better hardware and software

IBM's early efforts in the quantum computing space saw it ramp up the qubit count rapidly, making it one of the first companies to reach the 1,000-qubit mark. However, each of those qubits had an error rate high enough to ensure that any algorithm that tried to use all of them in a single calculation would inevitably hit an error. Since then, the company's focus has been on improving the performance of smaller processors. Wednesday's announcement was based on the introduction of the second version of its Heron processor, which has 156 qubits (up from an earlier 133 in Revision 1). That's still beyond the capability of simulations on classical computers, provided it can operate with sufficiently low error rates.
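To see why 156 qubits is out of reach for brute-force classical simulation, consider the memory needed just to store the quantum state: a full state vector holds 2^n complex amplitudes. (This quick estimate ignores cleverer simulation methods, such as the tensor-network approaches that have challenged earlier quantum supremacy claims.)

```python
def statevector_bytes(n_qubits, bytes_per_amplitude=16):
    """Memory to store a full state vector: 2**n complex amplitudes,
    each held as two 8-byte floats."""
    return (2 ** n_qubits) * bytes_per_amplitude

# ~50 qubits is roughly where supercomputer memory gives out;
# 156 qubits is IBM's Heron r2.
for n in (50, 156):
    print(f"{n} qubits -> {statevector_bytes(n):.3g} bytes")
```

Fifty qubits already demand petabytes; 156 qubits would require on the order of 10^48 bytes, dozens of orders of magnitude beyond all storage on Earth.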

© IBM Research

What did the snowball Earth look like?

By now, it has been firmly established that the Earth went through a series of global glaciations around 600 million to 700 million years ago, shortly before complex animal life exploded in the Cambrian. Climate models have confirmed that, once enough of a dark ocean is covered by reflective ice, it sets off a cooling feedback that turns the entire planet into an icehouse. And we've found glacial material that was deposited off the coasts in the tropics.

We have an extremely incomplete picture of what these snowball periods looked like, and Antarctic terrain provides different models for what an icehouse continent might look like. But now, researchers have found deposits that they argue were formed beneath a massive ice sheet that was being melted from below by volcanic activity. And, although the deposits are currently in Colorado's Front Range, at the time they resided much closer to the equator.

In the icehouse

Glacial deposits can be difficult to identify in deep time. Massive sheets of ice will scour the terrain down to bare rock, leaving behind loosely consolidated bits of rubble that can easily be swept away after the ice is gone. We can spot when that rubble shows up in ocean deposits to confirm there were glaciers along the coast, but rubble can be difficult to find on land.

© MARK GARLICK/SCIENCE PHOTO LIBRARY

Researchers spot black hole feeding at 40x its theoretical limit

How did supermassive black holes end up at the center of every galaxy? A while back, it wasn't that hard to explain: That's where the highest concentration of matter is, and the black holes had billions of years to feed on it. But as we've looked ever deeper into the Universe's history, we keep finding supermassive black holes, which shortens the timeline for their formation. Rather than making a leisurely meal of nearby matter, these black holes have gorged themselves in a feeding frenzy.

With the advent of the Webb Space Telescope, the problem has pushed up against theoretical limits. The matter falling into a black hole generates radiation, with faster feeding meaning more radiation. And that radiation can drive off nearby matter, choking off the black hole's food supply. That sets a limit on how fast black holes can grow unless matter is somehow fed directly into them. The Webb was used to identify early supermassive black holes that needed to have been pushing against the limit for their entire existence.
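The article doesn't name it, but the limit being described is the Eddington luminosity: the point at which the outward push of radiation on infalling ionized gas balances gravity. For hydrogen, the standard expression is:

```latex
L_{\mathrm{Edd}} = \frac{4 \pi G M m_p c}{\sigma_T}
\approx 1.26 \times 10^{38} \left( \frac{M}{M_\odot} \right) \mathrm{erg\ s^{-1}}
```

Here M is the black hole's mass, m_p the proton mass, and σ_T the Thomson scattering cross-section. Accretion that would radiate much more than this tends to blow away its own fuel supply, which is why sustained feeding at 40 times the limit is so striking.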

But the Webb may have just identified a solution to the dilemma as well. It has spotted a black hole that appears to have been feeding at 40 times the theoretical limit for millions of years, allowing growth at a pace sufficient to build a supermassive black hole.

© NOIRLab/NSF/AURA/J. da Silva/M. Zamani

RFK Jr. claims Trump promised to put him in charge of NIH, CDC, and more

Earlier this week, Robert F. Kennedy, Jr. used a Zoom call to tell his supporters that Donald Trump had promised him "control" of the Department of Health and Human Services (HHS), the federal agency that includes the Centers for Disease Control, the Food and Drug Administration, and the National Institutes of Health, as well as control of the Department of Agriculture. Given Kennedy's support for debunked anti-vaccine nonsense, this represents a potential public health nightmare.

A few days later, Howard Lutnick, a co-chair of Trump's transition team, appeared on CNN to deny that RFK Jr. would be put in charge of HHS. But he followed that with a long rant in which he echoed Kennedy's spurious claims about vaccines. This provides yet another indication of how deeply anti-vaccine activism has become enmeshed with Republican politics, to the point where the situation may be just as bad even if Kennedy isn't appointed.

Trump as Kennedy’s route to power

Kennedy has a long history of misinformation regarding health, with a special focus on vaccines. This includes the extensively debunked suggestion that there is a correlation between vaccinations and autism incidence, and it extends to a general skepticism about vaccine safety. That's mixed with conspiracy theories regarding collusion between federal regulators and pharmaceutical companies.

© Anna Moneymaker / Staff

A how-to for ethical geoengineering research

Over the Northern Hemisphere's summer, the world's temperatures hovered near 1.5° C above pre-industrial temperatures, and the catastrophic weather events that ensued provided a preview of what might be expected to be the new normal before mid-century. And the warming won't stop there; our current emissions trajectory is such that we will double that temperature increase by the time the century is out and continue beyond its end.

This frightening trajectory and its results have led many people to argue that some form of geoengineering is necessary. If we know the effects of that much warming will be catastrophic, why not try canceling some of it out? Unfortunately, the list of "why nots" includes the fact that we don't know how well some of these techniques work or fully understand their unintended consequences. This means more research is required before we put them into practice.

But how do we do that research if there's the risk of unintended consequences? To help guide the process, the American Geophysical Union (AGU) has just released guidelines for ensuring that geoengineering research is conducted ethically.

© Handout / Getty Images

With four more years like 2023, carbon emissions will blow past 1.5° limit

On Thursday, the United Nations' Environmental Programme (UNEP) released a report on what it terms the "emissions gap"—the difference between where we're heading and where we'd need to be to achieve the goals set out in the Paris Agreement. It makes for some pretty grim reading. Given last year's greenhouse gas emissions, we can afford fewer than four similar years before we would exceed the total emissions compatible with limiting the planet's warming to 1.5° C above pre-industrial conditions. Following existing policies out to the turn of the century would leave us facing over 3° C of warming.
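The arithmetic behind "fewer than four similar years" is simple division of the remaining carbon budget by annual emissions. The round figures below are illustrative stand-ins roughly consistent with the report's framing, not its exact numbers:

```python
# Round illustrative figures, not the report's exact values:
remaining_budget_gt = 200   # CO2 budget still compatible with 1.5 degrees C
annual_emissions_gt = 57    # roughly last year's global emissions

years_left = remaining_budget_gt / annual_emissions_gt
print(f"{years_left:.1f} years at current emissions")
```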

The report ascribes this situation to two distinct emissions gaps: one between the goals of the Paris Agreement and what countries have pledged to do, and another between those pledges and the policies they've actually put in place. There are some reasons to think that rapid progress could be made—the six largest greenhouse gas emitters accounted for nearly two-thirds of the global emissions, so it wouldn't take many policy changes to make a big difference. And the report suggests increased deployment of wind and solar could handle over a quarter of the needed emissions reductions.

But so far, progress has been far too limited to cut into global emissions.

© Mario Tama

De-extinction company provides a progress report on thylacine efforts

Colossal, the company founded to try to restore the mammoth to the Arctic tundra, has also decided to tackle a number of other species that have gone extinct relatively recently: the dodo and the thylacine. Because of significant differences in biology, not the least of which is the generation time of Proboscideans, these other efforts may reach many critical milestones well in advance of the work on mammoths.

Late last week, Colossal released a progress report on the work involved in resurrecting the thylacine, also known as the Tasmanian tiger, which went extinct when the last known survivor died in a zoo in 1936. Marsupial biology has some features that may make de-extinction somewhat easier, but we have far less sophisticated ways of manipulating it compared to the technology we've developed for working with the stem cells and reproduction of placental mammals. But, based on these new announcements, the technology available for working with marsupials is expanding rapidly.

Cane toad resistance

Colossal has branched out from its original de-extinction mission to include efforts to keep species from ever needing its services. In the case of marsupial predators, the de-extinction effort is incorporating work that will benefit existing marsupial predators: generating resistance to the toxins found on the cane toad, an invasive species that has spread widely across Australia.

© Universal History Archive

Simple voltage pulse can restore capacity to Li-Si batteries

If you're using a large battery for a specialized purpose—say grid-scale storage or an electric vehicle—then it's possible to tweak the battery chemistry, provide a little bit of excess capacity, and carefully manage its charging and discharging so that it enjoys a long life span. But for consumer electronics, the batteries are smaller, the need for light weight dictates the chemistry, and the demand for quick charging can be higher. So most batteries in our gadgets start to see serious degradation after just a couple of years of use.

A big contributor to that is internal fragmentation of the electrode materials. This leaves some of the electrode material disconnected from the battery's charge-handling system, essentially stranding the material inside the battery and trapping some of the lithium uselessly. Now, researchers have found that, for at least one battery chemistry, it's possible to partially reverse some of this decay, boosting the remaining capacity of the battery by up to 30 percent.

The only problem is that not many batteries use the specific chemistry tested here. But it does show how understanding what's going on inside batteries can provide us with ways to extend their lifespan.

© da-kuk

Amazon joins Google in investing in small modular nuclear power

On Tuesday, Google announced that it had made a power purchase agreement for electricity generated by a small modular nuclear reactor design that hasn't even received regulatory approval yet. Today, it's Amazon's turn. The company's Amazon Web Services (AWS) group has announced three different investments, including one targeting a different startup that has its own design for small, modular nuclear reactors—one that has not yet received regulatory approval.

Unlike Google's deal, which is a commitment to purchase power should the reactors ever be completed, Amazon will lay out some money upfront as part of the agreements. We'll take a look at the deals and technology that Amazon is backing before analyzing why companies are taking a risk on unproven technologies.

Money for utilities and a startup

Two of Amazon's deals are with utilities that serve areas where it already has a significant data center footprint. One of these is Energy Northwest, which is an energy supplier that sends power to utilities in the Pacific Northwest. Amazon is putting up the money for Energy Northwest to study the feasibility of adding small modular reactors to its Columbia Generating Station, which currently houses a single, large reactor. In return, Amazon will get the right to purchase power from an initial installation of four small modular reactors. The site could potentially support additional reactors, which Energy Northwest would be able to use to meet demands from other users.

© X-energy

People think they already know everything they need to make decisions

The world is full of people who have excessive confidence in their own abilities. This is famously captured by the Dunning-Kruger effect, which describes how people who lack expertise in something will necessarily lack the knowledge needed to recognize their own limits. Now, a different set of researchers has come out with what might be viewed as a corollary to Dunning-Kruger: People have a strong tendency to believe that they always have enough data to make an informed decision—regardless of what information they actually have.

The work, done by Hunter Gehlbach, Carly Robinson, and Angus Fletcher, is based on an experiment in which they intentionally gave people only partial, biased information, finding that people never seemed to consider they might only have a partial picture. "Because people assume they have adequate information, they enter judgment and decision-making processes with less humility and more confidence than they might if they were worrying whether they knew the whole story or not," they write. The good news? When given the full picture, most people are willing to change their opinions.

Ignorant but confident

The basic setup of the experiment is very straightforward. The researchers developed a scenario where an ongoing water shortage was forcing a school district to consider closing one of its schools and merging its students into another existing school. They then wrote an article that described the situation and contained seven different pieces of information: three that favored merging, three that disfavored it, and one that was neutral. Just over half of the control group that read the full article favored merging the two schools.

© LUDOVIC MARIN

Climate change boosted Milton’s landfall strength from Category 2 to 3

As attempts to clean up after Hurricane Milton are beginning, scientists at the World Weather Attribution project have taken a quick look at whether climate change contributed to its destructive power. While the analysis is limited by the fact that not all the meteorological data is even available yet, by several measures, climate change made aspects of Milton significantly more likely.

This isn't a huge surprise, given that Milton traveled across the same exceptionally warm Gulf of Mexico that Helene had recently transited. But the analysis does produce one striking result: Milton would have been a Category 2 storm at landfall if climate change weren't boosting its strength.

From the oceans to the skies

Hurricanes strengthen while over warm ocean waters, and climate change has been slowly cranking up the heat content of the oceans. But it's important to recognize that the slow warming is an average, and that can include some localized extreme events. This year has seen lots of ocean temperature records set in the Atlantic basin, and that seems to be true in the Gulf of Mexico as well. The researchers note that a different rapid analysis released earlier this week showed that the ocean temperatures—which had boosted Milton to a Category 5 storm during its time in the Gulf—were between 400 and 800 times more likely to exist thanks to climate change.

© Frank Ramspott

Rapid analysis finds climate change’s fingerprint on Hurricane Helene

Hurricane Helene crossed the Gulf of Mexico at a time when sea surface temperatures were at record highs and then barreled into a region where heavy rains had left the ground saturated. The result was historic, catastrophic flooding.

One key question is how soon we might expect history to repeat itself. Our rapidly warming planet has tilted the odds in favor of some extreme weather events in a way that means we can expect some events that had been extremely rare to start occurring with some regularity. Our first stab at understanding climate change's influence on Helene was released on Wednesday, and it suggests that rainfall of the sort experienced by the Carolinas may now be a once-in-70-year event, which could have implications for how we rebuild some of the communities shattered by the rain.

Rapid attribution

The quick analysis was done by the World Weather Attribution project, which has developed peer-reviewed methods of looking for the fingerprints of climate change in major weather events. In general, this involves identifying the key weather patterns that produced the event and then exploring their frequency using climate models run with and without the carbon dioxide we've added to the atmosphere.
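At its simplest, that with-and-without comparison reduces to an event-frequency ratio between the two sets of model runs. The sketch below uses synthetic Gaussian ensembles and an invented rainfall threshold purely to illustrate the calculation; it is not the World Weather Attribution project's actual model output.

```python
import random

def exceedance_fraction(samples, threshold):
    """Fraction of ensemble members where the metric exceeds the threshold."""
    return sum(s > threshold for s in samples) / len(samples)

random.seed(1)
threshold = 300  # hypothetical rainfall total, mm

# Synthetic ensembles: the 'factual' world runs warmer and wetter than the
# 'counterfactual' world without added CO2. Numbers are illustrative only.
factual = [random.gauss(250, 40) for _ in range(10_000)]
counterfactual = [random.gauss(220, 40) for _ in range(10_000)]

p1 = exceedance_fraction(factual, threshold)
p0 = exceedance_fraction(counterfactual, threshold)
print(f"event made ~{p1 / p0:.1f}x more likely in the warmed ensemble")
```

The real analyses add careful event definition and statistical fitting on top, but the headline "N times more likely" figures are ratios of exactly this kind.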

© Frank Ramspott

Google identifies low noise “phase transition” in its quantum processor

Back in 2019, Google made waves by claiming it had achieved what has been called "quantum supremacy"—the ability of a quantum computer to perform operations that would take a wildly impractical amount of time to simulate on standard computing hardware. That claim proved to be controversial, in that the operations were little more than a benchmark that involved getting the quantum computer to behave like a quantum computer; separately, improved ideas about how to perform the simulation on a supercomputer cut the time required down significantly.

But Google is back with a new exploration of the benchmark, described in a paper published in Nature on Wednesday. The company uses the benchmark to identify what it calls a phase transition in the performance of its quantum processor, pinpointing the conditions under which the processor can operate with low noise. Taking advantage of that, it again shows that, even when classical hardware is given every potential advantage, it would take a supercomputer a dozen years to perform the equivalent simulation.

Cross entropy benchmarking

The benchmark in question relies on what are called quantum random circuits, which involve performing a set of operations on qubits and letting the state of the system evolve over time, so that the output depends heavily on the stochastic nature of measurement outcomes in quantum mechanics. Each qubit will have a probability of producing one of two results, but unless that probability is one, there's no way of knowing which of the results you'll actually get. As a result, the output of the operations will be a string of truly random bits.
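The "cross entropy" part comes from comparing measured bitstrings against the ideal output probabilities computed classically. One common form, the linear cross-entropy fidelity F = 2^n * <p(x)> - 1, lands near 1 when samples come from the ideal distribution and near 0 for uniform noise. The sketch below substitutes an exponential stand-in distribution for a real circuit's output, so it illustrates only the scoring, not the circuits themselves:

```python
import random

def linear_xeb(n_qubits, ideal_probs, samples):
    """Linear cross-entropy fidelity: 2^n times the mean ideal probability
    of the observed bitstrings, minus 1. Near 1 for ideal sampling,
    near 0 for uniform noise."""
    mean_p = sum(ideal_probs[s] for s in samples) / len(samples)
    return (2 ** n_qubits) * mean_p - 1

random.seed(2)
n = 10
dim = 2 ** n

# Stand-in for a random circuit's output distribution (exponentially
# distributed weights, mimicking the expected Porter-Thomas shape):
weights = [random.expovariate(1.0) for _ in range(dim)]
total = sum(weights)
ideal = [w / total for w in weights]

good = random.choices(range(dim), weights=ideal, k=50_000)  # ideal sampler
noisy = [random.randrange(dim) for _ in range(50_000)]      # uniform noise

print(f"ideal sampler: F = {linear_xeb(n, ideal, good):.2f}")
print(f"uniform noise: F = {linear_xeb(n, ideal, noisy):.2f}")
```

The score rewards samplers that preferentially hit the high-probability bitstrings of the ideal distribution, which is exactly what a low-noise quantum processor should do and what noise washes out.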

© Google

Protein structure and design software gets the Chemistry Nobel

On Wednesday, the Nobel Committee announced that it had awarded the Nobel Prize in chemistry to researchers who pioneered major breakthroughs in computational chemistry. Two of them are researchers at Google's DeepMind, acknowledged for their role in developing AI software that can take a raw protein sequence and use it to predict the three-dimensional structure the protein will adopt in cells. Separately, the University of Washington's David Baker was honored for developing software that can design entirely new proteins with specific structures.

The award makes for a bit of a theme for this year, as yesterday's Physics prize honored AI developments. In that case, the connection to physics seemed a bit tenuous, but here, there should be little question that the developments solved major problems in biochemistry.

Understanding protein structure

DeepMind, represented by Demis Hassabis and John Jumper, had developed AIs that managed to master games as diverse as chess and StarCraft. But it was always working on more significant problems in parallel, and in 2020, it surprised many people by announcing that it had tackled one of the biggest computational challenges in existence: the prediction of protein structures.

© Johan Jarnestad/The Royal Swedish Academy of Science

Medicine Nobel goes to previously unknown way of controlling genes

On Monday, the Nobel Committee announced that two US researchers, Victor Ambros and Gary Ruvkun, will receive the prize in Physiology or Medicine for their discovery of a previously unknown mechanism for controlling the activity of genes. They discovered the first of what is now known to be a large collection of microRNAs: short (21 to 23 bases long) RNAs that bind to and alter the behavior of protein-coding RNAs. While first identified in a roundworm, microRNAs have since been found to play key roles in the development of most complex life.
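The way these short RNAs find their targets can be illustrated with a little sequence matching: binding is typically anchored by the microRNA's "seed" region (bases 2 through 8) pairing with a complementary site in the target messenger RNA. The sequences below are invented for illustration; real target recognition, including the original lin-4 case, involves additional pairing and context.

```python
def revcomp(rna):
    """Reverse complement of an RNA sequence (A-U and G-C pairing)."""
    pair = {"A": "U", "U": "A", "G": "C", "C": "G"}
    return "".join(pair[b] for b in reversed(rna))

def seed_sites(mirna, target_mrna):
    """Positions in the target where the miRNA seed (bases 2-8) can pair.

    A target site must carry the reverse complement of the seed,
    since the two strands bind antiparallel.
    """
    seed = mirna[1:8]  # bases 2-8 as a 0-indexed slice
    site = revcomp(seed)
    return [i for i in range(len(target_mrna) - len(site) + 1)
            if target_mrna[i:i + len(site)] == site]

# Hypothetical sequences for illustration only:
mirna = "UCCCUGAGACCUCAAGUGUGA"            # a 21-base microRNA
target = "AAGG" + revcomp(mirna[1:8]) + "CCAUUU"  # a 3' UTR with one site

print(seed_sites(mirna, target))
```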

The story behind the discovery is typical of a lot of the progress in the biological sciences: genetics helps identify a gene important for the development of one species, and then evolutionary conservation reveals its widespread significance.

In the worm

Ambros and Ruvkun started on the path to discovery while post-doctoral fellows in the lab of earlier Nobel winner Robert Horvitz, who won for his role in developing the roundworm C. elegans as an experimental genetic organism. As part of the early genetic screens, people had identified a variety of mutations that caused developmental problems for specific lineages of cells. These lin mutations included lin-4, which Ambros was characterizing. Animals carrying lin-4 mutations lacked a number of specialized cell types, as well as the physical structures that depended on them.

© HeitiPaves

Ants learned to farm fungi during a mass extinction

We tend to think of agriculture as a human innovation. But insects beat us to it by millions of years. Various ant species cooperate with fungi, creating a home for them, providing them with nutrients, and harvesting them as food. This reaches the peak of sophistication in the leafcutter ants, which cut foliage and return it to feed their fungi, which in turn form specialized growths that are harvested for food. But other ant species cooperate with fungi—in some cases strains of fungus that are also found growing in their environment.

Genetic studies have shown that these symbiotic relationships are highly specific—a given ant species will often cooperate with just a single strain of fungus. A number of the genes involved in this cooperative relationship appear to have evolved rapidly in response to their fungal partners. But it has been less clear how the cooperation originally came about, partly because we don't have a good picture of what the undomesticated relatives of these fungi look like.

Now, a large international team of researchers has done a study that traces the relationships among a large collection of both fungi and ants, providing a clearer picture of how this form of agriculture evolved. And the history this study reveals suggests that the cooperation between ants and their crops began after the mass extinction that killed the dinosaurs, when little beyond fungi could thrive.

© pxhidalgo

For the first time since 1882, UK will have no coal-fired power plants

On Monday, the UK will see the closure of its last operational coal power plant, Ratcliffe-on-Soar, which has been operating since 1968. The closure of the plant, which had a capacity of 2,000 megawatts, will bring an end to the history of the country's coal use, which started with the opening of the first coal-fired power station in 1882. Coal played a central part in the UK's power system in the interim, in some years providing over 90 percent of its total electricity.

But a number of factors combined to place coal in a long-term decline: the growth of natural gas-powered plants and renewables, pollution controls, carbon pricing, and a government goal to hit net-zero greenhouse gas emissions by 2050.

From boom to bust

It's difficult to overstate the importance of coal to the UK grid. It was providing over 90 percent of the UK's electricity as recently as 1956. The total amount of power generated continued to climb well after that, reaching a peak of 212 terawatt hours of production by 1980. And the construction of new coal plants was under consideration as recently as the late 2000s. According to the organization Carbon Brief's excellent timeline of coal use in the UK, continuing the use of coal with carbon capture was given consideration.

© Ashley Cooper
