You can buy a diamond-making machine for $200,000 on Alibaba
In an age when you can get just about anything online, it's probably no surprise that you can buy a diamond-making machine for $200,000 on Chinese e-commerce site Alibaba. If, like me, you haven't been paying attention to the diamond industry, it turns out that the availability of these machines reflects a decades-long trend toward democratizing diamond production.
The history of lab-grown diamonds stretches back roughly 70 years. According to Harvard graduate student Javid Lakha, writing in a comprehensive piece on lab-grown diamonds published in Works in Progress last month, the first successful synthesis of diamonds in a laboratory setting occurred in the 1950s. Lakha recounts how Howard Tracy Hall, a chemist at General Electric, created the first lab-grown diamonds using a high-pressure, high-temperature (HPHT) process that mimicked the conditions under which diamonds form in nature.
Since then, diamond-making technology has advanced significantly. Today, there are two primary methods for creating lab-grown diamonds: the HPHT process and chemical vapor deposition (CVD). Both types of machines are now listed on Alibaba, with prices starting at around $200,000, as pointed out by engineer John Nagle (who goes by "Animats") in a Hacker News comment. A CVD machine we found is pricier, at around $450,000.
The Weather Gods Who Want Us to Believe They Can Make Rain on Demand
This story was originally published by Wired and is reproduced here as part of the Climate Desk collaboration.
In the skies over Al Ain, in the United Arab Emirates, pilot Mark Newman waits for the signal. When it comes, he flicks a few silver switches on a panel by his leg, twists two black dials, then punches a red button labeled FIRE.
A slender canister mounted on the wing of his small propeller plane pops open, releasing a plume of fine white dust. That dust—actually ordinary table salt coated in a nanoscale layer of titanium dioxide—will be carried aloft on updrafts of warm air, bearing it into the heart of the fluffy convective clouds that form in this part of the UAE, where the many-shaded sands of Abu Dhabi meet the mountains on the border with Oman. It will, in theory at least, attract water molecules, forming small droplets that will collide and coalesce with other droplets until they grow big enough for gravity to pull them out of the sky as rain.
This is cloud seeding. It’s one of hundreds of missions that Newman and his fellow pilots will fly this year as part of the UAE’s ambitious, decade-long attempt to increase rainfall in its desert lands. Sitting next to him in the copilot’s seat, I can see red earth stretching to the horizon. The only water in sight is the swimming pool of a luxury hotel, perched on the side of a mountain below a sheikh’s palace, shimmering like a jewel.
More than 50 countries have dabbled in cloud seeding since the 1940s—to slake droughts, refill hydroelectric reservoirs, keep ski slopes snowy, or even use as a weapon of war. In recent years there’s been a new surge of interest, partly due to scientific breakthroughs, but also because arid countries are facing down the early impacts of climate change.
Like other technologies designed to treat the symptoms of a warming planet (say, pumping sulfur dioxide into the atmosphere to reflect sunlight into space), seeding was once controversial but now looks attractive, perhaps even imperative. Dry spells are getting longer and more severe: In Spain and southern Africa, crops are withering in the fields, and cities from Bogotá to Cape Town have been forced to ration water. In the past nine months alone, seeding has been touted as a solution to air pollution in Pakistan, as a way to prevent forest fires in Indonesia, and as part of an effort to refill the Panama Canal, which is drying up.
Apart from China, which keeps its extensive seeding operations a closely guarded secret, the UAE has been more ambitious than any other country about advancing the science of making rain. The nation gets around 5 to 7 inches of rain a year—roughly half the amount that falls on Nevada, America’s driest state. The UAE started its cloud-seeding program in the early 2000s, and since 2015 it has invested millions of dollars in the Rain Enhancement Program, which is funding global research into new technologies.
This past April, when a storm dumped a year’s worth of rain on the UAE in 24 hours, the widespread flooding in Dubai was quickly blamed on cloud seeding. But the truth is more nebulous. There’s a long history of people—tribal chiefs, traveling con artists, military scientists, and most recently VC-backed techies—claiming to be able to make it rain on demand. But cloud seeding can’t make clouds appear out of thin air; it can only squeeze more rain out of what’s already in the sky. Scientists still aren’t sure they can make it work reliably on a mass scale. The Dubai flood was more likely the result of a region-wide storm system, exacerbated by climate change and the lack of suitable drainage systems in the city.
The Rain Enhancement Program’s stated goal is to ensure that future generations, not only in the UAE but in arid regions around the globe, have the water they need to survive. The architects of the program argue that “water security is an essential element of national security” and that their country is “leading the way” in “new technologies” and “resource conservation.” But the UAE—synonymous with luxury living and conspicuous consumption—has one of the highest per capita rates of water use on earth. So is it really on a mission to make the hotter, drier future that’s coming more livable for everyone? Or is this tiny petro-state, whose outsize wealth and political power came from helping to feed the industrialized world’s fossil-fuel addiction, looking to accrue yet more wealth and power by selling the dream of a cure?
I’ve come here on a mission of my own: to find out whether this new wave of cloud seeding is the first step toward a world where we really can control the weather, or another round of literal vaporware.
The first systematic attempts at rainmaking date back to August 5, 1891, when a train pulled into Midland, Texas, carrying 8 tons of sulfuric acid, 7 tons of cast iron, half a ton of manganese oxide, half a dozen scientists, and several veterans of the US Civil War, including General Edward Powers, a civil engineer from Chicago, and Major Robert George Dyrenforth, a former patent lawyer.
Powers had noticed that it seemed to rain more in the days after battles, and had come to believe that the “concussions” of artillery fire during combat caused air currents in the upper atmosphere to mix together and release moisture. He figured he could make his own rain on demand with loud noises, either by arranging hundreds of cannons in a circle and pointing them at the sky or by sending up balloons loaded with explosives. His ideas, laid out in a book called War and the Weather and lobbied for over many years, eventually prompted the US federal government to bankroll the experiment in Midland.
Powers and Dyrenforth’s team assembled at a local cattle ranch and prepared for an all-out assault on the sky. They made mortars from lengths of pipe, stuffed dynamite into prairie dog holes, and draped bushes in rackarock, an explosive used in the coal-mining industry. They built kites charged with electricity and filled balloons with a combination of hydrogen and oxygen, which Dyrenforth thought would fuse into water when it exploded. (Skeptics pointed out that it would have been easier and cheaper to just tie a jug of water to the balloon.)
The group was beset by technical difficulties; at one point, a furnace caught fire and had to be lassoed by a cowboy and dragged to a water tank to be extinguished. By the time they finished setting up their experiment, it had already started raining naturally. Still, they pressed on, unleashing a barrage of explosions on the night of August 17 and claiming victory when rain again fell 12 hours later.
It was questionable how much credit they could take. They had arrived in Texas right at the start of the rainy season, and the precipitation that fell before the experiment had been forecast by the US Weather Bureau. As for Powers’ notion that rain came after battles—well, battles tended to start in dry weather, so it was only the natural cycle of things that wet weather often followed.
Despite skepticism from serious scientists and ridicule in parts of the press, the Midland experiments lit the fuse on half a century of rainmaking pseudoscience. The Weather Bureau soon found itself in a running media battle to debunk the efforts of the self-styled rainmakers who started operating across the country.
The most famous of these was Charles Hatfield, nicknamed either the Moisture Accelerator or the Ponzi of the Skies, depending on whom you asked. Originally a sewing machine salesman from California, he reinvented himself as a weather guru and struck dozens of deals with desperate towns. When he arrived in a new place, he’d build a series of wooden towers, mix up a secret blend of 23 cask-aged chemicals, and pour it into vats on top of the towers to evaporate into the sky. Hatfield’s methods had the air of witchcraft, but he had a knack for playing the odds. In Los Angeles, he promised 18 inches of rain between mid-December and late April, when historical rainfall records suggested a 50 percent chance of that happening anyway.
While these showmen and charlatans were filling their pocketbooks, scientists were slowly figuring out what actually made it rain—something called cloud condensation nuclei. Even on a clear day, the skies are packed with particles, some no bigger than a grain of pollen or a viral strand. “Every cloud droplet in Earth’s atmosphere formed on a preexisting aerosol particle,” one cloud physicist told me. The types of particles vary by place. In the UAE, they include a complex mix of sulfate-rich sands from the desert of the Empty Quarter, salt spray from the Persian Gulf, chemicals from the oil refineries that dot the region, and organic materials from as far afield as India. Without them there would be no clouds at all—no rain, no snow, no hail.
A lot of raindrops start as airborne ice crystals, which melt as they fall to earth. But without cloud condensation nuclei, even ice crystals won’t form until the temperature dips below -40 degrees Fahrenheit. As a result, the atmosphere is full of pockets of supercooled liquid water that’s below freezing but hasn’t actually turned into ice.
In 1938, a meteorologist in Germany suggested that seeding these areas of frigid water with artificial cloud condensation nuclei might encourage the formation of ice crystals, which would quickly grow large enough to fall, first as snowflakes, then as rain. After the Second World War, American scientists at General Electric seized on the idea. One group, led by chemists Vincent Schaefer and Irving Langmuir, found that solid carbon dioxide, also known as dry ice, would do the trick. When Schaefer dropped grains of dry ice into the home freezer he’d been using as a makeshift cloud chamber, he discovered that water readily freezes around the particles’ crystalline structure. When Langmuir witnessed the effect a week later, he jotted down three words in his notebook: “Control of Weather.” Within a few months, they were dropping dry-ice pellets from planes over Mount Greylock in Western Massachusetts, creating a 3-mile-long streak of ice and snow.
Another GE scientist, Bernard Vonnegut, had settled on a different seeding material: silver iodide. It has a structure remarkably similar to an ice crystal and can be used for seeding at a wider range of temperatures. (Vonnegut’s brother, Kurt, who was working as a publicist at GE at the time, would go on to write Cat’s Cradle, a book about a seeding material called ice-nine that causes all the water on earth to freeze at once.)
In the wake of these successes, GE was bombarded with requests: Winter carnivals and movie studios wanted artificial snow; others wanted clear skies for search and rescue. Then, in February 1947, everything went quiet. The company’s scientists were ordered to stop talking about cloud seeding publicly and direct their efforts toward a classified US military program called Project Cirrus.
Over the next five years, Project Cirrus conducted more than 250 cloud-seeding experiments as the United States and other countries explored ways to weaponize the weather. Schaefer was part of a team that dropped 80 pounds of dry ice into the heart of Hurricane King, which had torn through Miami in the fall of 1947 and was heading out to sea. Following the operation, the storm made a sharp turn back toward land and smashed into the coast of Georgia, where it caused one death and millions of dollars in damages. In 1963, Fidel Castro reportedly accused the Americans of seeding Hurricane Flora, which hung over Cuba for four days, resulting in thousands of deaths. During the Vietnam War, the US military used cloud seeding to try to soften the ground and make it impassable for enemy soldiers.
A couple of years after that war ended, more than 30 countries, including the US and the USSR, signed the Convention on the Prohibition of Military or Any Other Hostile Use of Environmental Modification Techniques. By then, interest in cloud seeding had started to melt away anyway, first among militaries, then in the civilian sector. “We didn’t really have the tools—the numerical models and also the observations—to really prove it,” says Katja Friedrich, who researches cloud physics at the University of Colorado. (This didn’t stop the USSR from seeding clouds near the site of the nuclear meltdown at Chernobyl in hopes that they would dump their radioactive contents over Belarus rather than Moscow.)
To really put seeding on a sound scientific footing, researchers needed a better understanding of rain at all scales, from the microphysical science of nucleation right up to the global movement of air currents. At the time, scientists couldn’t do the three things required to make the technology viable: identify target areas of supercooled liquid in clouds, deliver the seeding material into those clouds, and verify that it was actually doing what they thought. How could you tell whether a cloud dropped snow because of seeding, or whether it would have snowed anyway?
By 2017, armed with new, more powerful computers running the latest generation of simulation software, researchers in the US were finally ready to answer that question, via the SNOWIE project. Like the GE chemists years earlier, these experimenters dropped silver iodide from planes. The experiments took place in the Rocky Mountains, where prevailing winter winds blow moisture up the slopes, leading to clouds reliably forming at the same time each day.
The results were impressive: The researchers could draw an extra 100 to 300 acre-feet of snow from each storm they seeded. But the most compelling evidence was visual. As the plane flew back and forth at an angle to the prevailing wind, it sprayed a zigzag pattern of seeding material across the sky. That was echoed by a zigzag pattern of snow on the weather radar. “Mother Nature does not produce zigzag patterns,” says one scientist who worked on SNOWIE.
In almost a century of cloud seeding, it was the first time anyone had actually shown the full chain of events from seeding through to precipitation reaching the ground.
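For a sense of scale, here is a quick conversion of the SNOWIE yield figures quoted above into more familiar units. The 100 to 300 acre-feet range comes from the text; the conversion factors are standard, and the snippet is plain arithmetic, not anything from the study itself.

```python
# Convert the SNOWIE yield range quoted above (100 to 300 acre-feet of added
# snow-water per seeded storm) into cubic meters and US gallons.
# The acre-foot range comes from the article; conversion factors are standard.

CUBIC_METERS_PER_ACRE_FOOT = 1_233.48
GALLONS_PER_ACRE_FOOT = 325_851.0

for acre_feet in (100, 300):
    cubic_meters = acre_feet * CUBIC_METERS_PER_ACRE_FOOT
    gallons = acre_feet * GALLONS_PER_ACRE_FOOT
    print(f"{acre_feet} acre-feet ~ {cubic_meters:,.0f} m^3 ~ {gallons / 1e6:.0f} million gallons")
```

That works out to roughly 120,000 to 370,000 cubic meters, on the order of 30 to 100 million US gallons, per seeded storm.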
The UAE’s National Center of Meteorology is a glass cube rising out of featureless scrubland, ringed by a tangle of dusty highways on the edge of Abu Dhabi. Inside, I meet Ahmad Al Kamali, the facility’s rain operations executor—a trim young man with a neat beard and dark-framed glasses. He studied at the University of Reading in the UK and worked as a forecaster before specializing in cloud-seeding operations. Like all the Emirati men I meet on this trip, he’s wearing a kandura—a loose white robe with a headpiece secured by a loop of thick black cord.
We take the elevator to the third floor, where I find cloud-seeding mission control. With gold detailing and a marble floor, it feels like a luxury hotel lobby, except for the giant radar map of the Gulf that fills one wall. Forecasters—men in white, women in black—sit at banks of desks and scour satellite images and radar data looking for clouds to seed. Near the entrance there’s a small glass pyramid on a pedestal, about a foot wide at its base. It’s a holographic projector. When Al Kamali switches it on, a tiny animated cloud appears inside. A plane circles it, and rain begins to fall. I start to wonder: How much of this is theater?
The impetus for cloud seeding in the UAE came in the early 2000s, when the country was in the middle of a construction boom. Dubai and Abu Dhabi were a sea of cranes; the population had more than doubled in the previous decade as expats flocked there to take advantage of the good weather and low income taxes. Sheikh Mansour bin Zayed Al Nahyan, a member of Abu Dhabi’s royal family—currently both vice president and deputy prime minister of the UAE—thought cloud seeding, along with desalination of seawater, could help replenish the country’s groundwater and refill its reservoirs. (Globally, Mansour is perhaps best known as the owner of the soccer club Manchester City.) As the Emiratis were setting up their program, they called in some experts from another arid country for help.
Back in 1989, a team of researchers in South Africa was studying how to enhance the formation of raindrops. They were taking cloud measurements in the east of the country when they spotted a cumulus cloud that was raining while all the other clouds in the area were dry. When they sent a plane into the cloud to get samples, they found a much wider range of droplet sizes than in the other clouds—some as big as half a centimeter in diameter.
The finding underscored that it’s not only the number of droplets in a cloud that matters but also the size. A cloud of droplets that are all the same size won’t mix together because they’re all falling at the same speed. But if you can introduce larger drops, they’ll plummet to earth faster, colliding and coalescing with other droplets, forming even bigger drops that have enough mass to leave the cloud and become rain. The South African researchers discovered that although clouds in semiarid areas of the country contain hundreds of water droplets in every cubic centimeter of air, they’re less efficient at creating rain than maritime clouds, which have about a sixth as many droplets but more variation in droplet size.
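To make the fall-speed argument concrete, here is a minimal sketch using the standard Stokes' law approximation for small droplets. The radii are illustrative values, not measurements from the South African study.

```python
# Why a spread of droplet sizes matters: terminal fall speed of a small water
# droplet, estimated with Stokes' law (a reasonable approximation for droplets
# up to a few tens of micrometers in radius). Radii below are illustrative.

RHO_WATER = 1000.0   # kg/m^3, density of liquid water
RHO_AIR = 1.2        # kg/m^3, density of air near the surface
MU_AIR = 1.8e-5      # Pa*s, dynamic viscosity of air
G = 9.81             # m/s^2, gravitational acceleration

def stokes_terminal_velocity(radius_m: float) -> float:
    """Terminal fall speed (m/s) of a small spherical droplet, via Stokes' law."""
    return 2.0 * (RHO_WATER - RHO_AIR) * G * radius_m**2 / (9.0 * MU_AIR)

for radius_um in (5, 10, 25, 50):
    v = stokes_terminal_velocity(radius_um * 1e-6)
    print(f"radius {radius_um:>2} um -> ~{100 * v:.2f} cm/s")

# Fall speed scales with radius squared, so a 50 um droplet falls about a
# hundred times faster than a 5 um droplet. Uniform droplets barely collide;
# a few larger drops sweep up smaller ones on the way down.
```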
So why did this one cloud have bigger droplets? It turned out that the chimney of a nearby paper mill was pumping out particles of debris that attracted water. Over the next few years, the South African researchers ran long-term studies looking for the best way to re-create the effect of the paper mill on demand. They settled on ordinary salt—the most hygroscopic substance they could find. Then they developed flares that would release a steady stream of salt crystals when ignited.
Those flares were the progenitors of what the Emiratis use today, made locally at the Weather Modification Technology Factory. Al Kamali shows me a couple: They’re foot-long tubes a couple of inches in diameter, each holding a kilogram of seeding material. One type of flare holds a mixture of salts. The other type holds salts coated in a nano layer of titanium dioxide, which attracts more water in drier climates. The Emiratis call them Ghaith 1 and Ghaith 2, ghaith being one of the Arabic words for “rain.” Although the language has another near synonym, matar, it has negative connotations—rain as punishment, torment, the rain that breaks the banks and floods the fields. Ghaith, on the other hand, is rain as mercy and prosperity, the deluge that ends the drought.
The morning after my visit to the National Center of Meteorology, I take a taxi to Al Ain to go on that cloud-seeding flight. But there’s a problem. When I leave Abu Dhabi, a low fog has settled across the country, but by the time I arrive at Al Ain’s small airport—about 100 miles inland from the cities on the coast—it has burned away, leaving clear blue skies. There are no clouds to seed.
Once I’ve cleared the tight security cordon and reached the gold-painted hangar (the airport is also used for military training flights), I meet Newman, who agrees to take me up anyway so he can demonstrate what would happen on a real mission. He’s wearing a blue cap with the UAE Rain Enhancement Program logo on it. Before moving to the UAE with his family 11 years ago, Newman worked as a commercial airline pilot on passenger jets and split his time between the UK and his native South Africa. He has exactly the kind of firmly reassuring presence you want from someone you’re about to climb into a small plane with.
Every cloud-seeding mission starts with a weather forecast. A team of six operators at the meteorology center scour satellite images and data from the UAE’s network of radars and weather stations and identify areas where clouds are likely to form. Often, that’s in the area around Al Ain, where the mountains on the border with Oman act as a natural barrier to moisture coming in from the sea.
If it’s looking like rain, the cloud-seeding operators radio the hangar and put some of the nine pilots on standby: at home, on what Newman calls “villa standby”; at the airport; or in a holding pattern in the air. As clouds start to form, they begin to appear on the weather radar, changing color from green through blue to yellow and then red as the droplets get bigger and the reflectivity of the clouds increases.
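The radar colors track droplet growth so sensitively because, in the Rayleigh-scattering regime that weather radar relies on, reflectivity scales with the sixth power of drop diameter. The sketch below shows that textbook relation with invented drop counts and sizes; it is not a description of how the UAE's radar processing actually works.

```python
# Radar reflectivity factor Z for a simple drop population: Z is the sum of
# drop diameters to the sixth power per unit volume (Rayleigh regime), usually
# reported on a log scale in dBZ. Drop counts and sizes below are invented.

import math

def reflectivity_dbz(drops_per_m3: float, diameter_mm: float) -> float:
    """dBZ for a population of identical drops (Z in mm^6 per m^3)."""
    z = drops_per_m3 * diameter_mm**6
    return 10.0 * math.log10(z)

# Two clouds holding the same amount of liquid water per cubic meter of air:
print(reflectivity_dbz(drops_per_m3=1000, diameter_mm=0.2))  # many small drops: about -12 dBZ
print(reflectivity_dbz(drops_per_m3=1, diameter_mm=2.0))     # one large drop:   about +18 dBZ
```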
Once a mission is approved, the pilot scribbles out a flight plan while the ground crew preps one of the four modified Beechcraft King Air C90 planes. There are 24 flares attached to each wing—half Ghaith 1, half Ghaith 2—for a total of 48 kilograms of seeding material on each flight. Timing is important, Newman tells me as we taxi toward the runway. The pilots need to reach the cloud at the optimal moment.
Once we’re airborne, Newman climbs to 6,000 feet. Then, like a falcon riding the thermals, he goes hunting for updrafts. Cloud seeding is a mentally challenging and sometimes dangerous job, he says through the headset, over the roar of the engines. Real missions last up to three hours and can get pretty bumpy as the plane moves between clouds. Pilots generally try to avoid turbulence. Seeding missions seek it out.
When we get to the right altitude, Newman radios the ground for permission to set off the flares. There are no hard rules for how many flares to put into each cloud, one seeding operator told me. It depends on the strength of the updraft reported by the pilots, how things look on the radar. It sounds more like art than science.
Newman triggers one of the salt flares, and I twist in my seat to watch: It burns with a white-gray smoke. He lets me set off one of the nano-flares. It’s slightly anticlimactic: The green lid of the tube pops open and the material spills out. I’m reminded of someone sprinkling grated cheese on spaghetti.
There’s an evangelical zeal to the way some of the pilots and seeding operators talk about this stuff—the rush of hitting a button on an instrument panel and seeing the clouds burst before their eyes. Like gods. Newman shows me a video on his phone of a cloud that he’d just seeded hurling fat drops of rain onto the plane’s front windows. Operators swear they can see clouds changing on the radar.
But the jury is out on how effective hygroscopic seeding actually is. The UAE has invested millions in developing new technologies for enhancing rainfall—and surprisingly little in actually verifying the impact of the seeding it’s doing right now. After initial feasibility work in the early 2000s, the next long-term analysis of the program’s effectiveness didn’t come until 2021. It found a 23 percent increase in annual rainfall in seeded areas, as compared with historical averages, but cautioned that “anomalies associated with climate variability” might affect this figure in unforeseen ways. As Friedrich notes, you can’t necessarily assume that rainfall measurements from, say, 1989 are directly comparable with those from 2019, given that climatic conditions can vary widely from year to year or decade to decade.
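Friedrich's point about shifting baselines is easy to illustrate with a purely synthetic example: when rainfall swings widely from year to year, comparing a handful of recent seeded seasons against a long historical average can produce a sizable apparent increase even if seeding does nothing. The numbers below are invented for illustration and have no connection to the UAE's actual records.

```python
# Synthetic illustration of the baseline problem (invented numbers, not UAE data):
# draw 30 "historical" years and 5 "recent" years from the same rainfall
# distribution, i.e. zero seeding effect, and count how often the recent mean
# still looks like a 20 percent improvement over the historical mean.

import random

random.seed(0)
TRIALS = 10_000
big_apparent_increase = 0

for _ in range(TRIALS):
    historical = [100 * random.uniform(0.6, 1.4) for _ in range(30)]
    recent = [100 * random.uniform(0.6, 1.4) for _ in range(5)]
    baseline = sum(historical) / len(historical)
    observed = sum(recent) / len(recent)
    if (observed - baseline) / baseline >= 0.20:
        big_apparent_increase += 1

print(f"{100 * big_apparent_increase / TRIALS:.1f}% of trials show a 20%+ 'increase' by chance")
```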
The best evidence for hygroscopic seeding, experts say, comes from India, where for the past 15 years the Indian Institute of Tropical Meteorology has been conducting a slow, patient study. Unlike the UAE, India uses one plane to seed and another to take measurements of the effect that has on the cloud. In hundreds of seeding missions, researchers found an 18 percent uptick in raindrop formation inside the cloud. But the thing is, every time you want to try to make it rain in a new place, you need to prove that it works in that area, in those particular conditions, with whatever unique mix of aerosol particles might be present. What succeeds in, say, the Western Ghats mountain range is not even applicable to other areas of India, the lead researcher tells me, let alone other parts of the world.
If the UAE wanted to reliably increase the amount of fresh water in the country, committing to more desalination would be the safer bet. In theory, cloud seeding is cheaper: According to a 2023 paper by researchers at the National Center of Meteorology, the average cost of harvestable rainfall generated by cloud seeding is between 1 and 4 cents per cubic meter, compared with around 31 cents per cubic meter of water from desalination at the Hassyan Seawater Reverse Osmosis plant. But each mission costs as much as $8,000, and there’s no guarantee that the water that falls as rain will actually end up where it’s needed.
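As a rough consistency check on those figures (my arithmetic, not the paper's): at 1 to 4 cents per cubic meter, a single $8,000 mission would have to be credited with several hundred thousand cubic meters of harvestable rainfall. The snippet below works through that implication; it treats the flight as the only cost, which understates the program's true expenses.

```python
# Back-of-envelope check on the cost figures quoted above. Assumes, purely for
# illustration, that the $8,000 mission cost is the entire cost of seeding.

MISSION_COST_USD = 8_000.0
DESAL_COST_PER_M3_USD = 0.31  # Hassyan reverse-osmosis figure cited in the text

for cents_per_m3 in (1, 4):
    cost_per_m3 = cents_per_m3 / 100.0
    implied_volume_m3 = MISSION_COST_USD / cost_per_m3
    value_at_desal_prices = implied_volume_m3 * DESAL_COST_PER_M3_USD
    print(
        f"at {cents_per_m3} cent(s)/m^3: ~{implied_volume_m3:,.0f} m^3 attributed per mission, "
        f"worth about ${value_at_desal_prices:,.0f} at desalination prices"
    )
```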
One researcher I spoke to, who has worked on cloud-seeding research in the UAE and asked to speak on background because they still work in the industry, was critical of the quality of the UAE’s science. There was, they said, a tendency for “white lies” to proliferate; officials tell their superiors what they want to hear despite the lack of evidence. The country’s rulers already think that cloud seeding is working, this person argued, so for an official to admit otherwise now would be problematic. (The National Center of Meteorology did not comment on these claims.)
By the time I leave Al Ain, I’m starting to suspect that what goes on there is as much about optics as it is about actually enhancing rainfall. The UAE has a history of making flashy announcements about cutting-edge technology—from flying cars to 3D-printed buildings to robotic police officers—with little end product.
Now, as the world transitions away from the fossil fuels that have been the country’s lifeblood for the past 50 years, the UAE is trying to position itself as a leader on climate. Last year it hosted the annual United Nations Climate Change Conference, and the head of its National Center of Meteorology was chosen to lead the World Meteorological Organization, where he’ll help shape the global consensus that forms around cloud seeding and other forms of mass-scale climate modification. (He could not be reached for an interview.)
The UAE has even started exporting its cloud-seeding expertise. One of the pilots I spoke to had just returned from a trip to Lahore, where the Pakistani government had asked the UAE’s cloud seeders to bring rain to clear the polluted skies. It rained—but they couldn’t really take credit. “We knew it was going to rain, and we just went and seeded the rain that was going to come anyway,” he said.
From the steps of the Emirates Palace Mandarin Oriental in Abu Dhabi, the UAE certainly doesn’t seem like a country that’s running out of water. As I roll up the hotel’s long driveway on my second day in town, I can see water features and lush green grass. The sprinklers are running. I’m here for a ceremony for the fifth round of research grants being awarded by the UAE Research Program for Rain Enhancement Science. Since 2015, the program has awarded $21 million to 14 projects developing and testing ways of enhancing rainfall, and it’s about to announce the next set of recipients.
In the ornate ballroom, local officials have loosely segregated themselves by gender. I sip watermelon juice and work the room, speaking to previous award winners. There’s Linda Zou, a Chinese researcher based at Khalifa University in Abu Dhabi who developed the nano-coated seeding particles in the Ghaith 2 flares. There’s Ali Abshaev, who comes from a cloud-seeding dynasty (his father directs Russia’s Hail Suppression Research Center) and who has built a machine to spray hygroscopic material into the sky from the ground. It’s like “an upside-down jet engine,” one researcher explains.
Other projects have been looking at “terrain modification”—whether planting trees or building earthen barriers in certain locations could encourage clouds to form. Giles Harrison, from the University of Reading, is exploring whether electrical currents released into clouds can encourage raindrops to stick together. There’s also a lot of work on computer simulation. Youssef Wehbe, a UAE program officer, gives me a cagey interview about the future vision: pairs of drones, powered by artificial intelligence, one taking cloud measurements and the other printing seeding material specifically tailored for that particular cloud—on the fly, as it were.
I’m particularly taken by one of this year’s grant winners. Guillaume Matras, who worked at the French defense contractor Thales before moving to the UAE, is hoping to make it rain by shooting a giant laser into the sky. Wehbe describes this approach as “high risk.” I think he means “it may not work,” not “it could set the whole atmosphere on fire.” Either way, I’m sold.
So after my cloud-seeding flight, I get a lift to Zayed Military City, an army base between Al Ain and Abu Dhabi, to visit the secretive government-funded research lab where Matras works. They take my passport at the gate to the compound, and before I can go into the lab itself I’m asked to secure my phone in a locker that’s also a Faraday cage—completely sealed to signals going in and out.
After I put on a hairnet, a lab coat, and tinted safety goggles, Matras shows me into a lab, where I watch a remarkable thing. Inside a broad, black box the size of a small television sits an immensely powerful laser. A tech switches it on. Nothing happens. Then Matras leans forward and opens a lens, focusing the laser beam.
There’s a high-pitched but very loud buzz, like the whine of an electric motor. It is the sound of the air being ripped apart. A very fine filament, maybe half a centimeter across, appears in midair. It looks like a strand of spider’s silk, but it’s bright blue. It’s plasma—the fourth state of matter. Scale up the size of the laser and the power, and you can actually set a small part of the atmosphere on fire. Man-made lightning. Obviously my first question is to ask what would happen if I put my hand in it. “Your hand would turn into plasma,” another researcher says, entirely deadpan. I put my hand back in my pocket.
Matras says these laser beams will be able to enhance rainfall in three ways. First, acoustically—like the concussion theory of old, it’s thought that the sound of atoms in the air being ripped apart might shake adjacent raindrops so that they coalesce, get bigger, and fall to earth. Second: convection—the beam will create heat, generating updrafts that will force droplets to mix. (I’m reminded of a never-realized 1840s plan to create rain by setting fire to large chunks of the Appalachian Mountains.) Finally: ionization. When the beam is switched off, the plasma will recombine—the nitrogen, hydrogen, and oxygen molecules inside will clump back together into random configurations, creating new particles for water to settle around.
The plan is to scale this technology up to something the size of a shipping container that can be put on the back of a truck and driven to where it’s needed. It seems insane—I’m suddenly very aware that I’m on a military base. Couldn’t this giant movable laser be used as a weapon? “Yes,” Matras says. He picks up a pencil, its tip honed to a sharp point. “But anything could be a weapon.”
These words hang over me as I ride back into the city, past lush golf courses and hotel fountains and workmen swigging from plastic bottles. Once again, there’s not a cloud in the sky. But maybe that doesn’t matter. For the UAE, so keen to project its technological prowess around the region and the world, it’s almost irrelevant whether cloud seeding works. There’s soft power in being seen to be able to bend the weather to your will—in 2018, an Iranian general accused the UAE and Israel of stealing his country’s rain.
Anything could be a weapon, Matras had said. But there are military weapons, and economic weapons, and cultural and political weapons too. Anything could be a weapon—even the idea of one.
Other projects have been looking at “terrain modification”—whether planting trees or building earthen barriers in certain locations could encourage clouds to form. Giles Harrison, from the University of Reading, is exploring whether electrical currents released into clouds can encourage raindrops to stick together. There’s also a lot of work on computer simulation. Youssef Wehbe, a UAE program officer, gives me a cagey interview about the future vision: pairs of drones, powered by artificial intelligence, one taking cloud measurements and the other printing seeding material specifically tailored for that particular cloud—on the fly, as it were.
I’m particularly taken by one of this year’s grant winners. Guillaume Matras, who worked at the French defense contractor Thales before moving to the UAE, is hoping to make it rain by shooting a giant laser into the sky. Wehbe describes this approach as “high risk.” I think he means “it may not work,” not “it could set the whole atmosphere on fire.” Either way, I’m sold.
So after my cloud-seeding flight, I get a lift to Zayed Military City, an army base between Al Ain and Abu Dhabi, to visit the secretive government-funded research lab where Matras works. They take my passport at the gate to the compound, and before I can go into the lab itself I’m asked to secure my phone in a locker that’s also a Faraday cage—completely sealed to signals going in and out.
After I put on a hairnet, a lab coat, and tinted safety goggles, Matras shows me into a lab, where I watch a remarkable thing. Inside a broad, black box the size of a small television sits an immensely powerful laser. A tech switches it on. Nothing happens. Then Matras leans forward and opens a lens, focusing the laser beam.
There’s a high-pitched but very loud buzz, like the whine of an electric motor. It is the sound of the air being ripped apart. A very fine filament, maybe half a centimeter across, appears in midair. It looks like a strand of spider’s silk, but it’s bright blue. It’s plasma—the fourth state of matter. Scale up the size of the laser and the power, and you can actually set a small part of the atmosphere on fire. Man-made lightning. Obviously my first question is to ask what would happen if I put my hand in it. “Your hand would turn into plasma,” another researcher says, entirely deadpan. I put my hand back in my pocket.
Matras says these laser beams will be able to enhance rainfall in three ways. First, acoustically—like the concussion theory of old, it’s thought that the sound of atoms in the air being ripped apart might shake adjacent raindrops so that they coalesce, get bigger, and fall to earth. Second: convection—the beam will create heat, generating updrafts that will force droplets to mix. (I’m reminded of a never-realized 1840s plan to create rain by setting fire to large chunks of the Appalachian Mountains.) Finally: ionization. When the beam is switched off, the plasma will reform—the nitrogen, hydrogen, and oxygen molecules inside will clump back together into random configurations, creating new particles for water to settle around.
The plan is to scale this technology up to something the size of a shipping container that can be put on the back of a truck and driven to where it’s needed. It seems insane—I’m suddenly very aware that I’m on a military base. Couldn’t this giant movable laser be used as a weapon? “Yes,” Matras says. He picks up a pencil, the nib honed to a sharp point. “But anything could be a weapon.”
These words hang over me as I ride back into the city, past lush golf courses and hotel fountains and workmen swigging from plastic bottles. Once again, there’s not a cloud in the sky. But maybe that doesn’t matter. For the UAE, so keen to project its technological prowess around the region and the world, it’s almost irrelevant whether cloud seeding works. There’s soft power in being seen to be able to bend the weather to your will—in 2018, an Iranian general accused the UAE and Israel of stealing his country’s rain.
Anything could be a weapon, Matras had said. But there are military weapons, and economic weapons, and cultural and political weapons too. Anything could be a weapon—even the idea of one.
How accurate are wearable fitness trackers? Less than you might think
Back in 2010, Gary Wolf, then the editor of Wired magazine, delivered a TED talk in Cannes called “The Quantified Self.” It was about what he termed a “new fad” among tech enthusiasts. These early adopters were using gadgets to monitor everything from their physiological data to their mood and even the number of nappies their children used.
Wolf acknowledged that these people were outliers—tech geeks fascinated by data—but their behavior has since permeated mainstream culture.
From the smartwatches that track our steps and heart rate, to the fitness bands that log sleep patterns and calories burned, these gadgets are now ubiquitous. Their popularity is emblematic of a modern obsession with quantification—the idea that if something isn’t logged, it doesn’t count.
Election Disinformation From Elon Musk Is Drawing Billions of Views on X
Elon Musk is not just the Trump-supporting owner of the social media platform X, formerly known as Twitter. It turns out he is also one of the platform’s biggest peddlers of election-related disinformation, according to a new report published Thursday by the Center for Countering Digital Hate.
The report from CCDH, a nonprofit organization focused on protecting civil liberties and holding social media companies accountable, found that 50 false or misleading posts shared by Musk on X between January 1 and July 31 of this year racked up a staggering 1.2 billion views. The group categorized the posts under three main themes: false claims that Democrats are “importing voters” through illegal immigration (the bulk of the content that researchers examined); false claims that voting is vulnerable to fraud; and a manipulated video, also known as a deepfake, of Vice President Kamala Harris.
According to the report, while independent fact-checkers found the content in all of those 50 posts shared by Musk to be false or misleading, none of the posts in question contained a “community note,” X’s user-generated fact-checking system that the company promises can contextualize “potentially misleading posts.” Just this week, Musk claimed in a post on X that community notes offer “a clear and immediate way to refute anything false in the replies,” adding, “the same is not true for legacy media who lie relentlessly, but there is no way to counter their propaganda.”
Imran Ahmed, CEO of the CCDH, said in a statement accompanying the report that Musk “is abusing his privileged position as owner of a small, but politically influential, social media platform to sow disinformation that generates discord and distrust.”
X responded to a request for comment from Mother Jones with an automated message saying, “busy now, please check back later.” (The company may have retired its automated poop emoji.) Musk endorsed Donald Trump for president last month, after Trump was nearly assassinated.
As I reported recently, in addition to being false or misleading, at least some of this content appears to violate X’s own terms of service. On July 26, Musk shared a deepfake that falsely appeared to show Harris calling herself “the ultimate diversity hire” and degrading President Biden. “This is amazing,” Musk wrote in his post sharing the video, accompanied by a laughing emoji. Musk’s post has received more than 135 million views, and it remains online—despite the fact that, as the CCDH report notes, X’s policy prohibits the sharing of “synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm.”
X says it only deletes such posts in cases of “high-severity violations of the policy, including misleading media that have a serious risk of harm to individuals or communities”—though it does not define how it measures “high-severity violations” or “serious risk of harm.”
And while Musk personally has massive reach, with more than 193 million followers on X, the problems are systemic, allowing other users with significant reach to spread political disinformation as well. One example: As I reported on Sunday, an account that reposts Donald Trump’s feed from Truth Social on X shared an obviously manipulated video of Harris that appeared to show her struggling to complete a sentence. Trump first posted the video to his Truth Social platform on Saturday, though it’s unclear who originally altered the video. When Mother Jones exposed its spread on X on Sunday, it had drawn more than 620,000 views and bore no indication that it was doctored footage.
When I inquired with the Trump campaign about the video, spokesperson Steven Cheung just asserted (profanely) that it was authentic. But by Monday, following my inquiry to X about the video, the post on X had been updated with a label calling it “manipulated media”—though the video remains up. (Cheung did not respond to a further request for comment.)
The CCDH report comes as the latest example of the growing scrutiny of X and its platforming of disinformation targeting Harris ahead of the November election. On Monday, five secretaries of state sent Musk a letter demanding he “immediately implement changes” to Grok, the AI-powered search assistant available to premium subscribers on X, after it falsely told users that Harris declared her candidacy too late to appear on ballots in nine states.
The scrutiny does not appear to concern Musk. This week, the X owner has continued attacking Harris on the platform he purchased in 2022, baselessly claiming Harris is “quite literally a Communist.” Expect more to come, especially given that Musk, according to Trump, is reportedly set to “interview” Trump on Monday. If that conversation occurs, it isn’t likely to focus on or stick to facts; as my colleague Mark Follman points out, Musk clearly is not a journalist.
The Future of the Border Is Even More Dystopian Than You Thought
For our September+October issue, we investigated the Border Patrol’s sharp growth, its troubling record on civil liberties, its culture of impunity, and its role in shaping the current political moment—one that echoes the anti-immigrant fever that led to the agency’s creation a century ago. Read the whole package here.
It was dawn and we were in Sunland Park, New Mexico, a few hundred feet from the border, watching the US government surveillance towers that watch all of us. They were positioned atop a bald hillside, taking in a constant stream of images from all angles. One tower, a sleek, 33-foot telescoping pole built by Anduril Industries, the defense contractor run by Oculus founder Palmer Luckey and funded by PayPal co-founder and GOP megadonor Peter Thiel, was capable of recording night-vision images and spotting human beings up to 1.7 miles away.
My companions were Tucson-based photographers documenting the growing landscape of border surveillance mechanisms. They were working with the Electronic Frontier Foundation (EFF) to map all such towers along the border—a vast, expensive, and increasingly automated network that is now effectively an electronic wall.
Less than 10 miles away, in El Paso, the importance of automated surveillance and artificial intelligence was the theme of the annual Border Security Expo, a massive conference that drew roughly 1,700 attendees in 2023. Many of them were employees of the “industry partners” that market and sell such technology to representatives from 46 state agencies; they were joined by overseas buyers and a handful of academics. “Border Security Expo [is] the best place to gain access to this hard-to-reach, highly qualified audience,” the exhibitor prospectus boasted.
“This is a partnership,” Border Patrol Chief Jason Owens said during the opening panel. “We’re expressing to you the things that we need and relying on the big brains in this room and your companies to come up with the next way forward.”
The conference featured panels such as “Border of the Future” and “DHS Acquisition: Tone From the Top.” Inside the exhibition hall—a large, fluorescent-lit chamber resembling the belly of a colossal blimp—visitors were met by a Verizon-built robot dog performing an uncanny march: forward like an old-timey soldier, side to side like a jittery crab.
Automated ground surveillance vehicles, as the dogs are known, can lend “a helping hand (or ‘paw’) with new technology that can assist with enhancing the capabilities of Customs and Border Protection (CBP) personnel, while simultaneously increasing their safety downrange.” These dogs are ready to be outfitted with cameras, sensors, and radio.
Other exhibitors featured “No BS” canine food to help optimize real-life working dogs; virtual reality training systems that sharpen law enforcement’s shooting skills; all-terrain tanks; heavy-duty cargo e-bikes; mobile fences; and guns, of course. Occasionally, the displays reminded attendees of the true adversary most border technologies targeted: people. A heat-sensing camera that works from miles away, for instance, and sensors that can detect a human heartbeat hidden in a vehicle.
Like nearly everyone else, CBP leadership has a serious case of AI fever, and officials make clear that this kind of technology acts as a “force multiplier” to Border Patrol agents themselves. Surveillance tower cameras and drones can alert agents when a vehicle or person comes into view and help CBP ascertain the threat level. AI tools also help screen cargo coming into the country and scour data from CBP One—a notoriously glitchy app that asylum seekers must use to navigate their legal process—to detect cases of suspicious identity.
Just last year, CBP’s AI monitoring system flagged “a suspicious pattern in the border crossing history” of a car in Southern California. Upon further review, 75 kilos of drugs were found in the vehicle, and the driver was arrested.
AI and machine learning at the border aren’t entirely new. The first autonomous towers were installed in 2018, and two years later, the Trump administration brokered a deal with Anduril. (Luckey, the brother-in-law of Florida Rep. Matt Gaetz, donated $100,000 toward Trump’s inaugural celebrations in 2017.) Trump’s bombastic rhetoric has always focused on mass deportations and his cherished border wall. But all along his administration was building a surveillance apparatus that the Biden White House has since expanded—and that could be the single most powerful tool in the hands of a second Trump administration to carry out extrajudicial exclusion at the US border. One that could be used against its citizens, too.
But this would first require Border Patrol to effectively analyze all the data it’s collecting. At the Expo, Border Patrol officials routinely noted that AI surveillance tools have amassed so much information that CBP needs machine learning tools to make any sense of it. “In the past we were looking at hundreds of millions of nodes of data,” said Ray Shuler, DHS’s assistant director of cyber and operations technology. “Now we’re looking at multibillion-node graphs.” Shuler says his unit alone is running up to 400 servers at any given time and is constantly in need of more storage capacity.
But managing this data, said Joshua Powell, CBP’s director of AI implementation, is what “will give us the advantage over our adversary. They have the resources. They have the money. They have connections.” Officials invoked the “adversary” repeatedly throughout the convention—a militarized villain, and a mushy one at that. But who, exactly, was this well-heeled, tricked-out, tech-savvy enemy amassing at our gates? It could be anyone—which is why we need constant surveillance.
For its part, the Biden administration has insisted on the responsible use of AI. In 2023, DHS named tech specialist Eric Hysen as the department’s first chief AI officer, issued a departmental framework for responsible AI, and launched the AI Corps, a team of 50 experts to better monitor and implement the technology. “AI is going to make us bigger and faster and stronger—it’s not going to make us any less accountable,” Hysen claims. Yet as EFF investigations director Dave Maass points out, such administrative guidelines and bodies rarely have any teeth—and could be easily dismissed or dismantled under a new administration.
At the Expo, Border Patrol officials insisted that their work is saving lives—and that the latest technological acquisitions support this mission. But some border tech is inherited from war zones or inspired by them; notably, many of the vendors also contract with the Department of Defense. As Harvard researcher Petra Molnar, author of The Walls Have Eyes, argues, border zones are perfect test sites for technologies with questionable human rights applications, since they’re often obscured from public view. Once refined and normalized at the border, they can more easily slip into the mainstream—iris scans at airports, for instance, or automated traffic tickets issued to anyone who runs red lights (which the Texas legislature outlawed in 2019). Maass argues that surveillance reliant upon algorithmic technology can make mistakes—with consequences that can be dangerous for the person on the other end.
Those of us who live far from the border might imagine surveillance towers situated in remote swaths of the desert. Some of them are. But often they are positioned in border towns near schools and downtown shopping centers, on Native American reservations, and alongside the highways where we all drive. “We are actually talking about a surveillance network that monitors communities…that have nothing to do with transport or crime,” Maass told me. “They are just living their lives, doing their thing, but they’ve got the CBP tower looking in their window.”
US border defense is ever-expanding in reach—moving not just deep into our country’s interior, but also far beyond our own walls. “Most people don’t know there are Border Patrol agents today deployed around the globe in dangerous areas,” Chief Owens explained to the crowd on the Expo’s opening day, “with the express purpose of making sure that they can stop the threat from ever reaching our borders in the first place.”
Powell, too, spoke of the need to “[push] our borders out beyond what we’ve traditionally been focused on, an outline of the United States…out through Western and Eastern hemispheres to identify who is thinking, planning, and attempting to make entry into the US and then why.” By collecting and sharing data with intelligence agencies across international borders, the thinking goes, we’ll be better able to defend our own. Ultimately, the future of the border is one of endless expansion and externalization—well staffed and automated, optimized by artificial intelligence, and implemented by men in green.
When he was in El Paso for the Expo, Dugan Meyer, a graduate student and one of the photographers contributing to EFF’s countersurveillance map, headed out in the late afternoon to New Mexico’s Mount Cristo Rey—a bare, rugged peak adorned with a giant white statue of Christ from the 1930s. Here, the insistent advance of the border wall is briefly interrupted by the base of the mountain, and the gap has become a major hotspot for Border Patrol activity, migrant crossings, and deaths. That night, Meyer hung out near the wall in the brush, watching as helicopters patrolled the skies while Border Patrol trucks scoured the dark. At one point, Meyer heard someone climb the wall from the Mexican side and drop down into the United States. Meyer saw the man step carefully over railroad tracks and then disappear into the scrub.
Patrol forces reappeared within minutes, as if something had alerted them to the crossing. Perhaps something had. For the next hour, Meyer watched the hunt. The Border Patrol has access to heat-seeking cameras, surveillance towers, drones, helicopters, and ground sensors. The man was racing this vast, mechanized force, all odds seemingly against him, and yet every day, in spite of the billions of dollars spent to stop them, people like him manage to get through. For Border Patrol authorities, however, this becomes one more piece of evidence that they need more of everything: funding, agents, towers, robot dogs.
This endless expansion is the reality the Expo was selling—and, maybe more importantly, banking on.
This story was supported by the Pulitzer Center. Read the rest of our Border Patrol investigation here.
Elon Musk’s X Is Under Scrutiny for Disinformation Targeting Kamala Harris
Elon Musk has said he wants X, formerly known as Twitter, to be the “public square” of the internet, an essential place for discourse and democracy. But there’s a major problem: Disinformation is running rampant on X in the lead-up to the November election, including content targeting Democratic presidential candidate Kamala Harris. As Mother Jones reported on Sunday, that content includes deepfakes shared by Musk himself and by an account that reposts Donald Trump’s feed from Truth Social. Musk, who has owned X since fall 2022, has endorsed Trump for president.
Leading election officials have grown concerned about potential harm from X. On Monday, five secretaries of state—those from Minnesota, Pennsylvania, Michigan, Washington, and New Mexico—sent Musk a letter demanding he “immediately implement changes” to Grok, the AI-powered search assistant available to premium subscribers on X, after it informed users that Harris declared her candidacy too late to appear on ballots in nine states. That is false.
The letter states that the false information was shared “repeatedly in multiple posts—reaching millions of people,” and continually disseminated by Grok until it was finally corrected, 10 days after Biden dropped out of the race and endorsed Harris’s candidacy. Minnesota Secretary of State Steve Simon, who spearheaded the letter-writing initiative, told the Washington Post: “This is a case where the owner of the public square (the social media company itself) is the one who introduced and spread the bad information—and then delayed correcting its own mistake after it knew that the information was false.”
Simon and the other signatories represent five of the nine states that were the subject of the disinformation; the others are Indiana, Alabama, Ohio, and Texas. (A spokesperson for Simon said his office offered for all nine states to take part in the letter.) Four of the letter’s signatories are Democratic secretaries of state; the fifth, Al Schmidt of Pennsylvania, is a Republican.
The letter asks Musk to follow OpenAI’s lead by directing Grok users to CanIVote.org, a nonpartisan website focused on voter registration, when users ask about elections. But that seems unlikely, given that Musk previously described OpenAI’s ChatGPT as being “deeply ingrained” with what he calls the “woke mind virus.” And when X launched Grok last November, the company described it as being designed to “answer spicy questions that are rejected by most other AI systems.”
Spokespeople for X and the Harris campaign did not respond to requests for comment from Mother Jones regarding the letter to Musk.
In recent days, two phony videos of Harris have circulated on X—thanks to Trump and Musk. As I detailed on Sunday, those videos have racked up tens of millions of views despite appearing to be in violation of X’s own terms of service, which prohibit the sharing of “synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm.”
The first, a doctored video known as a “deepfake,” was shared by Musk himself on July 26, and featured fake audio that depicted Harris calling herself “the ultimate diversity hire” and degrading President Biden. The post remains up on Musk’s account—where he calls the video “amazing,” alongside a laughing emoji—and has drawn more than 134 million views.
The second deepfake features doctored video of Harris derived from remarks she made after the release on Friday of Americans wrongfully imprisoned by Russia. Trump shared the phony video on Truth Social on Saturday; it was soon reshared on X by an account that posts Trump’s Truth Social content verbatim. The video had received more than 764,000 views on X as of Tuesday.
It is unclear who doctored and first posted that phony video on Truth Social. In response to specific questions from Mother Jones on Sunday about the deepfake, Trump spokesman Steven Cheung replied, “your phone or computer must be fucked up because the audio/video matches up.”
But by Monday afternoon—about 24 hours after Mother Jones first inquired with the Trump campaign and X about the deepfake—X labeled it “manipulated media,” leaving it online. (The platform’s policy says it adds such labels when it does not remove content, a step it only takes in instances of “high-severity violations.”)
Cheung did not respond to a follow-up question about the “manipulated media” label.
As I also reported on Sunday, the doctored videos and the false election claims from Grok are not the first disinformation targeting Harris on X. They are unlikely to be the last, in light of Musk’s full-throated support for Trump and penchant for provocation. On Tuesday, Musk declared “war” on advertisers that X alleges illegally boycotted the platform over politics.
Elon Musk’s X Is Spreading Deepfakes of Kamala Harris
For the second time in less than two weeks, a doctored video of Vice President Kamala Harris has spread widely on Elon Musk’s social media platform X.
A video known as a “deepfake” that was posted on X on Saturday appears to show Harris repeating herself over and over again, using a crude audio rendering made to seem like Harris is struggling to finish a complete sentence. The altered video uses footage from an appearance by Harris and President Joe Biden following Friday’s historic prisoner swap that freed Wall Street Journal reporter Evan Gershkovich and others. The video is obviously manipulated and easily debunked by viewing the unaltered footage (you can watch that here at about the 1:30 mark), which shows Harris speaking smoothly, without repeating the same words and phrases as portrayed in the doctored video.
The video, whose origin is unclear, was posted by Trump himself on Truth Social on Saturday, accompanied by a rant in which he calls Harris “DUMB!” and “extremely Low IQ.” The video was soon re-shared on X by an account that posts content verbatim from Trump’s feed on Truth Social. That account on X has more than 800,000 followers, and, as of late Sunday, the post containing the Harris deepfake had drawn more than 620,000 views.
The video appears to be in violation of X’s terms of service, which prohibit the sharing of “synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm.” (It also appears to violate Truth Social’s terms of service, which require that posts “are not false, inaccurate, or misleading”—a tall order, perhaps, given that platform’s owner.)
This latest deepfake comes after one shared by Musk himself eight days earlier, which doctored a high-profile Harris political ad titled “Freedom.” In that one, the fake audio using Harris’s voice depicted her calling herself “the ultimate diversity hire” and degrading President Biden. “This is amazing,” Musk wrote in his post sharing the video, accompanied by a laughing emoji. Musk’s post, still online, has received more than 134 million views.
When asked for comment about the latest video falsely showing Harris garbling her words, Trump spokesman Steven Cheung doubled down on the content, claiming that the obviously phony video was authentic. (“Your phone or computer must be fucked up,” he said.) Cheung did not respond to questions about who may have doctored the video and whether Trump’s post on Truth Social violates that platform’s terms of service.
Spokespeople for X did not respond to emailed questions about these videos violating the site’s terms of service, whether X is taking any steps to crack down on deepfakes, or why the one posted by Musk on July 26 remains online.
Other forms of disinformation targeting Harris, including racist and misogynistic content, have proliferated across social media. But the reach personally enjoyed by Musk and Trump through the platforms they own is bigger than that of most.
As Harris has been steadily gaining on Trump in recent polls, the spread of deepfakes targeting her on X seems no coincidence, especially given Musk’s evolving views about Trump. He once declared the ex-president too old to hold office again. Now Trump has Musk’s “full endorsement.”
Thank God Google Pulled That Awful and Depressing AI Ad
It was about this time last week when I first saw that ad for Gemini, Google’s new artificial-intelligence chatbot. If you’ve been watching much of the Paris Olympics, you know the one I’m thinking of. The spot, called “Dear Sydney,” features a father whose daughter idolizes the American hurdler Sydney McLaughlin-Levrone. His daughter “wants to show Sydney some love,” and while he describes himself as “pretty good with words,” the dad adds, “This has to be just right.”
“Gemini, help my daughter write a letter telling Sydney how inspiring she is,” he types, “and be sure to mention that my daughter plans on breaking her world record one day.”
I have never seen an ad that made me so thoroughly depressed about the product it was selling, and I watch Trump ads for my job. Why would you get a robot to write a fan letter from your daughter? What other meaningful personal interactions are we supposed to want to swap out with a multimodal large language model?
And apparently I’m not alone. For days I kept seeing people bring up the ad. It “takes a little chunk out of my soul every time I see it,” New York magazine contributing editor Will Leitch wrote, in one representative take. On Friday, Hollywood Reporter offered some good news: Google was taking the spot off the airwaves. We did it, Joe.
When I watched it, I was struck not by the obvious soullessness but by the collective arrogance that went into making it. This was the outward expression of an industry that seemingly has no self-awareness of the considerable misgivings people have about it, or simply doesn’t care. “Dear Sydney” was as honest as it was bleak: This is what the people pushing AI like about AI. These are the people who watched Her but missed the point.
The use case for this kind of AI, at this particular moment, is to take the work that a human can do with care and replace it with a bot that can neither feel nor think. In a lot of cases, the hope—the hope!—is that jobs people do now will not exist. But in plenty of other cases, it will just make the jobs that people do have a little bit more soul-crushing. It is grim but understandable that tech oligarchs find this desirable. But it portends something far darker about the world, I think, if it turns out that vast numbers of people really are clamoring for “art” without artists, “news” without news outlets, and letters from children without the children.
Just a few months ago, a similar ad from Apple was pulled after producing a similar response. That one featured a hydraulic press crushing various tools of human creativity—paint, musical instruments, sculptures. These AI products, such as they are, are aimed at people who wish they could outsource and rip off the things that actually make us human. All you can do is keep shouting at these weirdos until they retreat.
Rideshare to Vote? Not So Easy When You’re Disabled.
In 2020, Sharon Giovinazzo, who is blind as a complication of multiple sclerosis, wanted to vote independently—and in person. She knew that electronic voting machines in Little Rock, Arkansas, then her home, were her only option.
Giovinazzo called an Uber to take her and her guide dog to the polls. The first three drivers canceled on her. Giovinazzo, now CEO of LightHouse for the Blind and Visually Impaired in San Francisco, knew that was a possibility.
“You lose that autonomy of just being able to go where you want, when you want,” Giovinazzo said, “and do what you want.” (As an unfortunate bonus, the accessible voting machine at her polling place wasn’t working—despite her efforts, someone had to help her cast her ballot.)
As more and more states clamp down on mail-in voting—in Texas, for instance, where it’s difficult to vote by mail, ballots can be rejected if a poll worker thinks their signature doesn’t match one on file—there is a greater urgency to make it physically practical to get to voting booths, even among people comfortable with absentee voting. In addition, a joint survey by the US Election Assistance Commission and Rutgers University following the 2022 midterms found that nearly half of disabled voters prefer voting in person.
Scheduling paratransit to the polls is one option—but in rural areas, that can also be more challenging than it should be, says Michelle Bishop, manager of voter access at the National Disability Rights Network.
“Even if you can get a pickup to take you to your polling place, you have no idea when you’re going to need a ride home,” Bishop said. “That’s something that you would have had to have scheduled well in advance.”
RideShare2Vote was founded by Sarah Kovich and her daughter Paola in 2018 in Texas to help Democrats and left-leaning independents vote. In addition to utilizing regular cars, RideShare2Vote rents out accessible vans to take voters to the polls free of charge, including in rural areas. Drivers also receive training to understand voting rights. “Every voter that a [Rideshare2Vote] driver has ever taken has been able to cast a ballot with us,” Kovich said. “No one’s been turned away.”
The organization operates in more than a dozen states—mostly Republican strongholds and swing states. But, again, those plans need to be mapped out before voting day—and its budget only lets it take some 12,500 voters to the polls in a given election year.
In an ideal world, rideshare apps like Lyft and Uber could be a great alternative for people without ready access to paratransit. But the issue remains that drivers cancel on riders (which has been the subject of court settlements), and accessible vehicles are not readily available. One wheelchair user seeking to vote in person also shared a video with Mother Jones of a driver refusing them a ride because of their wheelchair. A 2018 report from New York Lawyers for the Public Interest found that, when requested, barely half of New York City riders received accessible vehicles from Uber—and fewer than five percent did from Lyft.
Training drivers not to discriminate against disabled people, and having accessible vehicles available, should be universal across rideshare firms. But it isn’t.
In statements to Mother Jones, both Uber and Lyft said they didn’t tolerate discrimination, and that they encourage disabled riders who have experienced it to submit reports. But it happens too often to report every time, and disabled people often face stigma for filing complaints, especially ones under the Americans with Disabilities Act.
Other organizations have also partnered with rideshare companies to get people to the polls: this year, the NAACP is partnering with Lyft to do just that for Black voters, who often face disenfranchisement. Asked how the NAACP would work with Lyft to make sure Black disabled voters weren’t turned away, its national mobilization director, Tyler Sterling, said in a statement that the organization is “working closely […] to ensure their drivers are equipped with the necessary cultural competency” to help all Black voters looking to participate.
The expense of transportation means there’s no simple, perfect solution to help disabled people vote, said Bishop, of the National Disability Rights Network. Bishop says that makes it crucial to fight for a voting system “where we have just a whole menu of options for voters, and they can figure out what makes it work for them”—a challenge driven not only by expense but also by ableism and rising voter suppression.
Is AI Really an Existential Threat to Humanity?
Artificial intelligence, we have been told, is all but guaranteed to change everything. Often, it is foretold as bringing a series of woes: “extinction,” “doom,” AI “killing us all.” US lawmakers have warned of potential “biological, chemical, cyber, or nuclear” perils associated with advanced AI models, and a study on “catastrophic risks” commissioned by the State Department urged the federal government to intervene and enact safeguards against the weaponization and uncontrolled use of this rapidly evolving technology. Employees at some of the main AI labs have made their safety concerns public, and experts in the field, including the so-called “godfathers of AI,” have argued that “mitigating the risk of extinction from AI” should be a global priority.
Advancements in AI capabilities have heightened fears of the possible elimination of certain jobs and the misuse of the technology to spread disinformation and interfere in elections. These developments have also brought about anxiety over a hypothetical future in which artificial general intelligence systems can outperform humans and, in the worst case, exterminate humankind.
But the conversation around the disruptive potential of artificial intelligence, argues AI researcher Blaise Agüera y Arcas, CTO of Technology & Society at Google and author of Who Are We Now?, a data-driven book about human identity and behavior, shouldn’t be polarized between AI doomers and deniers. “Both perspectives are rooted in zero-sum,” he writes in the Guardian, “us-versus-them thinking.”
So how worried should we really be? I posed that question to Agüera y Arcas, who sat down with Mother Jones at the Aspen Ideas Festival last month to talk about the future of AI and how we should think about it.
This conversation has been edited for length and clarity.
You work at a big tech company. Why did you feel compelled to study humanity, behavior, and identity?
My feeling about big AI models is that they are human intelligence, they’re not separate. There were a lot of people in the industry and in AI who thought that we would get to general purpose, powerful AI through systems that were very good at playing a really good game of chess or whatever. That turned out not to be the case. The way we finally got there is by literally modeling human interaction and content on the internet. The internet is obviously not a perfect mirror of us, it has many flaws. But it is basically humanity. It’s literally modeling humanity that yields general intelligence. That is both worrisome and reassuring. It’s reassuring that it’s not an alien. It’s all too familiar. And it’s worrisome because it inherits all of our flaws.
In an article you co-authored titled “The Illusion of AI’s Existential Risk,” you write that “harm and even massive death from misuse of (non-superintelligent) AI is a real possibility and extinction via superintelligent rogue AI is not an impossibility.” How worried should we be?
I’m an optimist, but also a worrier. My top two worries right now for humanity and for the planet are nuclear war and climate collapse. We don’t know if we’re dancing close to the edge of the cliff. One of my big frustrations with the whole AI existential risk conversation is that it’s so distracting from these things that are real and in front of us right now. More intelligence is actually what we need in order to address those very problems, not less intelligence.
The idea that somehow more intelligence is a threat feels to me like it comes more than anything else from our primate brains of dominance hierarchy. We are the top dog now, but maybe AI will be the top dog. And I just think this is such bullshit.
AI is so integral already to computers and it will become even more so in the coming years. I have a lot of concerns about democracy, disinformation and mass hacking, cyber warfare, and lots of other things. There’s no shortage of things to be concerned about. Very few of them strike me as being potential species enders. They strike me as things that we really have to think about with respect to what kind of lifestyle we want, how we want to live, and what our values are.
The biggest problem now is not so much how do we make AI models follow ethical injunctions as who gets to make those? What are the rules? And those are not so much AI problems as they are the problems of democracy and governance. They’re deep and we need to address them.
In that same article, you talk about AI’s disruptive dangers to society today, including the breakdown of social fabric and democracy. There also are concerns about the carbon footprint required to develop and maintain data centers, defamatory content and copyright infringement issues, and disruptions in journalism. What are the present dangers you see and do the benefits outweigh the potential harms?
We’re imagining that we’ll be able to really draw a distinction between AI content and non-AI content, but I’m not really sure that will be the case. In many cases, AI is going to be really helpful for people who don’t speak a language or who have sensory deficits or cognitive deficits. As more and more of us begin to work with AI in various ways, I think drawing those distinctions is going to become really hard. It’s hard for me to imagine that the benefits are not really big. But I can also imagine sort of conditions conspiring to make things work out poorly for us. We need to be distributing the gains that we’re getting from a lot of these technologies more broadly. And we need to be putting our money where our hearts are.
Is AI going to create new industries and jobs, as opposed to making existing ones obsolete and replaceable?
The labor question is really complex and the jury is very much still out about how many jobs will be replaced, changed, improved, or created. We don't know. But I'm not even sure that the terms of that debate are right. We wouldn't be interested in a lot of these AI capabilities if they didn't do stuff that is useful to us. But with capitalism configured the way it is, we are requiring that people do "economically useful" work, or they don't eat. Something seems to be screwy about this.
If we're entering an era of potentially such abundance that a lot of people don't have to work, and yet the consequence of that is that a lot of people starve, something's very wrong with the way we've set things up. Is that a problem with AI? Not really. But it's certainly a problem that AI could bring about if the whole sociotechnical system is not changed. I don't know that capitalism and labor as we've thought about them are sophisticated enough to deal with the world that we'll be living in in 40 years' time.
There has been some reporting that paints a picture of companies that are developing these technologies as divided between people who want to take it to the limit without much regard for potential consequences, and then those who are perhaps more sensitive to such concerns. Is that the reality of what you see in the industry?
Just like with other culture-war issues, there's a kind of polarization taking place. And the two poles are weird. One of them I would call AI existential risk, or AI safety. The other I would almost call AI abolition, or the anti-AI movement—one that often claims that AI is neither artificial nor intelligent, that it's just a way to bolster capital at the expense of labor. It sounds almost religious, right? It's either the rapture or the apocalypse. AI is real. It's not just some kind of party trick or hype. I get quite frustrated by a lot of the way that I see those concerns raised from both sides. It's unfortunate because a lot of the real issues with AI are so much more nuanced and require much more care in how they're analyzed.
Current and former employees at AI development companies, including Google, have signed a letter calling for whistleblower protections so that they can publicly raise concerns about the potential risks of these technologies. Do you worry that there isn't enough transparency in the development of AI, and should the public at large trust big companies and powerful individuals to rein it in?
No. Should people trust corporations to just make everything better for everybody? Of course not. I think that the intentions of the corporations have often not really been the determinant of whether things go well or badly. It’s often very difficult to tell what the long-term consequences are going to be of a thing.
Think about the internet, which was the last really big change. I think AI is a bigger change than the internet. If we'd had the same conversation about the internet in 1992, should we trust the companies that were building the computers, the wires, and later on the fiber? Should we trust that they have our interests at heart? How should we hold them to account? What laws should be passed? Even with everything we know now, what could we have told humans in 1992 to do? I'm not sure.
The internet was a mixed blessing. Some things probably should have been regulated differently. But none of the regulations we were thinking of at the time were the right ones. I think that a lot of our concerns at that time turned out to be the wrong concerns. I worry that we’re in a similar situation now. I’m not saying that I think we should not regulate AI. But when I look at the actual rules and policies being proposed, I have very low confidence that any of them will actually make life better for anybody in 10 years.
The Audacity of Elon Musk’s $180 Million Pledge to Elect Donald Trump
This story was originally published on Judd Legum’s Substack, Popular Information, to which you can subscribe here.
Billionaire Elon Musk has pledged to give $45 million a month to a Super PAC that’s encouraging swing-state supporters of former President Donald Trump to vote absentee, a practice Musk has publicly described as “insane,” “too risky,” and a recipe for “large-scale fraud.”
Trump has advanced a variety of wild conspiracy theories to falsely claim that he won the 2020 presidential election. One of his favorites is the false claim that supporters of President Joe Biden had stuffed dropboxes with falsified absentee ballots. Right-wing polemicist Dinesh D’Souza created a 90-minute documentary, 2000 Mules, devoted to the baseless allegation about absentee ballots. D’Souza’s documentary was so shoddy that its distributor, Salem Media Group, ultimately pulled the film from the market and issued an apology.
Numerous studies have found that voting by mail is “safe and secure.” A database maintained by the right-wing Heritage Foundation, which supports restrictions on mail-in voting, reported “1,200 cases of vote fraud of all forms” from 2000 to 2020. Of those cases, “204 involved the fraudulent use of absentee ballots.” This amounts to “one case [of fraud using mail-in ballots] per state every six or seven years,” or “about 0.00006 percent of total votes cast.”
Nevertheless, for months, Musk has used X to spread misinformation about absentee ballots to his nearly 190 million followers. Here’s a sampling:
On January 8, Musk wrote that it was “insane” that you can “mail in your ballot” in the United States. The same day, he said the government should require “in-person voting” on a single day, with few exceptions, “like other countries do.”
On February 2, Musk asserted that the Biden campaign was involved in a scheme to submit a massive number of fake absentee ballots.
On February 5, Musk said that absentee ballots are used for widespread fraud because the nature of absentee ballots made “fraud traceability impossible.”
On May 9, Musk falsely claimed that “widespread voting by mail” was “not allowed before the scamdemic” and now “proving fraud [is] almost impossible.”
On July 9, Musk said that we “should mandate paper ballots and in-person voting only” because “anything mailed in is too risky.” In another post the same day, Musk said that “[m]ail-in and drop box ballots should not be allowed” because they facilitate “large-scale fraud.”
Musk’s X account has become the social media equivalent of 2000 Mules, grasping to justify Trump’s lies about the last presidential election. Musk’s diatribes against absentee voting have occurred as Trump and Musk are reportedly “developing a friendly rapport and talk on the phone several times a month as the election nears.”
As a result, it’s not entirely surprising that, on Tuesday, Bloomberg reported that Musk would be donating $45 million a month to a Super PAC supporting Trump’s campaign. (Musk previously pledged not to donate to Trump or Biden.) The contributions to America PAC, which was formed in May, would make Musk the biggest financial backer of Trump in 2024 and one of the largest political donors of all time.
How will America PAC spend Musk’s money? According to the Wall Street Journal, it will focus on “persuading constituents to vote early and request mail-in ballots in swing states.” Already, America PAC has hired hundreds of workers who are “having conversations with constituents in swing states and urging voters to request mail-in ballots.”
The America PAC website encourages Trump supporters to “vote early in person or by mail.” It includes a link for voters to request an absentee ballot. So, on the one hand, Musk is telling millions of people that absentee ballots are “insane” and a vehicle for “large-scale fraud.” On the other hand, he may spend up to $180 million, by some estimates, to encourage more Trump supporters to vote absentee.
Musk’s large donations to America PAC will position it to aggressively exploit a new loophole in federal campaign finance law. A Super PAC can raise and spend unlimited amounts of money to support (or oppose) a federal candidate. But, as a general rule, it cannot coordinate directly with a candidate’s campaign. Earlier this year, however, the Federal Election Commission (FEC) created a significant exception.
In a March 20 advisory opinion, the FEC decided that “canvassing literature and scripts are not public communications, and as a result are not coordinated communications under Commission regulations.” That means that America PAC, which is focusing on canvassing to increase absentee voting in swing states, can coordinate its messaging directly with the Trump campaign.
The FEC made this determination based on the idea that canvassing is "a traditional grassroots activity fundamentally different" from mass mailings or television advertisements. Now this loophole will allow the Trump campaign to effectively control messaging backed by hundreds of millions of dollars in donations from Musk and a few other very wealthy people.
The FEC is composed of three Republican and three Democratic commissioners. For years, that even split deadlocked the commission, preventing it from issuing any significant rules or opinions. That has changed in recent months, as Commissioner Dara Lindenbaum, a Democrat appointed by President Biden in 2022, has repeatedly sided with the three Republican commissioners to weaken campaign finance regulations.