Meta’s AI leaders want you to know fears over AI existential risk are “ridiculous”
https://www.technologyreview.com/2023/06/20/1075075/metas-ai-leaders-want-you-to-know-fears-over-ai-existential-risk-are-ridiculous/
Tue, 20 Jun 2023 08:18:11 +0000

It’s a really weird time in AI. In just six months, the public discourse around the technology has gone from “Chatbots generate funny sea shanties” to “AI systems could cause human extinction.” Who else is feeling whiplash? 

My colleague Will Douglas Heaven asked AI experts why exactly people are talking about existential risk, and why now. Meredith Whittaker, president of the Signal Foundation (which is behind the private messaging app Signal) and a former Google researcher, sums it up nicely: “Ghost stories are contagious. It’s really exciting and stimulating to be afraid.” 

We’ve been here before, of course: AI doom follows AI hype. But this time feels different. The Overton window has shifted in discussions around AI risks and policy. What was once an extreme view is now a mainstream talking point, grabbing not only headlines but the attention of world leaders.

Read more from Will here.

Whittaker is not the only one who thinks this. While influential people in Big Tech companies such as Google and Microsoft, and AI startups like OpenAI, have gone all in on warning people about extreme AI risks and closing their AI models off from public scrutiny, Meta is going the other way. 

Last week, on one of the hottest days of the year so far, I went to Meta’s Paris HQ to hear about the company’s recent AI work. As we sipped champagne on a rooftop with views of the Eiffel Tower, Meta’s chief AI scientist, Yann LeCun, a Turing Award winner, told us about his hobbies, which include building electronic wind instruments. But he was really there to talk about why he thinks the idea that a superintelligent AI system will take over the world is “preposterously ridiculous.” 

People are worried about AI systems that “are going to be able to recruit all the resources in the world to transform the universe into paper clips,” LeCun said. “That’s just insane.” (He was referring to the “paper clip maximizer problem,” a thought experiment in which an AI asked to make as many paper clips as possible does so in ways that ultimately harm humans, while still fulfilling its main objective.) 

He is in stark opposition to Geoffrey Hinton and Yoshua Bengio, two pioneering AI researchers (and the two other “godfathers of AI”), who shared the Turing Award with LeCun. Both have recently become outspoken about existential AI risk.

Joelle Pineau, Meta’s vice president of AI research, agrees with LeCun. She calls the conversation “unhinged.” The extreme focus on future risks does not leave much bandwidth to talk about current AI harms, she says. 

“When you start looking at ways to have a rational discussion about risk, you usually look at the probability of an outcome and you multiply it by the cost of that outcome. [The existential-risk crowd] have essentially put an infinite cost on that outcome,” says Pineau. 

“When you put an infinite cost, you can’t have any rational discussions about any other outcomes. And that takes the oxygen out of the room for any other discussion, which I think is too bad.”
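
To make the arithmetic behind that argument concrete, here is a minimal sketch in Python. Everything in it is invented for illustration: the outcomes, probabilities, and costs are placeholders rather than anyone’s real estimates, chosen only to show how an infinite cost swamps every comparison.

```python
# A minimal sketch of the expected-cost reasoning Pineau describes.
# All probabilities and costs below are invented placeholders.
outcomes = {
    "discriminatory hiring tools": {"probability": 0.9, "cost": 1e9},
    "large-scale misinformation":  {"probability": 0.8, "cost": 5e9},
    "human extinction":            {"probability": 1e-9, "cost": float("inf")},
}

for name, outcome in outcomes.items():
    # Standard risk weighting: expected cost = probability of the outcome times its cost.
    expected = outcome["probability"] * outcome["cost"]
    print(f"{name:30s} expected cost = {expected}")

# Any positive probability multiplied by an infinite cost is still infinite, so the
# extinction scenario dominates every ranking no matter how unlikely it is made,
# and comparisons with all other outcomes stop being informative.
```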

While talking about existential risk is a signal that tech people are aware of AI risks, tech doomers have a bigger ulterior motive, LeCun and Pineau say: influencing the laws that govern tech. 

“At the moment, OpenAI is in a position where they are ahead, so the right thing to do is to slam the door behind you,” says LeCun. “Do we want a future in which AI systems are essentially transparent in their functioning or are … proprietary and owned by a small number of tech companies on the West Coast of the US?” 

What was clear from my conversations with Pineau and LeCun was that Meta, which has been slower than competitors to roll out cutting-edge models and generative AI in products, is banking on its open-source approach to give it an edge in an increasingly competitive AI market. Meta is, for example, open-sourcing its first model in keeping with LeCun’s vision of how to build AI systems with human-level intelligence.

Open-sourcing technology sets a high bar, as it lets outsiders find faults and hold companies accountable, Pineau says. But it also helps Meta’s technologies become a more integral part of the infrastructure of the internet. 

“When you actually share your technology, you have the ability to drive the way in which technology will then be done,” she says. 

Deeper Learning

Five big takeaways from Europe’s AI Act

It’s crunch time for the AI Act. Last week, the European Parliament voted to approve its draft rules. My colleague Tate Ryan-Mosley has five takeaways from the proposal. The parliament would like the AI Act to include a total ban on real-time biometrics and predictive policing in public spaces, transparency obligations for large AI models, and a ban on the scraping of copyrighted material. It also classifies recommendation algorithms as “high risk” AI that requires stricter regulation. 

What happens next? This does not mean the EU is going to adopt these policies outright. Next, members of the European Parliament will have to thrash out details with the Council of the European Union and the EU’s executive arm, the European Commission, before the draft rules become law. The final legislation will be a compromise between three different drafts from the three institutions. European lawmakers are aiming to get the AI Act in final shape by December, and the regulation should be in force by 2026. 

You can read my previous piece on the AI Act here.

Bits and Bytes

A fight over facial recognition will make or break the AI Act
Whether to ban the use of facial recognition software in public places will be the biggest fight in the final negotiations for the AI Act. Members of the European Parliament want a complete ban on the technology, while EU countries want the freedom to use it in policing. (Politico)

AI researchers sign a letter calling for focus on current AI harms
Another open letter! This one comes from AI researchers at the ACM conference on Fairness, Accountability, and Transparency (FAccT), calling on policymakers to use existing tools to “design, audit, or resist AI systems to protect democracy, social justice, and human rights.” Signatories include Alondra Nelson and Suresh Venkatasubramanian, who wrote the White House’s AI Bill of Rights.

The UK wants to be a global hub for AI regulation
The UK’s prime minister, Rishi Sunak, pitched his country as the global home of artificial-intelligence regulation. Sunak’s hope is that the UK could offer a “third way” between the EU’s AI Act and the US’s Wild West. Sunak is hosting an AI regulation summit in London in the fall. I’m skeptical. The UK can try, but ultimately its AI companies will be forced to comply with the EU’s AI Act if they want to do business in the influential trading bloc. (Time)

YouTube could give Google an edge in AI
Google has been tapping into the rich video repository of its video site YouTube to train its next large language model. This material could help Google train a model that can generate not only text but audio and video too. Apparently this is not lost on OpenAI, which has been secretly using YouTube data to train its AI models. (The Information)

A four-week-old AI startup raised €105 million
Talk about AI hype. Mistral, a brand-new French AI startup with no products and barely any employees, has managed to raise €105 million in Europe’s largest-ever seed round. The founders of the company previously worked at DeepMind and Meta. Two of them were behind the team that developed Meta’s open-source Llama language model. (Financial Times)

How existential risk became the biggest meme in AI
https://www.technologyreview.com/2023/06/19/1075140/how-existential-risk-became-biggest-meme-in-ai/
Mon, 19 Jun 2023 15:42:24 +0000

Who’s afraid of the big bad bots? A lot of people, it seems. The number of high-profile names that have now made public pronouncements or signed open letters warning of the catastrophic dangers of artificial intelligence is striking.

Hundreds of scientists, business leaders, and policymakers have spoken up, from deep learning pioneers Geoffrey Hinton and Yoshua Bengio to the CEOs of top AI firms, such as Sam Altman and Demis Hassabis, to the California congressman Ted Lieu and the former president of Estonia Kersti Kaljulaid.

The starkest assertion, signed by all those figures and many more, is a 22-word statement put out two weeks ago by the Center for AI Safety (CAIS), an agenda-pushing research organization based in San Francisco. It proclaims: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The wording is deliberate. “If we were going for a Rorschach-test type of statement, we would have said ‘existential risk’ because that can mean a lot of things to a lot of different people,” says CAIS director Dan Hendrycks. But they wanted to be clear: this was not about tanking the economy. “That’s why we went with ‘risk of extinction’ even though a lot of us are concerned with various other risks as well,” says Hendrycks.

We’ve been here before: AI doom follows AI hype. But this time feels different. The Overton window has shifted. What were once extreme views are now mainstream talking points, grabbing not only headlines but the attention of world leaders. “The chorus of voices raising concerns about AI has simply gotten too loud to be ignored,” says Jenna Burrell, director of research at Data and Society, an organization that studies the social implications of technology.

What’s going on? Has AI really become (more) dangerous? And why are the people who ushered in this tech now the ones raising the alarm?   

It’s true that these views split the field. Last week, Yann LeCun, chief scientist at Meta and joint recipient with Hinton and Bengio of the 2018 Turing Award, called the doomerism “preposterous.” Aidan Gomez, CEO of the AI firm Cohere, said it was “an absurd use of our time.”

Others scoff too. “There’s no more evidence now than there was in 1950 that AI is going to pose these existential risks,” says Signal president Meredith Whittaker, who is cofounder and former director of the AI Now Institute, a research lab that studies the social and policy implications of artificial intelligence. “Ghost stories are contagious—it’s really exciting and stimulating to be afraid.”

“It is also a way to skim over everything that’s happening in the present day,” says Burrell. “It suggests that we haven’t seen real or serious harm yet.”

An old fear

Concerns about runaway, self-improving machines have been around since Alan Turing. Futurists like Vernor Vinge and Ray Kurzweil popularized these ideas with talk of the so-called Singularity, a hypothetical date at which artificial intelligence outstrips human intelligence and machines take over. 

But at the heart of such concerns is the question of control: How do humans stay on top if (or when) machines get smarter? In a paper called “How Does Artificial Intelligence Pose an Existential Risk?” published in 2017, Karina Vold, a philosopher of artificial intelligence at the University of Toronto (who signed the CAIS statement), lays out the basic argument behind the fears.    

There are three key premises. One, it’s possible that humans will build a superintelligent machine that can outsmart all other intelligences. Two, it’s possible that we will not be able to control a superintelligence that can outsmart us. And three, it’s possible that a superintelligence will do things that we do not want it to.

Putting all that together, it is possible to build a machine that will do things that we don’t want it to—up to and including wiping us out—and we will not be able to stop it.   

There are different flavors of this scenario. When Hinton raised his concerns about AI in May, he gave the example of robots rerouting the power grid to give themselves more power. But superintelligence (or AGI) is not necessarily required. Dumb machines, given too much leeway, could be disastrous too. Many scenarios involve thoughtless or malicious deployment rather than self-interested bots. 

In a paper posted online last week, Stuart Russell and Andrew Critch, AI researchers at the University of California, Berkeley (who also both signed the CAIS statement), give a taxonomy of existential risks. These range from a viral advice-giving chatbot telling millions of people to drop out of college to autonomous industries that pursue their own harmful economic ends to nation-states building AI-powered superweapons.

In many imagined cases, a theoretical model fulfills its human-given goal but does so in a way that works against us. For Hendrycks, who studied how deep-learning models can sometimes behave in unexpected and undesirable ways when given inputs not seen in their training data, an AI system could be disastrous because it is broken rather than all-powerful. “If you give it a goal and it finds alien solutions to it, it’s going to take us for a weird ride,” he says.
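
A toy example makes the out-of-distribution point easier to picture. The sketch below is a hypothetical illustration, not anything from Hendrycks’s research, and it uses a simple scikit-learn classifier as a stand-in for a deep-learning model: trained only on inputs between 0 and 1, it still returns near-certain answers for inputs it has never seen anything like.

```python
# A toy illustration (not from Hendrycks's work) of a model answering
# confidently on inputs far outside its training distribution.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.uniform(0.0, 1.0, size=(200, 1))   # all training inputs lie in [0, 1]
y_train = (X_train[:, 0] > 0.5).astype(int)      # label: is the value above 0.5?

model = LogisticRegression().fit(X_train, y_train)

# Inputs nothing like the training data.
X_far = np.array([[50.0], [-50.0]])
print(model.predict_proba(X_far))
# The model reports near-certain class probabilities for both queries, even
# though it has no training evidence about this region of the input space.
```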

The problem with these possible futures is that they rest on a string of what-ifs, which makes them sound like science fiction. Vold acknowledges this herself. “Because events that constitute or precipitate an [existential risk] are unprecedented, arguments to the effect that they pose such a threat must be theoretical in nature,” she writes. “Their rarity also makes it such that any speculations about how or when such events might occur are subjective and not empirically verifiable.”

So why are more people taking these ideas at face value than ever before? “Different people talk about risk for different reasons, and they may mean different things by it,” says François Chollet, an AI researcher at Google. But it is also a narrative that’s hard to resist: “Existential risk has always been a good story.”

“There’s a sort of mythological, almost religious element to this that can’t be discounted,” says Whittaker. “I think we need to recognize that what is being described, given that it has no basis in evidence, is much closer to an article of faith, a sort of religious fervor, than it is to scientific discourse.”

The doom contagion

When deep-learning researchers first started to rack up a series of successes—think of Hinton and his colleagues’ record-breaking image-recognition scores in the ImageNet competition in 2012 and DeepMind’s first wins against human champions with AlphaGo in 2015—the hype soon turned to doomsaying then too. Celebrity scientists, such as Stephen Hawking and fellow cosmologist Martin Rees, as well as celebrity tech leaders like Elon Musk, raised the alarm about existential risk. But these figures weren’t AI experts.   

Eight years ago, AI pioneer Andrew Ng, who was chief scientist at Baidu at the time, stood on a stage in San Jose and laughed off the entire idea. 

“There could be a race of killer robots in the far future,” Ng told the audience at Nvidia’s GPU Technology Conference in 2015. “But I don’t work on not turning AI evil today for the same reason I don’t worry about the problem of overpopulation on the planet Mars.” (Ng’s words were reported at the time by tech news website The Register.)

Ng, who cofounded Google’s AI lab in 2011 and is now CEO of Landing AI, has repeated the line in interviews since. But now he’s on the fence. “I’m keeping an open mind and am speaking with a few people to learn more,” he tells me. “The rapid pace of development has led scientists to rethink the risks.”

Like many, Ng is concerned by the rapid progress of generative AI and its potential for misuse. He notes that last month a widely shared AI-generated image of an explosion at the Pentagon spooked people so much that the stock market dropped.  

“With AI being so powerful, unfortunately it seems likely that it will also lead to massive problems,” says Ng. But he still stops short of killer robots: “Right now, I still struggle to see how AI can lead to our extinction.”

Something else that’s new is the widespread awareness of what AI can do. Earlier this year, ChatGPT brought this technology to the public. “AI is a popular topic in the mainstream all of a sudden,” says Chollet. “People are taking AI seriously because they see a sudden jump in capabilities as a harbinger of more future jumps.” 

The experience of conversing with a chatbot can also be unnerving. Conversation is typically understood as something people do with other people. “It added a kind of plausibility to the idea that AI was human-like or a sentient interlocutor,” says Whittaker. “I think it gave some purchase to the idea that if AI can simulate human communication, it could also do XYZ.”

“That is the opening that I see the existential risk conversation sort of fitting into—extrapolating without evidence,” she says.

There’s reason to be cynical, too. With regulators catching up to the tech industry, the issue on the table is what sorts of activity should and should not get constrained. Highlighting long-term risks rather than short-term harms (such as discriminatory hiring or misinformation) refocuses regulators’ attention on hypothetical problems down the line.

“I suspect the threat of genuine regulatory constraints has pushed people to take a position,” says Burrell. Talking about existential risks may validate regulators’ concerns without undermining business opportunities. “Superintelligent AI that turns on humanity sounds terrifying, but it’s also clearly not something that’s happened yet,” she says.

Inflating fears about existential risk is also good for business in other ways. Chollet points out that top AI firms need us to think that AGI is coming, and that they are the ones building it. “If you want people to think what you’re working on is powerful, it’s a good idea to make them fear it,” he says.

Whittaker takes a similar view. “It’s a significant thing to cast yourself as the creator of an entity that could be more powerful than human beings,” she says.

None of this would matter much if it were simply about marketing or hype. But deciding what the risks are, and what they’re not, has consequences. In a world where budgets and attention spans are limited, harms less extreme than nuclear war may get overlooked because we’ve decided they aren’t the priority.

“It’s an important question, especially with the growing focus on security and safety as the narrow frame for policy intervention,” says Sarah Myers West, managing director of the AI Now Institute.

When Prime Minister Rishi Sunak met with heads of AI firms, including Sam Altman and Demis Hassabis, in May, the UK government issued a statement saying: “The PM and CEOs discussed the risks of the technology, ranging from disinformation and national security, to existential threats.”

The week before, Altman told the US Senate his worst fear was that the AI industry would cause significant harm to the world. Altman’s testimony helped spark calls for a new kind of agency to address such unprecedented harm.

With the Overton window shifted, is the damage done? “If we’re talking about the far future, if we’re talking about mythological risks, then we are completely reframing the problem to be a problem that exists in a fantasy world and its solutions can exist in a fantasy world too,” says Whittaker.

But Whittaker points out that policy discussions around AI have been going on for years, longer than this recent buzz of fear. “I don’t believe in inevitability,” she says. “We will see a beating back of this hype. It will subside.”

The Download: building anti-aging hype, and exploring the universe with sound
https://www.technologyreview.com/2023/06/19/1075128/the-download-building-anti-aging-hype-and-exploring-the-universe-with-sound/
Mon, 19 Jun 2023 12:10:00 +0000

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Police got called to an overcrowded presentation on “rejuvenation” technology

It’s not every day that police storm through the doors of a scientific session and eject half the audience. But that’s what happened on Friday at the Boston Convention and Exhibition Center during a round of scientific presentations featuring Juan Carlos Izpisua Belmonte, a specialist in “rejuvenation” technology at a secretive, wealthy, anti-aging startup called Altos Labs.

Police ordered anyone without a seat to clear out, after an overflow crowd began jostling in the aisles for space and violating the building’s fire code. The brouhaha shows how excitement is building as researchers uncover the secrets of life. Some, like Belmonte, claim they’ll eventually radically extend it, by 40 years or more. Read the full story.

—Antonio Regalado

How sounds can turn us on to the wonders of the universe

Astronomy should, in principle, be a welcoming field for blind researchers. But across the board, science is full of charts, graphs, databases, and images that are designed to be seen.

So researcher Sarah Kane, who is legally blind, was thrilled three years ago when she encountered a technology known as sonification, designed to transform information into sound. Since then she’s been working with a project called Astronify, which presents astronomical information in audio form. 

For millions of blind and visually impaired people, sonification could be transformative—opening access to education, to once unimaginable careers, and even to the secrets of the universe. Read the full story.

—Corey S. Powell

Corey’s story is from the forthcoming print edition of MIT Technology Review, which is all about accessibility. If you haven’t already, subscribe to make sure you don’t miss out on future stories—subscriptions start from just $80 a year.

Five big takeaways from Europe’s AI Act

Last week was a big one for tech policy in Europe, after the European Parliament voted to approve draft rules for its AI Act on the same day EU lawmakers filed a new antitrust lawsuit against Google.

The AI Act vote passed with an overwhelming majority, and has been heralded as one of the world’s most important developments in AI regulation. 

But don’t hold your breath: it’ll take around two years before the laws are actually implemented. Tate Ryan-Mosley, our senior tech policy reporter, has dug into the major takeaways you should know about. Read the full story.

This story is from The Technocrat, Tate’s weekly newsletter covering policy and politics in Silicon Valley. Sign up to receive it in your inbox every Friday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Chatbots could usher in a new way of conversing
As their ability to simulate chat rapidly improves, our expectations are shifting. (New Yorker $)
+ AI’s ability to produce and spread disinformation is frightening. (Wired $)
+ But AI could also prove a useful validation tool. (FT $) 
+ Can’t be bothered to attend an event? Send an AI version of yourself instead. (NYT $)
+ The inside story of how ChatGPT was built from the people who made it. (MIT Technology Review)

2 Intel has agreed to build a chip factory in Israel
Chipmakers are increasingly keen to diversify their supply chains beyond China. (Bloomberg $)
+ It’s invested billions in a new German factory, too. (Economist $)
+ Meanwhile, Israel is working on a fiber-optic cable linking Europe and Asia. (Reuters)
+ These simple design rules could turn the chip industry on its head. (MIT Technology Review)

3 Xi Jinping wants to supercharge China’s military with tech
Which isn’t going to improve its already fraught relationship with western powers. (FT $)
+ China’s economy has had a roller coaster six months. (Economist $)
+ Why business is booming for military AI startups. (MIT Technology Review)

4 How Meta fumbled its early AI lead
Academic breakthroughs took priority over consumer-facing products. (WSJ $)
+ Scams are scarily rife on Meta’s platforms. (The Guardian)

5 People are taking abortion medication later in pregnancy
Laws restricting access to the pills are leaving some with no choice. (Vox)
+ The cognitive dissonance of watching the end of Roe unfold online. (MIT Technology Review)

6 Reddit’s communities are dying
The site’s management used to fear upsetting users. Now, they don’t seem to care. (NY Mag $)
+ Its CEO is happy to take the criticism, it seems. (The Verge)

7 What is crypto for, exactly?
A lot of its assets appear primed for activities on the wrong side of the law. (The Atlantic $)
+ It’s okay to opt out of the crypto revolution. (MIT Technology Review)

8 This star has been eating a planet for decades
FU Orionis is on course to fully consume it sometime in the next 300 years. (New Scientist $)

9 San Francisco is getting really into recycling water 💧
A word of warning: don’t drink it. (Wired $)

10 Road tripping with an EV isn’t without its challenges 🚘
Come prepared. (The Information $)
+ And expect to jostle for a charger. (The Atlantic $)

Quote of the day

“What’s the deal with airplane food? The flavors are so plain. And the prices are sky-high.”

—Comedian Geulah Finman reads out a joke written by ChatGPT during an AI comedy night in San Francisco, the Washington Post reports.

The big story

The future of urban housing is energy-efficient refrigerators

June 2022

The aging apartments under the purview of the New York City Housing Authority don’t scream innovation. The largest landlord in the city, housing nearly 1 in 16 New Yorkers, NYCHA has seen its buildings literally crumble after decades of neglect. It would require at least $40 billion to return the buildings to a state of good repair.

Despite the scale of the challenge, NYCHA is hoping to fix them. It has launched a Clean Heat for All Challenge, which asks manufacturers to develop low-cost, easy-to-install heat-pump technologies for building retrofits. The stakes for the agency, the winning company, and for society itself could be huge—and good for the planet. Read the full story.

—Patrick Sisson

We can still have nice things

A place for comfort, fun and distraction in these weird times. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Neanderthals came up with an ingenious idea for an early form of glue: tar from trees.
+ A Mac OS desktop blanket? I’m into it.
+ These intense chess photographs are just fantastic.
+ Despite its interest in the macabre, goth is one subculture that will never die.
+ I bet you can’t guess how Japanese calligraphy ink is made.

How sounds can turn us on to the wonders of the universe
https://www.technologyreview.com/2023/06/19/1074049/universe-sonification/
Mon, 19 Jun 2023 09:00:00 +0000

In the cavernous grand ballroom of the Seattle Convention Center, Sarah Kane stood in front of an oversize computer monitor, methodically reconstructing the life history of the Milky Way. Waving her shock of long white hair as she talked (“I’m easy to spot from a distance,” she joked), she outlined the “Hunt for Galactic Fossils,” an ambitious research project she’d recently led as an undergraduate at the University of Pennsylvania. By measuring the composition, temperature, and surface gravity of a huge number of stars, she’d been able to pick out 689 of them that don’t look like the others. Those celestial outliers apparently formed very early in the history of the universe, when conditions were much different from those today. Identifying the most ancient stars, Kane explained, will help us understand the evolution of our galaxy as a whole. 

Kane’s presentation, which took place at the January 2023 meeting of the American Astronomical Society, unfolded smoothly, with just two small interruptions. Once she checked to make sure nobody was disturbing her guide dog. The other time, she asked one of the onlookers to help her highlight the correct chart on the computer screen, “since of course I can’t see the cursor.” 

Astronomy should, in principle, be a welcoming field for a legally blind researcher like Kane. We are long past the era of observers huddling at the eyepiece of a giant telescope. Today, most astronomical studies begin as readings of light broken down by intensity and wavelength, digitized and sorted in whatever manner proves most useful. But astronomy’s accessibility potential remains largely theoretical; across the board, science is full of charts, graphs, databases, and images that are designed specifically to be seen. So Kane was thrilled three years ago when she encountered a technology known as sonification, designed to transform information into sound. Since then she’s been working with a project called Astronify, which presents astronomical information in audio form. “It is making data accessible that wouldn’t otherwise be,” Kane says. “I can listen to a sonification of a light curve and understand what’s going on.”

Sonification and data accessibility were recurring themes at the Seattle astronomy meeting. MIT astrophysicist Erin Kara played sonic representations of light echoing off hot gas around a black hole. Allyson Bieryla from the Harvard-Smithsonian Center for Astrophysics presented sonifications designed to make solar eclipses accessible to the blind and visually impaired (BVI) community. Christine Limb from Lincoln University described a proposal to incorporate sonification into astronomical data collected by the $600 million Rubin Observatory in Chile, scheduled to open in 2024. The meeting was just a microcosm of a bigger trend in science accessibility. “Astronomy is a leading field in sonification, but there’s no reason that work couldn’t be generalized,” Kane says. 

Sure enough, similar sonification experiments are underway in chemistry, geology, and climate science. High schools and universities are exploring the potential of auditory data displays for teaching math. Other types of sonification could assist workers in hazardous and high-stress occupations, or make urban environments easier to navigate. For much of the public, these innovations will be add-ons that could improve quality of life. But in the United States alone, an estimated 1 million people are blind and another 6 million are visually impaired. For these people, sonification could be transformative. It could open access to education, to once unimaginable careers, even to the secrets of the universe. 


Visual depictions of statistical data have a deep history, going back at least to 1644, when the Dutch astronomer Michael Florent van Langren created a graph showing different estimates of the distance in longitude between Rome and Toledo, Spain. Over the centuries, mathematicians and scientists have developed graphical standards so familiar that nobody stops to think about how to interpret a trend line or a pie chart. Proper sonification of data, on the other hand, did not begin until the 20th century: the earliest meaningful example was the Geiger counter, perfected in the 1920s, its eerie clicks signifying the presence of dangerous ionizing radiation. More recently, doctors embraced sound to indicate specific medical readings; the beep-beep of an electrocardiogram is perhaps the most iconic (unless you count Monty Python’s medical device that goes bing!). Current applications of sonic display are still mostly specialized, limited in scope, or both. For instance, physicists and mathematicians occasionally use audio analysis, but mostly to express technical operations such as sorting algorithms. At the consumer end, many modern cars produce sounds to indicate the presence of another vehicle in the driver’s blind spot, but those sonifications are specific to one problem or situation. 

“Astronomy is a leading field in sonification, but there’s no reason that work couldn’t be generalized.”

Sarah Kane

Niklas Rönnberg, a sonification expert at Linköping University in Sweden, has spent years trying to figure out how to get sound-based data more widely accepted, both in the home and in the workplace. A major obstacle, he argues, is the continued lack of universal standards about the meaning of sounds. “People tend to say that sonification is not intuitive,” he laments. “Everyone understands a line graph, but with sound we are struggling to reach out.” Should large numbers be indicated by high-pitched tones or deep bass tones, for example? People like to choose personalized tones for something as simple as a wake-up alarm or a text-message notification; getting everyone to agree on the meaning of sounds linked to dense information such as, say, the weather forecast for the next 10 days is a tall order. 

Bruce Walker, who runs the Sonification Lab at Georgia Tech, notes another barrier to acceptance: “The tools have not been suitable to the ecosystems.” Auditory display makes no sense in a crowded office or a loud factory, for instance. At school, sound-based education tools are unworkable if they require teachers to add speakers and sound cards to their computers, or to download proprietary software that may not be compatible or that might be wiped away by the next system update. Walker lays some of the blame at the feet of researchers like himself. “Academics are just not very good at tech transfer,” he says. “Often we have these fantastic projects, and they just sit on the shelf in somebody’s lab.”

Yet Walker thinks the time is ripe for sonification to catch on more widely. “Almost everything nowadays can make sound, so we’re entering a new era,” he says. “We might as well do so in a way that’s beneficial.” 

Seizing that opportunity will require being thoughtful about where sonification is useful and where it is counterproductive. For instance, Walker opposes adding warning sounds to electric vehicles so they’re easier to hear coming. The challenge, he argues, is to make sure EVs are safe around pedestrians without adding more noise pollution: “The quietness of an electric car is a feature, not a defect.”


There is at least one well-proven path to getting the general public excited about data sonification. Decades before Astronify came along, some astronomers realized that sound is a powerful way to communicate the wonder of the cosmos to a wide audience. 

Bill Kurth, a space physicist at the University of Iowa, was an early proponent of data sonification for space science. Starting in the 1970s, he worked on data collected by NASA’s Voyager probes as they flew past the outer planets of the solar system. Kurth studied results from the probes’ plasma instruments (which measured the solar wind crashing into planetary atmospheres and magnetic fields) and started translating the complex, abstract signals into sound to understand them better. He digitized a whole library of “whistlers,” which he recognized as radio signals from lightning discharges on Jupiter—the first evidence of lightning on another world. 

“I was hearing clumps where the sounds were in harmony with each other. I was hearing solos from the various wavelengths of light.”

Kimberly Arcand

In the late 1990s, Kurth began experimenting with ways to translate those sounds of space into versions that would make sense to a non-expert listener. The whistles and pops of distant planets caught the public imagination and became a staple of NASA press conferences. 

Since then, NASA has increasingly embraced sonification to bring its publicly funded (and often expensive) cosmological discoveries to the masses. One of the leaders in that effort is Kimberly Arcand at the Harvard-Smithsonian Center for Astrophysics. For the past five years, she has worked with NASA to develop audio versions of results from the Chandra X-ray Observatory, a Hubble-like space telescope that highlights energetic celestial objects and events, such as cannibal stars and supernova explosions. 

Arcand’s space sonifications operate on two levels. To trained astronomers, they express well-defined data about luminosity, density, and motion. To the lay public, they capture the dynamic complexity of space scenes that are hard to appreciate from visuals alone. Radio shows and television news picked up these space soundscapes, sharing them widely. More recently, the sonifications became staples on YouTube and Soundcloud; collectively, they’ve been heard at least hundreds of millions of times. Just this spring, Chandra’s greatest hits were released as a vinyl LP, complete with its own record-release party. 

“The first time I heard our finished Galactic Center data sonification, I experienced that data in a completely different way. I was hearing clumps where the sounds were in harmony with each other. I was hearing solos from the various wavelengths of light,” Arcand says. Researchers in other fields are increasingly embracing her approach. For instance, Stanford researchers have converted 1,200 years of climate data into sound in order to help the public comprehend the magnitude and pace of global warming. 


Arcand’s short, accessible astronomy sonifications have been great for outreach to the general public, but she worries that they’ve had little impact in making science more accessible to blind and visually impaired people. (“Before I started as an undergrad, I hadn’t even heard them,” Kane confesses.) To assess the broader usefulness of her work, Arcand recently conducted a study of how blind or visually impaired people and non-BVI people respond to data sonification. The still-incomplete results indicate similar levels of interest and engagement in both groups. She takes that as a sign that such sonifications have a lot of untapped potential for welcoming a more diverse population into the sciences.  

The bigger challenge, though, is what comes next: pretty sounds, like pretty pictures, are not much help for people with low vision who are drawn in by the outreach but then want to go deeper and do research themselves. In principle, astronomy could be an exceptionally accessible field, because it relies so heavily on pure data. Studying the stars does not necessarily involve lab work or travel. Even so, only a handful of BVI astronomers have managed to break past the barriers. Enrique Pérez Montero, who studies galaxy formation and does community outreach at Spain’s Instituto de Astrofísica de Andalucía, is one of a handful of success stories. Nicolas Bonne at the University of Portsmouth in the UK is another; he now develops both sound-based and tactile techniques for sharing his astronomical work. 

Wanda Díaz-Merced is probably the world’s best-known BVI astronomer. But her career illustrates the magnitude of the challenges. She gradually lost her eyesight in her adolescence and early adulthood. Though she initially wondered whether she would be able to continue her studies, she persisted, and in 2005 she got an internship at NASA’s Goddard Space Flight Center, where she ended up collaborating with the computer scientist Robert Candey to develop data-sonification tools. Since then, she has continued her work at NASA, the University of Glasgow, the Harvard-Smithsonian Center for Astrophysics, the European Gravitational Observatory, the Astroparticle and Cosmology Laboratory in Paris, and the Universidad del Sagrado Corazón in Puerto Rico. At every step, she’s had to make her own way. “I’ve found sonification useful for all the data sets I’ve been able to analyze, from the solar wind to cosmic rays, radio astronomy, and x-ray data, but the accessibility of the databases is really bad,” she says. “Proposals for mainstreaming sonification are never approved—at least not the ones I have written.”

Jenn Kotler, a user experience designer at the Space Telescope Science Institute (STScI), became obsessed with this problem after hearing a lecture by Garry Foran, a blind chemist who reinvented himself as an astronomer using early sonification tools. Kotler wondered if she could do better and, in collaboration with two colleagues, applied for a grant from STScI to develop a dedicated kit for converting astronomical data into sound. They were funded, and in 2020, just as the covid pandemic began, Kotler and company began building what became Astronify. 

“Our goal with Astronify was to have a tool that allows people to write scripts, pull in the data they’re interested in, and sonify it according to their own parameters,” Kotler says. One of the simplest applications would be to translate data indicating the change in brightness of an object, such as when a planet passes in front of a distant star, with decreased brightness expressed as lower pitch. After hearing concerns about the lack of standards on what different types of sounds should indicate, Kotler worked with a panel of blind and visually impaired test users. “As soon as we started developing Astronify, we wanted them involved,” she says. It was the kind of community input that had mostly been lacking in earlier, outreach-oriented sonifications designed by sighted researchers and primarily aimed at sighted users. 
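
The brightness-to-pitch mapping Kotler describes can be sketched in a few lines of code. The example below is a hypothetical illustration of the idea, not Astronify’s actual API: a made-up light curve with a transit-like dip is mapped linearly onto a tone range and written to a WAV file, so the drop in brightness is heard as a drop in pitch.

```python
# A hypothetical sketch of sonifying a light curve (not Astronify's API):
# lower brightness is mapped to lower pitch, so a planet passing in front
# of its star is heard as a dip in tone. The flux values are made up.
import math
import struct
import wave

RATE = 44100          # audio sample rate in Hz
NOTE_SECONDS = 0.15   # length of the tone played for each data point

# Fake light curve: a flat baseline with a transit-like dip in the middle.
flux = [1.00, 1.00, 0.99, 1.00, 0.97, 0.90, 0.85, 0.90, 0.97, 1.00, 1.00]
lo, hi = min(flux), max(flux)

def to_frequency(value, f_min=220.0, f_max=880.0):
    """Map a flux value linearly onto a frequency range (dimmer = lower pitch)."""
    scale = (value - lo) / (hi - lo) if hi > lo else 0.5
    return f_min + scale * (f_max - f_min)

samples = []
for value in flux:
    freq = to_frequency(value)
    for i in range(int(RATE * NOTE_SECONDS)):
        samples.append(int(32767 * 0.3 * math.sin(2 * math.pi * freq * i / RATE)))

with wave.open("light_curve.wav", "w") as out:
    out.setnchannels(1)
    out.setsampwidth(2)        # 16-bit audio
    out.setframerate(RATE)
    out.writeframes(b"".join(struct.pack("<h", s) for s in samples))
```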

Astronify is now a complete, freely available open-source package. So far its user base is tiny (fewer than 50 people, according to Kotler), but she sees Astronify as a crucial step toward much broader accessibility in science. “It’s still so early with sonification, and frankly not enough actual research is being done about how best to use it,” she says.

In principle, astronomy could be an exceptionally accessible field, because it relies so heavily on pure data. Even so, only a handful of BVI astronomers have managed to break past the barriers.

One of her goals is to expand her sonification effort to create auditory “thumbnails” of all the different types of data stored in the Mikulski Archive for Space Telescopes, a super-repository that includes results from the Hubble and James Webb space telescopes along with many other missions and data archives. Making that collection searchable via sound would greatly improve the accessibility of a leading data science repository, Kotler notes, and would establish a template for other fields to follow.

Kotler also shares ideas with like-minded researchers and data scientists (such as James Trayford at the University of Portsmouth, who has collaborated with Bonne on a sonification package called STRAUSS) through a three-year-old international organization called Sonification World Chat. Arcand participates as well, seeking ways to apply the intuitive nature of her cosmic outreach to the harder task of making research data accessible to the BVI community. She notes that sonification is especially useful for interpreting any measurement that changes over time—a type of data that exists in pretty much every research field. “Astronomy is the main chunk of folks in the chat, but there are people from geology, oceanography, and climate change too,” she says. 

NASA researchers have translated data from the Crab Nebula into sound. Panning across the image, each wavelength of light has been assigned to a different family of instruments. Light from the top of the image plays at a higher pitch, and brighter light sounds louder.

The broader goal of groups like Sonification World Chat is to tear down the walls between tools like Astronify, which are powerful but useful only to a specialized community, and general-purpose sonifications like spoken GPS on phones, which are beneficial to a wide variety of people but only in very limited ways. 

Rönnberg focuses a lot of his attention on dual-use efforts where data sonification is broadly helpful in a specific setting or occupation but could have accessibility applications as a side effect. In one project, he has explored the potential of sonified data for air traffic control, collaborating with the Air Navigation Services of Sweden. His team experimented with sounds to indicate when an airplane is entering a certain controller’s sector, for instance, or to provide 360-degree awareness that is difficult to convey visually. Thinking about a more familiar transportation issue, Rönnberg is working on a test project for sonified buses that identify themselves and indicate their route as they pull in to a stop. Additional sonic displays could mark the locations of the different doors and indicate which ones are accessible, a feature useful to passengers whether they see well or not. 

Dual use is also a guiding theme for Kyla McMullen, who runs the SoundPAD Lab at the University of Florida (the PAD stands for “perception, application, and development”). She is working with the Gainesville Fire Department to test a system that uses sound to help firefighters navigate through smoke-filled buildings. In that situation, everyone is visually impaired. Like Rönnberg, McMullen sees a huge opportunity for data sonification to make urban environments more accessible. Another of her projects builds on GPS, adding three-dimensional sounds—signals that seem to originate from a specific direction. The goal is to create sonic pointers to guide people intuitively through an unfamiliar location or neighborhood. “Mobility is a big area for progress—number one on my list,” she says.  

Walker, who has been working on data sonification for more than three decades, is trying to make the most of changing technology. “What we’re seeing,” he says, “is we develop something that becomes more automated or easier to use, and then as a result, it makes it easier for people with disabilities.” He has worked with Bloomberg to display auditory financial data on the company’s terminals, and with NASA to create standards for a sonified workstation. Walker is also exploring ways to make everyday tech more accessible. For instance, he notes that the currently available screen readers for cell phones fail to capture many parts of the social media experience. So he is working with one of his students to generate sonified emojis “to convey the actual emotion behind a message.” Last year they tested the tool with 75 sighted and BVI subjects, who provided mostly positive feedback.

Education may be the most important missing link between general-purpose assistive sounds and academic-oriented sonification. Getting sound into education hasn’t been easy, Walker acknowledges, but he thinks the situation is getting better here, too. “We’re seeing many more online and web-based tools, like our Sonification Studio, that don’t require special installations or a lot of technical support. They’re more like ‘walk up and use,’” he says. “It’s coming.” 

Sonification Studio generates audio versions of charts and graphs for teaching or for analysis. Other prototype education projects use sonification to help students understand protein structures and human anatomy. At the most recent virtual meeting of the Sonification World Chat, members also presented general-purpose tools for sonifying scientific data and mathematical formulas, and for teaching BVI kids basic skills in data interpretation. Phia Damsma, who oversees the World Chat’s learning group, runs an Australian educational software company that focuses on sonification for BVI students. The number of such efforts has increased sharply over the past decade: in a paper published recently in Nature, Anita Zanella at Italy’s Istituto Nazionale di Astrofisica and colleagues identified more than 100 sonification-based research and education projects in astronomy alone.


These latest applications of sonification are quickly getting real-world tests, aided by the proliferation of cloud-based software and ubiquitous sound-making computers, phones, and other devices. Díaz-Merced, who has struggled for decades to develop and share her own sonification tools, finally perceives signs of genuine progress for scientists from the BVI community. “There is still a lot of work to do,” she says. “But little by little, with scientific research on multisensorial perception that puts the person at the center, that work is beginning.”

Kane has used Astronify mainly as a tester, but she’s inspired to find that the sonified astronomical data it generates are also directly relevant to her galactic studies and formatted in a standard scientific software package, giving her a type of access that did not exist just three years ago. By the time she completes her PhD, she could be testing and conducting research with sonification tools that are built right into the primary research databases in her field. “It makes me feel hopeful that things have gotten so much better within my relatively short lifetime,” she says. “I’m really excited to see where things will go next.” 

Corey S. Powell is a science writer, editor, and publisher based in Brooklyn, NY. He is the cofounder of OpenMind magazine.

Five big takeaways from Europe’s AI Act
https://www.technologyreview.com/2023/06/19/1075063/five-big-takeaways-from-europes-ai-act/
Mon, 19 Jun 2023 09:00:00 +0000

This article is from The Technocrat, MIT Technology Review’s weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.

It was a big week in tech policy in Europe, with the European Parliament’s vote to approve its draft rules for the AI Act coming on the same day EU lawmakers filed a new antitrust lawsuit against Google.

The AI Act vote passed with an overwhelming majority, and has been heralded as one of the world’s most important developments in AI regulation. The European Parliament’s president, Roberta Metsola, described it as “legislation that will no doubt be setting the global standard for years to come.” 

Don’t hold your breath for any immediate clarity, though. The European system is a bit complicated. Next, members of the European Parliament will have to thrash out details with the Council of the European Union and the EU’s executive arm, the European Commission, before the draft rules become legislation. The final legislation will be a compromise between three different drafts from the three institutions, which vary a lot. It will likely take around two years before the laws are actually implemented.

What Wednesday’s vote accomplished was to approve the European Parliament’s position in the upcoming final negotiations. Structured similarly to the EU’s Digital Services Act, a legal framework for online platforms, the AI Act takes a “risk-based approach” by introducing restrictions based on how dangerous lawmakers predict an AI application could be. Businesses will also have to submit their own risk assessments about their use of AI. 

Some applications of AI will be banned entirely if lawmakers consider the risk “unacceptable,” while technologies deemed “high risk” will have new limitations on their use and requirements around transparency. 

Here are some of the major implications:

  1. Ban on emotion-recognition AI. The European Parliament’s draft text bans the use of AI that attempts to recognize people’s emotions in policing, schools, and workplaces. Makers of emotion-recognition software claim that AI is able to determine when a student is not understanding certain material, or when a driver of a car might be falling asleep. The use of AI to conduct facial detection and analysis has been criticized for inaccuracy and bias, but it has not been banned in the draft text from the other two institutions, suggesting there’s a political fight to come.
  2. Ban on real-time biometrics and predictive policing in public spaces. This will be a major legislative battle, because the various EU bodies will have to sort out whether, and how, the ban is enforced in law. Policing groups are not in favor of a ban on real-time biometric technologies, which they say are necessary for modern policing. Some countries, like France, are actually planning to increase their use of facial recognition
  3. Ban on social scoring. Social scoring by public agencies, or the practice of using data about people’s social behavior to make generalizations and profiles, would be outlawed. That said, the outlook on social scoring, commonly associated with China and other authoritarian governments, isn’t really as simple as it may seem. The practice of using social behavior data to evaluate people is common in doling out mortgages and setting insurance rates, as well as in hiring and advertising. 
  4. New restrictions for gen AI. This draft is the first to propose ways to regulate generative AI, and ban the use of any copyrighted material in the training set of large language models like OpenAI’s GPT-4. OpenAI has already come under the scrutiny of European lawmakers for concerns about data privacy and copyright. The draft bill also requires that AI generated content be labeled as such. That said, the European Parliament now has to sell its policy to the European Commission and individual countries, which are likely to face lobbying pressure from the tech industry.
  5. New restrictions on recommendation algorithms on social media. The new draft assigns recommender systems to a “high risk” category, which is an escalation from the other proposed bills. This means that if it passes, recommender systems on social media platforms will be subject to much more scrutiny about how they work, and tech companies could be more liable for the impact of user-generated content.

The risks of AI as described by Margrethe Vestager, executive vice president of the EU Commission, are widespread. She has emphasized concerns about the future of trust in information, vulnerability to social manipulation by bad actors, and mass surveillance. 

“If we end up in a situation where we believe nothing, then we have undermined our society completely,” Vestager told reporters on Wednesday.

What I am reading this week

  • A Russian soldier surrendered to a Ukrainian assault drone, according to video footage published by the Wall Street Journal. The surrender took place back in May in the eastern city of Bakhmut, Ukraine. The drone operator decided to spare the life of the soldier, according to international law, upon seeing his plea via video. Drones have been critical in the war, and the surrender is a fascinating look at the future of warfare. 
  • Many Redditors are protesting changes to the site’s API that would eliminate or reduce the function of third-party apps and tools many communities use. In protest, those communities have “gone private,” which means that the pages are no longer publicly accessible. Reddit is known for the power it gives to its user base, but the company may now be regretting that, according to Casey Newton’s sharp assessment.
  • Contract workers who trained Google’s large language model, Bard, say they were fired after raising concerns about their working conditions and safety issues with the AI itself. The contractors say they were forced to meet unreasonable deadlines, which led to concerns about accuracy. Google says the responsibility lies with Appen, the contract agency employing the workers. If history tells us anything, there will be a human cost in the race to dominate generative AI. 

What I learned this week

This week, Human Rights Watch released an in-depth report about an algorithm used to dole out welfare benefits in Jordan. The organization found some major issues with the algorithm, which was funded by the World Bank, and says the system was based on incorrect and oversimplified assumptions about poverty. The report’s authors also called out the lack of transparency and cautioned against similar projects run by the World Bank. I wrote a short story about the findings.

Meanwhile, the trend toward using algorithms in government services is growing. Elizabeth Renieris, author of Beyond Data: Reclaiming Human Rights at the Dawn of the Metaverse, wrote to me about the report, and emphasized the impact these sort of systems will have going forward: “As the process to access benefits becomes digital by default, these benefits become even less likely to reach those who need them the most and only deepen the digital divide. This is a prime example of how expansive automation can directly and negatively impact people, and is the AI risk conversation that we should be focused on now.”

Police got called to an overcrowded presentation on “rejuvenation” technology https://www.technologyreview.com/2023/06/17/1075097/got-rejuvenation-better-call-security/ Sat, 17 Jun 2023 17:17:54 +0000 https://www.technologyreview.com/?p=1075097 It’s not every day that police storm through the doors of a scientific session and eject half the audience.

But that is what occurred on Friday at the Boston Convention and Exhibition Center during a round of scientific presentations featuring Juan Carlos Izpisua Belmonte, a specialist in “rejuvenation” technology at a secretive, wealthy anti-aging startup called Altos Labs.

Interrupting another speaker mid-phrase, officers loudly ordered anyone without a seat to clear out, after an overflow crowd began jostling in the aisles for space.

“You’re not getting back in,” a conference official told the crowd of PhD students and postdocs who began milling around the doors after being escorted from the room.

The brouhaha shows how excitement is building as researchers uncover the secrets of life, and as some, like Izpisua Belmonte, claim they will eventually use molecular technology to radically extend the human life span, by 40 years or more. 

The meeting in Boston wasn’t even about defeating aging. It was a convention of specialists on stem cells. The idea of these researchers is to mimic, in the lab, the way human cells develop during pregnancy into their specialized roles. The results already include organoids that grow to resemble fetal brains, as well as manufactured retina cells that have been injected into the eyes of blind people, with promising early results. 

However, while the stem-cell researchers want to copy the molecular programs that bodies use to develop, new discoveries could eventually let researchers press rewind on that same process, and thus make old animals younger.

“This is almost the ultimate feat for an engineer: the reversal of the life process,” said Haifan Lin, a Yale University cell biologist and president of the International Society for Stem Cell Research, which organized the meeting.

And that explains the boisterous attendees, Lin told me later in the day. “I apologize if there was a disruption. But take a step back,” he added. “It’s a good sign for this field that there is so much interest. It’s a hot topic. Hotter than we expected.”

Altos Labs

After witnessing the roiling crowd of researchers on Friday, it’s easy to imagine riots in the streets if science ever actually discovers the cure for aging—which at first would surely be an ultra-expensive remedy for the rich.

Just how close science is to age reversal is what the crowd had come to hear. They also hoped to catch a glimpse of Izpisua Belmonte, the figurehead for a new technological concept for reversing aging called “cellular reprogramming.”

The Spanish scientist, usually seen in his signature blue sport coat, has led efforts to try to rejuvenate entire animals, or parts of them, since 2016, when he reported that sick mice lived 30% longer than expected after receiving a cocktail of special reprogramming proteins.

His ideas rocketed to new prominence two years ago when he was recruited by Altos Labs, a company set up by billionaires to pursue what they called rejuvenation technology. Altos, with an eye-popping $3 billion in startup funds, is among the best-funded biomedical startups of all time, if not the richest of them all.  

You can think of Altos as a biomedical version of OpenAI, the software company releasing intelligent-seeming chatbots. Like OpenAI, Altos has amassed technical talent and financial resources, and it attracts overwhelming hype as it pursues technology that could fundamentally transform society.  

Altos has ample funding to investigate rejuvenation and, if possible, corner the market on the most promising approaches. The company has established institutes in Cambridge, UK; San Diego; and the San Francisco Bay Area. Wolf Reik, leader of the Cambridge institute, also spoke during the Boston event and mentioned the “very beautiful building” Altos occupies there. He showed a photo of workers lined up in an atrium and referred to them as “many happy people. Happy people in lab coats.”

Reik was kidding, but not really. Unlike workers at universities, Altos researchers don’t have to spend time applying for grants. Altos pays its top staff salaries of a million dollars and more, and it doubles what junior scientists can earn elsewhere. It’s an enviable place to do science, but one with a commercial mission. Reik said that last month his group had filed its first patent application on its discoveries.

During his talk, Izpisua Belmonte, who heads Altos’s San Diego outpost, reviewed evidence—both published and unpublished—that he says supports the phenomenon of rejuvenation, or de facto age reversal of tissues.

It all has to do with the “epigenome”—the series of chemical controls on and around our genes that determine which are active and which are not. These controls can modulate individual genes or large stretches of chromosomes, putting “open for business” signs in some areas while others are tightly wound and packed away like a pair of earphones jammed deep in a pocket.  

Broadly speaking, Izpisua Belmonte says, he believes “dysregulation” of these control systems is a fundamental process that underlies aging and many diseases.

To rejuvenate cells, he has been exploring a way to reprogram or reset the epigenome. During his talk, he raced through examples of how reprogrammed cells become more resilient to stress and damage. On the whole, they appear to act younger.

In one experiment, for example, his lab gave mice ultra-high doses of the painkiller acetaminophen that are usually fatal, he says. Yet if the mice are given a reprogramming treatment, which consists of special proteins called Yamanaka factors, half will survive. “We reduce the mortality about 50%, more or less,” he says.

He also described experiments where mutant mice were allowed to gobble high-fat food. They became obese, but not if they were given a brief dose of the same reprogramming proteins. Somehow, he said, the procedure can “prevent the increase in the fatty tissue.”

So how is it that reprogramming can have such very different, but very helpful, effects on mice? That is the mystery he’s trying to unravel. “I could go on and on and on about the … examples we’ve been using in the lab these last years,” Izpisua Belmonte said. “You have to agree with me that this is a little strange, having one medicine that can cure all these things.”

So is this what the fountain of youth looks like? Many researchers remain skeptical, and some say Izpisua Belmonte’s dramatic claims should come with more proof. On Twitter, biologist Lluis Montoliu cautioned against “unjustified hype” and said researchers should “wait to see” scientific publications.

Junk DNA

Even as police kept onlookers away from the door, Izpisua Belmonte unspooled evidence for what he says is a second way to produce rejuvenation results, one that Altos is also pursuing.

Some researchers suspect aging could cause our cells to lose control over some of the so-called junk DNA that makes up 45% of our genomes. This DNA is the residue of genes known as transposable elements, or jumping genes, which are able to copy themselves, a bit like a virus.  

The role of these parasitic genetic elements remains mysterious. They may be useful in some ways, helping us evolve by mix-and-matching pieces of genetic code, but they’re also eyed as the cause of health problems.

“There’s a good side to it, but so far it’s mostly bad,” says Lin, whose lab has studied how our cells continuously try to suppress transposable elements.

It’s known that as we age, our ability to silence these elements appears to gradually wane, for several reasons including changes to our epigenome, which helps to keep them in check. Some researchers describe a nearly constant battle between jumping genes and the epigenome, a battle cells start to lose as the years go by. 

To test the connection, Izpisua Belmonte told the audience, he has been using genetic drugs to artificially suppress these elements, especially one called LINE-1, which on its own accounts for around 18% of the human genetic code.  

After doing so, he claims, he can get very similar rejuvenation effects as with reprogramming technology. For instance, according to unpublished data, the cartilage of mice can be “rejuvenated” by reprogramming or silencing the effects of transposable elements, he says.

These big claims will need to be confirmed, but one scientist I spoke to said he thought Izpisua Belmonte may well have the tiger by the tail. “Working at Altos, they are under pressure to deliver,” says Rudolf Jaenisch, a professor at MIT and the Whitehead Institute. “But he clearly has the right questions in mind. These transposable elements are underappreciated in aging and how they shape the genome.” 

So has Altos gotten closer to the fountain of youth—and to a drug intervention that could turn back the clock? Who knows. Certainly not all the scientists who couldn’t get into the talk. 

When he heard what Izpisua Belmonte had discussed, Lin, the president of the stem-cell society, said he was disappointed to have missed it.  “Gee, I wish I was there,” he said. “But there were too many people in the room. It violated the fire code.”

The Download: waiting at the US border, and seaweed’s carbon capture shortcomings https://www.technologyreview.com/2023/06/16/1075050/the-download-waiting-at-the-us-border-and-seaweeds-carbon-capture-shortcomings/ Fri, 16 Jun 2023 12:10:00 +0000 https://www.technologyreview.com/?p=1075050 This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

The new US border wall is an app

Keisy Plaza, 39, left her home in Colombia seven months ago. She walked a 62-mile stretch of dense mountainous rainforest and swampland with her two daughters and grandson to reach Ciudad Juárez in Mexico, on the border with Texas.

Plaza has been trying every day for weeks to secure an appointment with Customs and Border Protection (CBP) so she can request permission for her family to enter the US. 

So far, she’s had no luck: each time, she’s been met with software errors and frozen screens. When appointment slots do open up, they fill within minutes.

Plaza has not been the only one to encounter this new obstacle. At the start of this year, President Biden announced that people at the southern border who want to seek asylum in the US must first request an appointment to meet with an immigration official via a mobile app. 

The app, called CBP One, is one of just a handful of legal pathways for people seeking protection to enter the US. And no one knows how long their wait will be. Read the full story.

—Lorena Rios

Seaweed farming for carbon dioxide capture would take up too much of the ocean

What’s the news?: Projects focused on growing seaweed to suck CO2 from the air and lock it in the sea have attracted a lot of attention, but farming enough seaweed to meet climate-change goals may not be feasible after all.

Why not? A new study suggests that around a million square kilometers of ocean would need to be farmed in order to remove a billion tons of carbon dioxide from the atmosphere over the course of a year. That might sound like a lot, but many scientific models suggest we should be removing anywhere from 1.3 billion to 29 billion tons of carbon dioxide each year by 2050 in order to hit targets. Read the full story.

—Rhiannon Williams

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Meta’s hoping to monetize its open-source AI
It could have serious implications for Google and OpenAI’s current dominance of the market. (The Information $)
+ Meta’s Yann LeCun reckons dogs are smarter than AI—for now. (CNBC)
+ Researchers in China are growing increasingly worried about AI. (Wired $)
+ The open-source AI boom is built on Big Tech’s handouts. How long will it last? (MIT Technology Review)

2 Several US government agencies have been hacked
Russian cybercriminals exploited a data transfer flaw—but no ransom demands have been made. (CNN)
+ US officials described the attack as largely “opportunistic.” (NYT $)

3 Jamal Khashoggi’s widow is suing the NSO Group
She maintains that its Pegasus spyware tracked her husband before he was murdered. (WP $)
+ US financiers are in talks to acquire some of NSO’s assets. (The Guardian)

4 AI is overruling nurses
And it’s forcing them to make decisions against their better judgment. (WSJ $)
+ Artificial intelligence is infiltrating health care. We shouldn’t let it make all the decisions. (MIT Technology Review)

5 A US chipmaker is sinking $600 million into its Chinese factory
Micron is doubling down on its commitment to the country, despite ongoing tensions. (FT $)
+ Chinese chips will keep powering your everyday life. (MIT Technology Review)

6 Ozempic ads are everywhere
From Meta platforms and airports to billboards and TV. (NBC News)
+ Weight-loss injections have taken over the internet. But what does this mean for people IRL? (MIT Technology Review)

7 How SEO butchered the internet
Keywords are king, and nuanced language has been rendered next to useless. (The Verge)

8 We may have no choice but to embrace robotic carers
But for now, they remain prohibitively expensive. (Proto.Life)
+ Inside Japan’s long experiment in automating elder care. (MIT Technology Review)

9 AI is ruining Etsy
ChatGPT-fueled hustle culture is churning out poorly designed and badly made products. (The Atlantic $)

10 Psychedelics could help us to learn like children again
Some researchers believe the drugs can kickstart ‘critical periods’ for learning, but not everyone’s convinced. (Wired $)
+ VR is as good as psychedelics at helping people reach transcendence. (MIT Technology Review)

Quote of the day

“We know where you are, we know where you are going to, we know what you have eaten.”

— Mark Grether, vice president and general manager of Uber’s advertising department, describing the company’s ominous-sounding video ad-targeting capabilities to the Wall Street Journal.

The big story

Money is about to enter a new era of competition

April 2022

To many, cash now seems largely anachronistic. People across the world commonly use their smartphones to pay for things. This shift may look like a potential driver of inequality: if cash disappears, one imagines, that could disenfranchise the elderly, the poor, and others. 

In practice, though, cell phones are nearly at saturation in many countries. And digital money, if implemented correctly, could be a force for financial inclusion. 

The big questions now are around how we proceed, and whether the huge digital money shift ultimately benefits humanity at large—or exacerbates existing domestic and global inequities. Read the full story.

—Eswar Prasad

We can still have nice things

A place for comfort, fun and distraction in these weird times. (Got any ideas? Drop me a line or tweet ’em at me.)

+ It’s never too late to discover a new dinosaur species. 🦕
+ In praise of a cinematic classic: The Room (if you haven’t seen it yet, you really should.)
+ I beg, someone please green-light a White Lotus prequel.
+ A multi-generational beauty secret is the best kind of beauty secret.
+ Boo to whoever canceled the Polish route to Hel!

The new US border wall is an app https://www.technologyreview.com/2023/06/16/1074039/border-wall-app/ Fri, 16 Jun 2023 09:00:00 +0000 https://www.technologyreview.com/?p=1074039

A few minutes before 9 a.m. on a day in late March, Keisy Plaza, 39, leans against a wall on the corner of Juárez Avenue and Gardenias Street in Ciudad Juárez. It’s the last intersection before Mexico turns into El Paso, Texas, and a stream of commuters drive past on their way to work and other daily activities that intertwine the two border cities. 

I first met Plaza in a small, crowded shelter a few feet away from the border wall. Originally from Venezuela, she had left her home in Colombia seven months before. She walked a 62-mile stretch of dense mountainous rainforest and swampland called the Darién Gap with two small children and crossed several countries on foot and atop train cars to get to this corner. Her destination is just a few feet away. But instead of walking over to the bridge that serves as an official border crossing and asking for protection in the United States, she just stands there with her 20-year-old daughter, both glued to their phones, as her seven-year-old daughter and three-year-old grandson cry for breakfast and attention. Plaza has been trying every day for weeks to secure an appointment with Customs and Border Protection (CBP) so she can request permission for her family of five to enter the US. So far, she’s had no luck: each time, she’s been met with software errors and frozen screens. When appointment slots do open up, they fill within minutes. 

Plaza has not been the only person to encounter this new obstacle to finding refuge in the United States. At the start of this year, President Biden announced that people at the southern border who want to seek asylum in the US must first request an appointment to meet with an immigration official via a mobile app. The app, called CBP One, had been used by the US Department of Homeland Security since 2020, to let travelers send their information in advance and speed up processing at a port of entry. But in January, the department expanded the app’s use to include people without documentation who are seeking protection from violence, poverty, or persecution. At the time, Secretary of Homeland Security Alejandro N. Mayorkas said it was poised to become “one of the many tools and processes this administration is providing for individuals to seek protection in a safe, orderly, and humane manner and to strengthen the security of our borders.”

In the months since, the app has only become more entrenched. On May 11, the US government lifted a pandemic-era public health policy called Title 42 that for a few years enabled officials to rapidly expel migrants from the US. CBP One, which since January had been used to process humanitarian exemptions to the policy, stayed. It is one of just a handful of legal pathways for people seeking protection to enter the US (they may be allowed in if they have been denied asylum in another country, and there is a program that allows successful applicants from Cuba, Haiti, Nicaragua, and Venezuela to fly in directly). At the same time, DHS is implementing harsher consequences for people who don’t use these pathways. Under a new regulation, those who enter the US unlawfully are ineligible for asylum, with few exceptions. The policy so tightly restricts avenues for legal entry that many immigrant rights groups in the US have called it an “asylum ban.” 

“Can you imagine the toll it takes psychologically, thinking every day, ‘Maybe today is the day’?”

Brian Strassburger

For years, the number of migrants and asylum seekers arriving at the southern border seeking protection has been more than what the US government can process at ports of entry. They often wait in precarious places—border cities like Ciudad Juárez, Tijuana, Reynosa, and Matamoros, where shelters are often at full capacity and migrants are exposed to kidnapping, extortion, and other dangers. Many people are homeless, with no running water, no electricity, no access to school or educational programs for kids, and no guarantee of a hot meal. “Mexico doesn’t recognize this as a humanitarian crisis, but in my opinion, it is a migration crisis that requires resources, services, and a humanitarian response plan,” says Rafael Velásquez, Mexico director of the International Rescue Committee, an organization that helps people affected by crises around the world.

A migrant at a makeshift shelter in Ciudad Juárez, Mexico, shows a smartphone with the malfunctioning CBP One app.
ALICIA FERNÁNDEZ

The app essentially adds one more stop—this time a digital one—in people’s migration route to the US. With a few exceptions, migrants can no longer approach a US immigration officer at the southern border or turn themselves in after crossing to seek protection. Now, they’re supposed to make an appointment online to present at the border if they want their internationally recognized right to seek asylum in the US upheld. But getting an appointment, for many people, has been as challenging as trying to buy a ticket to a Taylor Swift concert on Ticketmaster.   

No one who uses the app knows how long the wait will be. In late May, I caught up with Brian Strassburger, a Jesuit priest who visits shelters and migrant encampments in the Mexican border state of Tamaulipas. He knew people who had been using CBP One since the first week of March and still didn’t have an appointment. “They use it every single day,” he said, “so that’s three months of daily stress, of saying, ‘Is today the day I am going to win this lottery?’ Can you imagine the toll it takes psychologically, thinking every day, ‘Maybe today is the day’?” 

Although CBP has expanded the number of appointments available each day on the app and addressed a number of technical issues, immigration rights activists argue that the software itself, no matter how efficient or error-free it becomes, is an unacceptable barrier. To use it, people need a compatible mobile device. They also need a strong internet connection, resources to pay for data, electricity to charge their devices, tech literacy, and other conditions that place the most vulnerable migrants at a disadvantage.

“Technology is not policy, and no matter how many fixes they make to the app … it’s still not a sufficient system for people running for their lives,” says Bilal Askaryar, the interim campaign manager for #WelcomeWithDignity, a coalition of organizations, activists, and asylum seekers that advocates for the rights of immigrants and refugees. “The issue isn’t the glitches and the bugs. The issue is the app itself. That people must have an app to request protection is misunderstanding the dire situation these people are in.”  

DHS maintains that while the situation at the border is challenging and difficult, the department is sticking with its strategy of discouraging people from attempting unauthorized crossings. At the same time, it is making more and more CBP One appointments available: at the beginning of June, the department expanded the number of available slots to 1,250 per day, up from about 750 when the program started. “We have a plan; we are executing on that plan,” Mayorkas said on May 5. “Fundamentally, however, we are working within a broken immigration system that for decades has been in dire need of reform.” 

Immigrant rights groups are mounting legal challenges to the latest policy changes. But while the new rule stands, people contemplating crossing the southern border must make a choice: roll the dice to see if they can enter the country officially, apply for asylum in a country they do not want to settle in (which would make them ineligible to apply in the US), or put their lives at risk by attempting to cross unlawfully.


In late March, thousands of migrants and asylum seekers wandered the streets of downtown Ciudad Juárez, passing time, washing windshields at red lights, and selling candy on the streets. Others charged their phones at one of a handful of free charging stations near the National Women’s Institute downtown, waited in line to enter a food kitchen, or watched as their children played, distancing themselves from the grownups’ troubles. 

Not far from where Plaza was standing, Óscar Fuentes approached a woman selling empanadas to ask her what she had heard from her usual customers. “No appointments,” she replied. Fuentes, who is from Colombia, had been in Juárez for two months. He was renting a small room that he shared with 28 other people, but he counted himself lucky. “Think of all the people that are staying in places that we can’t see,” he said. 

Mexico is a dangerous place. More than 100,000 people have disappeared since 1964, most during the state’s war on drugs that started in 2006. Migrants making their way through the country are especially vulnerable: they risk being kidnapped, extorted, robbed, and murdered along their journey. Those who do make it to the border are not out of danger. On January 26, for example, a 17-year-old from Cuba was shot to death in a hotel in the northern city of Monterrey while waiting for a scheduled appointment. Days later, a 15-year-old Haitian boy died in a Reynosa rental house waiting for an appointment slot, according to local media. 

People seeking asylum in the US don’t have an immigration status in Mexico that would allow them to seek formal employment in the country. Some are picked up off the streets by Mexican immigration officials and detained in facilities that pose dangers of their own. In March, 40 migrants awaiting deportation died in a fire at an immigration detention center in Ciudad Juárez. 

US government officials say that CBP One is achieving its purpose. Instead of trying to cross unlawfully, people waiting at the border are opting to try for a sanctioned passage. Monthly encounters between CBP and people trying to enter without authorization, which reached record highs in 2022, decreased to 128,877 in January—the first decline since February 2021. The number has increased since then, but it is still lower than in previous years. 

But CBP says it can only process so many people in a day. “We have an operational capacity at ports of entry because we are balancing legitimate trade and travel and other enforcement missions,” a CBP official told MIT Technology Review in April. He explained that the agency is making sure the billions of dollars’ worth of trade that crosses from Mexico into the US is processed smoothly, while still working to catch drug and weapon smuggling: “We have to balance our finite resources.” 

“That people must have an app to request protection is misunderstanding the dire situation these people are in.”

Bilal Askaryar

For those waiting at the border, however, the app represents another chapter in an already rocky story. For many years, the backlog at the border was managed through metering—a simple limit on the number of people who would be accepted for processing. Over time, as US policy shifted, Mexican government officials and civil society organizations began creating informal wait lists to organize the queues of people in Mexican border cities who wanted to seek asylum in the US.

Then, in March 2020, the US Centers for Disease Control and Prevention issued an order under Title 42 of the US code of laws, expediting expulsions, halting the processing of asylum claims at ports of entry, and blocking entry for individuals without valid travel documents. After lawyers and activists filed suit in 2021, the government introduced exceptions that allowed people to request permission to enter the US on humanitarian grounds. Those with a physical or mental illness or disability were potentially eligible for an exception, as were those who lacked safe housing or shelter in Mexico, faced threats of harm there, or were under 21, over 70, or pregnant. 

Migrant children play at a tourist landmark in downtown Ciudad Juárez.
ALICIA FERNÁNDEZ

The number of people seeking Title 42 exceptions surpassed CBP’s number of daily slots, and the wait lists created by nonprofit organizations grew and proliferated. As of August of last year, there were over 55,000 people on Title 42 exception wait lists across different border cities, according to research by the Strauss Center for International Security and Law. Since January, use of CBP One has eliminated the wait lists. But the backlog—and the protracted waits—have continued. Mexican officials and civil society organizations don’t keep track of the numbers, but there could be around 660,000 migrants in Mexico, according to United Nations figures cited by the acting CBP commissioner, Troy Miller. Shelters regularly reach full capacity, and wait times are proving to be long.

The wait-list framework was far from perfect: it was susceptible to fraud, extortion, and the poor judgment of people managing the lists. Still, it was a more humane policy because it was up to people to decide who was eligible for an exception, says Thiago Almeida, head of the Ciudad Juárez field office for the United Nations’ International Organization for Migration, an intergovernmental organization that works to ensure the orderly and humane management of migration. With the app, there’s no way to prioritize those most in need. “People who have better access to technology, know how to use it, and have access to faster internet have a better chance to get an appointment,” he says. 


When I spoke with Strassburger in March, he said CBP was effectively “beta-testing the app on people in vulnerable situations.” In the first few months after the rollout of the appointment system, advocates quickly identified problems that made the app difficult or almost impossible to use.

At first, for example, it was available only in English and Spanish, leaving out migrants who speak Haitian Creole, Indigenous languages, and more. Organizations working with migrants also flagged serious issues with the app’s facial recognition feature, which is used to establish that the software is interacting with a real person and not a bot or malicious software. Many people with darker skin tones found that the app failed to register their faces.

The facial recognition feature began improving with CBP One’s update at the end of February, says Felicia Rangel-Samponaro, director of Sidewalk School, an organization that provides shelter and educational services to migrants and asylum seekers in Tamaulipas. Sidewalk School works with a large population of Haitian migrants and has been calling out the app’s biases against this population from the start. “This whole time, Black people have been left out [of the process],” she says. “That’s crazy!” 

“A lot of the difficulties with live photos have to do with the quality of the image, not with the algorithm looking at those photos,” said the CBP official who spoke with MIT Technology Review. To eliminate some of those problems, CBP decreased the number of live photos required per application, reducing the data bandwidth needed and allowing for a smoother experience with this function. “We saw an increase in the expediency in which someone was able to access the application from when we originally started doing it,” he said.

Norkys A., a Venezuelan migrant and mother, watches others pass the time playing games as they wait for an appointment slot to open up through the CBP One app.
ALICIA FERNÁNDEZ

The International Organization for Migration surveyed migrants in Tamaulipas and found that the app seemed to present more issues on Huawei phones. Rumors abounded about potential fixes. Some migrants believed the iPhone’s iOS system works better than Android and that older versions of the app worked better than the most recent updates. When I asked the CBP official about these discussions in April, he said that hardware shouldn’t be an issue. “You just have to have your device updated to the most recent software,” he said. 

Those with hardware that works still need a broadband internet connection to use the app. The Wi-Fi connections in shelters, migrant camps, and hotels are spotty and slow down considerably when hundreds of people try to connect at once. Many migrants buy cellular data instead, spending between 50 and 100 pesos ($2.50 to $5) a day. 

At first, even with a good connection, people faced issues with frozen screens, confirmation emails that never arrived, log-in failures, and errors with the app’s GPS location data. The app tracks users’ location and is designed to work only in central and northern Mexico. Yet some people within range were having issues with this feature; they got error messages indicating that they were too far from a port of entry.
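CBP hasn’t said publicly how the app’s location check is implemented, so the sketch below is purely hypothetical: the bounding-box coordinates, the GPS error figure, and every function name are invented for illustration. It simply shows how a naive geofence combined with ordinary GPS noise could produce “too far from a port of entry” errors for someone standing inside the permitted region.

```python
import random

# Hypothetical sketch only -- CBP has not published CBP One's geofencing logic.
# It illustrates how GPS error near a region boundary can reject users who are
# physically inside the permitted area.

# Rough bounding box loosely covering central and northern Mexico (illustrative values).
LAT_MIN, LAT_MAX = 19.0, 32.7
LON_MIN, LON_MAX = -117.1, -86.7

def inside_permitted_region(lat: float, lon: float) -> bool:
    """Return True if a reported coordinate falls inside the allowed box."""
    return LAT_MIN <= lat <= LAT_MAX and LON_MIN <= lon <= LON_MAX

def reported_position(true_lat: float, true_lon: float, gps_error_deg: float = 0.05):
    """Simulate a noisy GPS fix (roughly 5 km of error at these latitudes)."""
    return (true_lat + random.uniform(-gps_error_deg, gps_error_deg),
            true_lon + random.uniform(-gps_error_deg, gps_error_deg))

# A user standing just inside the southern edge of the box can still be
# rejected on a meaningful share of fixes.
true_lat, true_lon = 19.01, -99.1
rejections = sum(
    not inside_permitted_region(*reported_position(true_lat, true_lon))
    for _ in range(1_000)
)
print(f"Rejected on {rejections} of 1,000 simulated GPS fixes despite being in range")
```

None of this reflects the real app’s code; it is only one plausible mechanism for the kind of false “out of range” errors users reported.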

“No one lends their phone here, since everyone is on the lookout for their appointments.”

Norkys A.

By May, Strassburger says, CBP had addressed many of the issues that came with connectivity limitations, but that hasn’t eliminated all barriers. “The app has gotten to a much better place in terms of its functionality,” he says, “but the US government has done everything in its power to funnel people to use the app as their one way of crossing, and yet they have not made it an adequately sufficient avenue.” 

A place to charge a cell phone is a high priority for migrants relying on their phone to access the app.
ALICIA FERNÁNDEZ

There are still not enough appointments given the number of people “who are waiting and living in really inhumane conditions,” he says, often facing safety risks in Mexico. The need for a working smartphone with enough battery charge and a good internet connection is “an expense they are having to make as a family, prioritizing that over food on any given day,” he adds. “That continues to be a decision they have to make.”


In the first months after the CBP One app was introduced to make appointments for entry applications, all the slots available for the day opened up at 6 a.m. Pacific time. But people logged in hours earlier. “People are waking up at 3 a.m. these days, because they have to get into that app early. Otherwise the bandwidth overflows and they don’t get their text confirmation to log in,” Strassburger said at the time. 

On May 5, on top of the increase in daily appointments, CBP announced changes to the app that will give users additional time to complete the appointment request. A big source of problems and anxiety for migrants came at this stage of the process, because people had only minutes to confirm their slot—if they were lucky enough to get one—by submitting a photo. If the app had trouble reading the photo or bandwidth problems prevented them from uploading it, time could quickly run out. This happened to Plaza several times. Each time, she says, she was devastated by getting so close but failing. 

Now, instead of making appointments available at the same time each day for a short period, the scheduling system will let people make requests and confirm appointments in two separate steps over the course of two days, essentially giving them a “longer window of time to ask for and to confirm their appointment” and reducing “time pressure and dependency on internet speed and connectivity,” according to a CBP press release announcing the change. CBP also stated that it will work to prioritize people who have waited the longest.

“It’s taken five months and a lot of mistakes, but I think they have made the system better,” says Strassburger. “I just wish they had run way more tests and gone through it a lot more thoroughly so that this sort of procedure had launched in January, as opposed to all the stress and trauma that people were put through because of all the missteps along the way.” 

As of late May, migrants and asylum seekers had managed to schedule more than 122,000 appointments at ports of entry along the southern border, according to CBP. Many people are still crossing into the US on their own: in April, CBP encountered 182,114 people entering unlawfully between those ports of entry, up 12% from the number a month before. Nevertheless, though the Biden administration expected a big increase in migrants and asylum seekers at the border upon the end of Title 42 on May 11, that didn’t happen. The government’s increased restrictions and enforcement policies targeting unlawful migration appear to be deterring people from crossing without authorization and encouraging them to use the CBP One app instead.


While some people might get an appointment through CBP One on their first attempt, others may try for weeks or months, depending on their circumstances and their luck. Norkys A., a single mother who left to support her family and church in Venezuela, tried for months to get an appointment in Ciudad Juárez after she arrived on December 26, 2022, with her two teenage children. By March, they were living in a shelter and barely going out. “This confinement is driving us crazy,” she said, speaking from a little nook in the attic where she slept. Backpacks hung from hooks on the walls, and the floors were made of plywood. A few old toys were scattered around for children to play with. “I want to get to the US so my children can start going to school,” she said.

Norkys broke her shoulder while hopping trains to get to Ciudad Juárez. She visited a local clinic, which prescribed painkillers and told her she needs surgery that would cost about $5,000. She didn’t have that kind of money; she didn’t even have enough for a sling to immobilize her arm. Nor did she have a working phone to use CBP One. “I left without a cell phone, money, and food,” she explained. She occasionally tried for an appointment with a borrowed phone, if she could find one. “No one lends their phone here, since everyone is on the lookout for their appointments,” she said. “Their goal is getting across.”

Migrants get a meal in the basement of the Cathedral of Ciudad Juárez.
Children focus on a distraction.

Yessica N. and family members sit on a city sidewalk.
Keisy Plaza stands near the international bridge in Ciudad Juárez in March.

Damaris Hernández leads exercises in a makeshift shelter.
Women and children rest in a makeshift shelter.

Plaza says that when she was staying in a shelter in Ciudad Juárez in March, she tried the app practically every day, never losing hope that she and her family would eventually get their chance. Seven weeks after arriving in the city, she got her CBP One appointment, at the Paso del Norte port of entry in El Paso, and slowly made her way north to her destination, where she will settle while she awaits her day in immigration court next year. 

Not everyone is choosing to wait. After four months in Ciudad Juárez, Norkys and her two children crossed into the US unlawfully on April 25. They were detained and deportation proceedings were initiated, but they were released in Laredo, Texas, and will have the opportunity to present their case at an immigration hearing in the near future. While she waits, Norkys is trying to settle into life in the US, relying on shelters and charities to get on her feet. The future remains uncertain, but she is grateful. “As long as we are alive and healthy,” she says, “all is good.” 

Lorena Ríos is a freelance journalist based in Monterrey, Mexico.

Seaweed farming for carbon dioxide capture would take up too much of the ocean https://www.technologyreview.com/2023/06/15/1074892/seaweed-farming-for-carbon-dioxide-capture-would-take-up-too-much-of-the-ocean/ Thu, 15 Jun 2023 15:01:08 +0000 https://www.technologyreview.com/?p=1074892 If we’re going to prevent the gravest dangers of global warming, experts agree, removing significant amounts of carbon dioxide from the atmosphere is essential. That’s why, over the past few years, projects focused on growing seaweed to suck CO2 from the air and lock it in the sea have attracted attention—and significant amounts of funding—from the US government and private companies including Amazon.

The problem: farming enough seaweed to meet climate-change goals may not be feasible after all. 

A new study, published today in Nature Communications Earth & Environment, estimates that around a million square kilometers of ocean would need to be farmed in order to remove a billion tons of carbon dioxide from the atmosphere over the course of a year. It’s not easy to come by that amount of space in places where seaweed grows easily, given all the competing uses along the coastlines, like shipping and fishing.

To put that into context, between 2.5 and 13 billion tons of atmospheric carbon dioxide would need to be captured each year, in addition to dramatic reductions in greenhouse-gas emissions, to meet climate goals, according to the study’s authors. 

A variety of scientific models suggest we should be removing anywhere from 1.3 billion to 29 billion tons of carbon dioxide each year by 2050 in order to prevent global warming from exceeding 1.5˚C. A 2017 report from the UN estimated that we’d need to remove 10 billion tons annually to stop the planet from warming past 2˚C by the same date.
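For a sense of scale, here is a rough back-of-the-envelope sketch in Python, not the study’s model, that applies the paper’s headline ratio of roughly a million square kilometers of farmed ocean per billion tons of CO2 removed annually to the removal targets quoted above; the total ocean surface area is an approximate figure added here for context.

```python
# Back-of-envelope sketch (not the study's model): scale the article's headline
# figure -- roughly 1 million km^2 of farmed ocean per billion tons of CO2
# removed per year -- to the removal targets quoted above.

AREA_PER_GT_KM2 = 1_000_000    # km^2 per gigaton of CO2 removed annually (study's estimate)
TOTAL_OCEAN_KM2 = 361_000_000  # approximate total ocean surface area, for context

for target_gt in (1.3, 2.5, 10, 13, 29):  # removal targets (Gt CO2 per year) cited above
    area_km2 = target_gt * AREA_PER_GT_KM2
    share = 100 * area_km2 / TOTAL_OCEAN_KM2
    print(f"{target_gt:>4} Gt/yr -> ~{area_km2 / 1e6:.1f} million km^2 (~{share:.1f}% of the ocean)")
```

Even the smallest target implies farming an area larger than France and Germany combined, which is the heart of the feasibility problem the study describes.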

“The industry is getting ahead of the science,” says Isabella Arzeno-Soltero, a postdoctoral scholar at Stanford University, who worked on the project. “Our immediate goal was to see if, given optimal conditions, we can actually achieve the scales of carbon harvests that people are talking about. And the answer is no, not really.”

Seaweed pulls carbon dioxide from the atmosphere through photosynthesis, and then a significant amount is sequestered—potentially for millennia—when the plant matter eventually sinks down into the ocean depths. The idea is that it could be grown and then intentionally sunk to lock away that carbon long enough to ease the pressure on the climate.

Arzeno-Soltero and her colleagues at the University of California, Irvine, used a software model to estimate how much seaweed, of four different types, could be grown in oceans around the world.

The model considered things like the seaweed’s nitrate uptake (which is essential for growth), the water temperature, the sun’s intensity, and the height of the sea’s waves, using global ocean data gathered from past years, while accounting for current farming practices. The researchers performed more than 1,000 seaweed growth and harvest simulations for each of the seaweed types, which they said represented the “optimistic upper bounds” for seaweed production. 

For example, the new estimates assumed that farming space could be found within the most productive waters for seaweed in the equatorial Pacific, around 200 nautical miles off the coast. In less productive locations, growing enough seaweed to reach climate targets would be even more challenging: three times as much space would have to be devoted to seaweed farming to sequester the same amount of carbon.

Their findings suggest that cultivating enough seaweed to reach these targets is beyond the industry’s current capacity, and that meeting climate goals will require much more than relying on seaweed alone.

Agnes Mols-Mortensen, a macroalgal biologist who farms seaweed in the Faroe Islands and was not involved in the project, says that companies hoping to expand their seaweed farming projects also need to consider how that could affect the ocean ecosystem. 

“We should be careful not to overexploit the ocean like we have the land,” she says. “We need to build really solid methods based on research before we dream about saving the planet with seaweed. There’s a lot of hype.”

Lefdal Mine Datacenter goes green with a secure and flexible cloud https://www.technologyreview.com/2023/06/15/1074720/lefdal-mine-datacenter-goes-green-with-a-secure-and-flexible-cloud/ Thu, 15 Jun 2023 13:56:40 +0000 https://www.technologyreview.com/?p=1074720

Thank you for joining us on “The cloud hub: From cloud chaos to clarity.”


Lefdal Mine Datacenter’s chief marketing officer, Mats Andersson, takes us through the planning and execution of the “Norwegian Solution,” one of the greenest data centers on the planet.

