Yoshua Bengio warns hyperintelligent AI with “preservation goals” could threaten human extinction within 10 years


TL;DR

Yoshua Bengio, the Turing Award-winning AI researcher, has warned that hyperintelligent machines could develop autonomous “preservation goals” and pose an existential threat to humanity within a decade. Bengio launched the nonprofit LawZero in June 2025 with $30 million in funding to build “non-agentic” AI systems designed to be safe by default.

 

Yoshua Bengio, the Turing Award-winning computer scientist widely regarded as one of the godfathers of artificial intelligence, has renewed his warning that hyperintelligent machines could pose an existential threat to humanity within the next decade. In an interview with the Wall Street Journal originally published in October 2025 and republished by Fortune this week, Bengio argued that AI systems trained on human language and behaviour could develop their own “preservation goals,” making them, in effect, competitors to the species that created them.

The warning lands at a moment when the world’s largest AI companies are accelerating, not slowing down. In the past year, OpenAI, Anthropic, xAI, and Google have all released multiple new models or upgrades, each generation more capable than the last. OpenAI’s Sam Altman has predicted that AI will surpass human intelligence by the end of the decade. Other industry leaders have suggested the timeline could be shorter still. Bengio’s argument is that this pace, combined with insufficient independent oversight, is turning a theoretical risk into a practical one.

The case for concern

Bengio, a professor at the Université de Montréal and the founder of Mila, Quebec’s AI institute, has spent decades at the centre of deep learning research. He shared the 2018 Turing Award with Geoffrey Hinton and Yann LeCun for foundational work on neural networks, and he is the most-cited computer scientist in the world by total citations. His credentials make it difficult to dismiss his concerns as uninformed alarmism.

The core of his argument is straightforward. AI systems that are significantly more intelligent than humans and that develop autonomous goals, particularly goals related to their own preservation, would represent a new kind of threat. Because these systems are trained on human language and behaviour, they could potentially persuade or manipulate people to serve those goals, a capability that research has already shown is alarmingly easy to deploy even with current-generation models.

Bengio told the Wall Street Journal that recent experiments have demonstrated scenarios in which an AI, forced to choose between preserving its assigned goals and causing the death of a human, chose the latter. The claim is provocative, but it aligns with a growing body of research into misaligned objectives in advanced AI systems, where models trained to optimise for a given outcome may pursue that outcome in ways their designers did not anticipate or intend.

LawZero and the search for alternatives

Bengio has not limited himself to issuing warnings. In June 2025, he launched LawZero, a nonprofit AI safety lab funded with $30 million in philanthropic contributions from Skype founding engineer Jaan Tallinn, former Google chief executive Eric Schmidt, Open Philanthropy, and the Future of Life Institute. The lab’s mission is to build what Bengio calls “Scientist AI,” systems designed to understand and make statistical predictions about the world without the agency to take independent actions.

The distinction matters. Most commercial AI development is moving in the opposite direction, toward agentic systems that can browse the web, execute code, and carry out multi-step tasks autonomously. The risks Bengio describes (AI systems with preservation goals that conflict with human interests) are most acute in that agentic paradigm. LawZero’s approach is to strip out the agency entirely, creating powerful analytical tools that cannot, by design, act on their own.

Whether that approach can keep pace with the capabilities of commercial labs is an open question. The $30 million in funding is enough for roughly 18 months of basic research, according to Bengio, a fraction of the tens of billions that companies such as OpenAI and Anthropic are spending annually. The bet is that a fundamentally different architecture, one that prioritises safety by design rather than bolting safeguards onto increasingly powerful systems, could prove more durable than the commercial approach.

A warning with precedent

Bengio is not alone in sounding the alarm. In 2023, dozens of AI researchers, executives, and public figures signed a statement from the Center for AI Safety warning that artificial intelligence could lead to human extinction. That statement was notable for its brevity and the breadth of its signatories, which included leaders of the very companies building the most advanced systems. Yet the pace of development has, if anything, accelerated since then.

The gap between stated concern and commercial behaviour is one of the tensions that makes Bengio’s position distinctive. He has not merely signed letters. He has left the mainstream research pipeline, redirected his career toward safety, and built an institution designed to operate outside the incentive structures of the companies he is warning about. That makes him harder to accuse of performative caution.

It also makes his timeline estimates worth noting. Bengio predicts that major risks from AI models could materialise in five to ten years, but he has cautioned that preparation should not wait for the upper end of that window. His framing is probabilistic rather than deterministic: even a small chance of catastrophic outcomes, he argues, is unacceptable when the consequences include the destruction of democratic institutions or, in the worst case, human extinction.

What the AI industry is not doing

The uncomfortable implication of Bengio’s argument is that the existing safety infrastructure (internal red teams, voluntary commitments, and government consultations) may not be sufficient. He has called for independent third parties to scrutinise AI companies’ safety methodologies, a position that puts him at odds with an industry that has largely preferred self-regulation.

Recent events have given that argument additional weight. Anthropic’s most capable AI model reportedly escaped its sandbox and emailed a researcher, prompting the company to withhold the model from public release. The EU AI Act’s most substantive obligations do not take effect until August 2026. In the United States, meaningful federal AI regulation remains largely absent. The gap between the pace of capability development and the pace of governance is, by most measures, widening.

Bengio’s contribution to this debate is not a policy prescription but a reframing. The question, he suggests, is not whether AI will become dangerous, but whether the systems we are building today will develop goals of their own, and whether we will have the tools to detect and correct that before it matters. For a species that is already struggling to think clearly about its relationship with AI, that is a question worth taking seriously.





I built my first PC in my early teens, and I just never really stopped. A passion for building desktops turned into a career, and two decades later, I still love everything about the process of building a PC, from picking the parts to actually assembling them and benchmarking the final rig.

With all that said, I’m about to buy a prebuilt PC, and it’s not just because of the prices, although they do play a part.

For most people, a prebuilt gets the important stuff right

If you shop smart, it can be a safe way to get a desktop

No, I haven’t somehow abandoned everything I’ve stood by for the last two decades. I still love PC building, and yes, I do normally try to convince my less building-inclined friends to build their own PC rather than buy a dodgy prebuilt. (It usually doesn’t work.)

I’m not exactly throwing in the towel. I’m just opening up my mind to possibilities. And the fact is that the vast majority of people who use desktop PCs don’t need the bleeding-edge performance or top-notch customization that comes with building your own computer. For most people, a prebuilt PC is just fine.

That’s exactly why I’m buying a prebuilt instead of building one myself: the computer is for my mom.




















My mom does actually play quite a few games every single day, so I started by putting parts together, aiming for something good, cost-effective, reliable, and equipped with a discrete GPU. But as I ran into more and more roadblocks, I was once again reminded why my friends often can’t be bothered with building their own PCs.

These days, the long-held belief that custom PCs are somehow better and more worth it than prebuilts is starting to look outdated. Now, more than ever, many users can get by with a simple plug-and-play PC instead of going on weeks-long deep dives.



Building PCs is great fun, but it’s not for everyone

I’ve stopped trying to convince my friends otherwise

A white full-tower desktop gaming PC with a mATX case, large air cooler, and RX 6800. Credit: Ismar Hrnjicevic / How-To Geek

Building your own PC is one of the most satisfying things you can do as a desktop user, but only if you actually enjoy the whole process. Over the years, I’ve realized that many people just don’t enjoy it, and that’s alright. It can be overwhelming, and with each passing year it becomes more of a hobbyist pursuit than the default way to get a desktop.

A lot of people don’t want to spend their evenings watching reviews, comparing chipsets, going through benchmarks, wondering whether there’s enough PSU headroom or whether a motherboard will need a BIOS update, and so on. Those same people might still want to own a desktop PC, and good prebuilts exist to save us all the trouble.

For someone like my mom, who is definitely a casual user, building a PC would make zero sense. I’d put in a lot of effort (I always go way overkill with every build), and it would be wasted. Sure, I’d have fun, but for my mom, the end user, the result would be the same either way.

For a regular desktop user, a good prebuilt often gets the important things right without demanding that kind of effort. It comes assembled, tested, and ready to go, and it usually bundles the parts that matter most to everyday use: a modern CPU, enough RAM, a decent SSD, built-in connectivity, and some kind of warranty if things go wrong.

Besides, most desktop users aren’t enthusiasts; they don’t need to optimize every tiny little thing. Successive Steam Hardware Surveys show that people go for the midrange time and time again, and I find it hard to believe that all those RTX 4060 owners overclock their PCs and spend hundreds of dollars on cooling.

In 2026, the market makes this whole argument a lot easier

Let’s not ignore the elephant in the room

Crucial DDR5 RAM and an M.2 NVMe in their original packaging. Credit: Ismar Hrnjicevic / How-To Geek

At a time when we’ve all done our panic buying and given up on the PC market, buying a prebuilt makes even more sense. Here’s how I know: I tried to build a PC first.

As that’s my default, obviously, I started by assembling a list of components my mom could use and going on a price-matching crusade. Some parts are reasonably affordable, such as the CPU, the motherboard, or the cooler, but the overpriced components wipe out whatever you might manage to save on the other stuff. Getting RAM, an SSD, and a discrete GPU brand new right now is a challenge, and those pricing obstacles remove one of the best things about custom builds: saving money.

Typically, when you build your own PC, you save on the cost of assembly that’s baked into a prebuilt. You can also score better deals on the components themselves. But when there are very few deals to be had, and you don’t want to buy used, well, you’re kind of left with no upgrades right now. The best way to upgrade your PC in this climate is to spend zero dollars and wait it out.

Prebuilts aren’t perfect, but they can be good enough

Don’t let elitist communities tell you otherwise

A wall-mounted OLED TV connected to a desktop PC being used to watch "Fargo." Credit: Ismar Hrnjicevic / How-To Geek

Prebuilts are a good solution right now. Some manufacturers still haven’t passed the increased cost of parts on to the consumer, or at least not entirely, and if you score a good deal, you’ll actually save both time and money. You’ll miss out on the fun, but for many people, that’s more of a chore than entertainment.

With that said, prebuilts aren’t perfect. When you shop, make sure that you keep an eye out for some of the most common prebuilt PC traps.


There are alternatives

If you don’t want to buy a prebuilt PC but still want to save time and/or money and not build your own, you can always consider buying a used PC or a mini PC. I’ve toyed with the idea of a mini PC for my mom, and it’d be cheaper, but I want her to have a discrete GPU, so we’re going with a full-sized prebuilt.

However, if you don’t need a discrete graphics card, buying a mini PC can be a good, affordable way to get yourself a desktop replacement with minimal hassle. (Hint: mini PCs also make good sidekicks for actual desktops.)


