AI is making us faster, more productive, and worse at thinking



AI is everywhere, the pressure to adopt it is relentless, and the evidence that it’s making us smarter is getting thinner by the quarter.

On New Year’s Day 2026, a programmer named Steve Yegge launched an open-source platform called Gas Town. It lets users orchestrate swarms of AI coding agents simultaneously, assembling software at speeds no single human could match.

One of the first people to try it described the experience in terms that had nothing to do with productivity. “There’s really too much going on for you to comprehend reasonably,” he wrote. “I had a palpable sense of stress watching it.”

That sentence should be pinned to the wall of every executive suite, every venture capital boardroom, and every CES keynote stage where the word “intelligence” is thrown around like confetti. Because something strange is happening in the relationship between humans and the technology we keep calling intelligent.

The machines are getting faster. The humans interacting with them are getting more exhausted, more anxious, and, by several measures, less capable of the one thing intelligence was supposed to enhance: thinking clearly.

The pressure to adopt AI is now so pervasive that it has developed its own vocabulary of coercion.

You need to have AI.

You need to use AI.

You need to buy AI.

Your competitors are already using it.

Your children will fall behind without it.

The language does not come from engineers quietly solving problems. It comes from earnings calls, product launches, and LinkedIn posts written with the manic energy of people who have confused selling a product with describing reality.

In January 2026, at the World Economic Forum in Davos, Microsoft CEO Satya Nadella offered a phrase so revealing it deserves to be studied as a cultural artefact. He warned that AI risked losing its “social permission” to consume vast quantities of energy unless it started delivering tangible benefits to people’s lives.

The framing was striking: not a question of whether the technology works, but of whether the public can be kept on board while the industry figures out if it does. Nadella called AI a “cognitive amplifier,” offering “access to infinite minds.”

A month later, a Circana survey of US consumers found that 35 per cent of them did not want AI on their devices at all. The top reason was not confusion or technophobia. It was simpler than that. They said they did not need it.

The gap between the rhetoric and the evidence has become difficult to ignore. In March 2026, Goldman Sachs published an analysis of fourth-quarter earnings data and found, in the words of senior economist Ronnie Walker, “no meaningful relationship between productivity and AI adoption at the economy-wide level.”

The bank noted that a record 70 per cent of S&P 500 management teams had discussed AI on their earnings calls. Only 10 per cent had quantified its impact on specific use cases. One per cent had quantified its impact on earnings. Meanwhile, the five largest US technology companies were collectively expected to spend $667 billion on AI infrastructure in 2026, a 62 per cent increase over the previous year.

The National Bureau of Economic Research described the situation as a “productivity paradox”: perceived gains larger than measured ones.

There are real productivity improvements, but they are strikingly narrow. Goldman found a median gain of around 30 per cent in two specific areas: customer support and software development. Outside those domains, the evidence for broad improvement was, in the bank’s assessment, essentially absent. The promised revolution, for now, is happening in two rooms of a very large house.

What is happening in those rooms, though, is worth examining closely, because even where AI delivers, something else appears to be breaking.

In February 2026, researchers at UC Berkeley’s Haas School of Business published findings from an eight-month study embedded at a 200-person US technology firm. They found that AI did not reduce workloads. It intensified them. Tasks got faster, so expectations rose. Expectations rose, so the scope expanded. Scope expanded, so workers took on responsibilities that had previously belonged to other roles. Product managers began writing code. Researchers took on engineering work. Role boundaries dissolved because the tools made it feel possible, and then the exhaustion arrived.

I got tired just writing it.

The researchers identified a cycle they called “workload creep”: a gradual accumulation of tasks that goes unnoticed until cognitive fatigue degrades the quality of every decision.

Harvard Business Review gave the phenomenon a blunter name: “AI brain fry.” A Boston Consulting Group study of nearly 1,500 US workers found that 14 per cent of those using AI tools requiring significant oversight reported experiencing it, a distinct form of mental fog characterised by difficulty focusing, slower decision-making, and headaches after extended AI interaction.

The workers most affected were not the sceptics or the laggards. They were the enthusiastic adopters, the ones who had done exactly what every keynote told them to do.

The distribution of this exhaustion is not random. Sixty-two per cent of associates and 61 per cent of entry-level workers reported AI-related burnout, according to the Harvard Business Review study.

Among C-suite executives, the figure dropped to 38 per cent. The pattern is consistent with what anyone who has spent time in an organisation could have predicted: the people who make the strategic decisions about AI adoption are not the people who manage its outputs, clean up its errors, and switch between its tools eight hours a day.

All of this raises a question that the industry would prefer to skip over: what, exactly, do we mean when we use the word “intelligence”?

The term “artificial intelligence” was coined in 1956 at a workshop at Dartmouth College, and it has been doing a particular kind of ideological work ever since. By naming the field after a human quality, its founders made a move that was as much marketing as science. It invited us to see computation as cognition, pattern-matching as understanding, speed as wisdom.

Every time a product is described as “intelligent,” it borrows from the emotional weight of a word that, for most of human history, meant something like the capacity for judgement, reflection, and the ability to sit with uncertainty long enough to think clearly about it.

That is not what these systems do. What they do, often brilliantly, is statistical prediction at an extraordinary scale. They recognise patterns in data, generate plausible continuations of sequences, and optimise for objectives defined by their designers.

This is genuinely useful. It is not intelligence in the sense that any philosopher, psychologist, or, for that matter, any thoughtful person on the street would recognise. The slippage between the two meanings is not accidental. It is the engine of the entire commercial project.

Here is the deepest irony: in the rush to surround ourselves with artificial intelligence, we appear to be eroding the conditions under which actual human intelligence operates. Intelligence, the real kind, requires things that the AI economy is systematically destroying: uninterrupted attention, tolerance for ambiguity, the willingness to sit with a problem before reaching for a solution, and the cognitive space to doubt, reconsider, and change one’s mind.

Researchers at the London School of Economics argued in a February 2026 paper that the manufactured urgency around AI narrows the space for democratic deliberation itself, collapsing the future into a single inevitability and leaving no room for the slow, uncertain, distinctly human process of deciding together what we actually want.

There is something almost comic about the situation.

We have built machines that can process language, generate images, and write code at superhuman speed, and the people using them are reporting mental fog, difficulty concentrating, and a growing inability to think.

A senior engineering manager cited in the BCG study described juggling multiple AI tools to weigh technical decisions, generate drafts, and summarise information. The constant switching and verification created what he called “mental clutter.” His effort had shifted from solving the core problem to managing the tools.

Not everyone is compliant. A third of consumers have looked at the AI being pushed into their phones and laptops and said, plainly, no. Workers whose organisations value work-life balance report 28 per cent lower AI fatigue, according to BCG’s research, which suggests the problem is less about the technology itself than about the culture of compulsive adoption wrapped around it.

The question is not whether AI is useful. In certain applications, it clearly is. The question is whether the frenzy surrounding it, the relentless pressure to adopt, integrate, and accelerate, is making us smarter or just making us more compliant.

Six hundred and sixty-seven billion dollars in annual infrastructure investment. Record mentions on earnings calls. Entire conferences dedicated to the word “intelligence.”

And in the Circana survey, the most common reason a human being gave for not wanting any of it was five words long: I do not need it. That sentence, quiet and unimpressed, may be the most intelligent thing anyone has said about AI in years. The question now is whether we still have the attention span to hear it.


Recent Reviews


Smartphones have amazing cameras, but I’m not happy with any of them out of the box. I have to tweak a few things. If you have a Samsung Galaxy phone, these settings won’t magically transform your main camera into an entirely new piece of hardware, but they can put you in a position to capture the best photos your phone can muster.

Turn on the composition guide

Alignment is easier when you can see lines

Grid lines visible using the composition guide feature in the Galaxy Z Fold 6 camera app. Credit: Bertel King / How-To Geek

Much of what makes a good photo has little to do with how many megapixels your phone puts out. It’s all about the fundamentals, like how you compose a shot. One of the most important aspects is the placement of your subject.

Whether you’re taking a picture of a person, a pet, a product, or a plant, placement is everything. Is the photo actually centered? Or, if you’re trying to cultivate more visual interest, are you adhering to the rule of thirds (not that the rule of thirds is the be-all and end-all)? In either case, having an on-screen grid makes all the difference.

To turn on the grid, tap on the menu icon and select the settings cog. Then scroll down until you see Composition guide and tap the toggle to turn it on.

Going forward, whenever you open your camera, you will see a Tic Tac Toe-shaped grid on your screen. Now, instead of merely raising your phone and snapping the shot, take the time to make sure everything is aligned.

Take advantage of your camera’s max resolution

Having more pixels means you can capture more detail

I have a Samsung Galaxy Z Fold 6. The camera hardware on my book-style foldable phone is identical to that of the Galaxy S24 released in the same year, and it hasn’t changed much in the Galaxy S25 or the Galaxy S26 released since. On each of these phones, however, the camera app doesn’t take advantage of the full 50MP that the main sensor can produce. Instead, photos are binned down to 12MP. The same thing happens even if you have the 200MP camera found on the Galaxy S26 Ultra and the Galaxy Z Fold 7.

To take photos at the maximum resolution, open the camera app and look for the words “12M” written at either the top or side of the screen, depending on how you’re holding your phone. The number appears right next to the indicator that toggles your flash on or off. For me, tapping here changes the text from 12M to 50M.

Photo resolution toggle in the camera app of a Samsung Galaxy Z Fold 6. Credit: Bertel King / How-To Geek

But wait, we aren’t done yet. To save storage, your phone may revert to 12MP once you’re done using the app. After all, 12MP is generally enough for most quick snaps, looks just fine on social media, and binning brings other benefits besides. But if you want your photos to stay at the higher resolution every time you open the camera app, return to the camera settings as we did to enable the composition guide, then scroll down until you see Settings to keep. From there, select High picture resolutions.

Use volume keys to zoom in and out

Less reason to move your thumb away from the shutter button

Using volume keys to zoom in the camera app on a Samsung Galaxy Z Fold 6. Credit: Bertel King / How-To Geek

Our phones come with the camera icon saved as one of the favorites at the bottom of the home screen. I immediately get rid of this icon. When I want to take a photo, I double-tap the power button instead.

Physical buttons come in handy once the app is open as well. By default, pressing the volume keys snaps a photo. Personally, I just tap the shutter button on the screen, since my thumb hovers there anyway. So what else can the volume keys do? I like them to control zoom. I don’t zoom often enough to remember which gesture zooms in and which zooms out, and I tend to overshoot the level of zoom I want. Assigning zoom to the volume keys gives me a more predictable and precise degree of control.

To zoom in and out with the volume keys, open the camera settings and select Shooting methods > Press Volume buttons to. From here, you can change “Take picture or record video” to “Zoom in or out.”

Adjust exposure

Brighten up a photo before you take it

Exposure setting in the camera app on a Samsung Galaxy Z Fold 6. Credit: Bertel King / How-To Geek

The most important aspect of a photo is how much light your lens is able to take in. If there’s too much light, your photo is washed out. If there isn’t enough light, then you don’t have a photo at all.

Exposure allows you to adjust how much light reaches your phone’s image sensor. If you can see that a window in the background is so bright that none of the details are coming through, you can turn down the exposure. If a photo is so dark you can’t make out the subject, try turning the exposure up. Exposure isn’t a miracle worker; nothing makes up for proper lighting. But knowing how to adjust it can help you eke out a usable shot when you wouldn’t have one otherwise.

To access exposure, tap the menu button, then tap the icon that looks like a plus and a minus symbol inside of a circle.

From this point, you can scroll up and down (or side to side, if holding the phone vertically) to increase or decrease exposure. If you really want to get creative, you can turn your photography up a notch by learning how to take long exposure shots on your Galaxy phone.


Help your camera succeed

Will changing these settings suddenly turn all of your photos into the perfect shot? No. No camera can do that, even one that costs thousands of dollars. But frankly, I take most of my photos for How-To Geek using my phone, and these settings help me get the job done.

Samsung Galaxy Z Fold 7

Brand: Samsung
RAM: 12GB
Storage: 256GB
Battery: 4,400mAh
Operating System: One UI 8
Connectivity: 5G, LTE, Wi-Fi 7, Bluetooth 5.4

Samsung’s thinnest and lightest Fold yet feels like a regular phone when closed and a powerful multitasking machine when open. With a brighter 8-inch display and on-device Galaxy AI, it’s ready for work, play, and everything in between.
