Nvidia DLSS 5 might be the future of graphics, and I still want a giant “Off” button



For years, photo-realism was seen as the ultimate goal for next-gen games. Ray-tracing was a solid step forward. And then came super-resolution and super-sampling upgrades. Yet, when Nvidia showcased its next great advancement for video game visuals, the fifth-gen Deep Learning Super Sampling, it stirred a furor. Interestingly, DLSS 5 is not just another version of DLSS with a few cleaner edges and a better performance story.

Nvidia is pitching it as a real-time neural rendering model that can add more photoreal lighting and material detail to a game frame, which is a much bigger shift than plain upscaling. That’s a bold technical swing, and a risky aesthetic one. It sounds impressive, and to be fair, part of it genuinely is. If DLSS 5 works as intended, it could help games look richer without developers brute-forcing every lighting effect the traditional way.

Announced at GTC, DLSS 5 is set to release in the fall of 2026 as Nvidia's biggest graphics leap since real-time ray tracing. But the first reaction wasn't applause; it was memes about "AI faces," "AI slop," and "yassified" characters. Nvidia insists we're all wrong, but that only raises the question: do we actually need this?

What does DLSS 5 even do, and is it actually useful?

Nvidia says DLSS 5 takes each frame rendered by the game, plus motion data, to generate more photoreal lighting and materials in real time. On paper, it should better handle things like skin, hair, and fabric. The company is also positioning it as part of a broader neural rendering future, rather than a one-off gimmick. For photoreal games chasing more realistic lighting, this is a compelling pitch.

This isn’t meant to be a blind, one-click beauty filter either. Developers are supposed to get full control over intensity, color grading, and masking. DLSS 5 also integrates through Nvidia Streamline, meaning studios can decide exactly where the effect applies (and where it doesn’t).

There is a fair pro-DLSS 5 argument here. Traditional rendering is expensive, especially when developers want cinematic lighting without sacrificing frame rates. A tool that can bridge some of that gap could absolutely benefit players, particularly in big-budget, realistic single-player games.

If it’s so advanced, why does it keep getting called an AI filter?

It didn't help that, on the sidelines of GTC, Nvidia chief Jensen Huang said gamers are getting DLSS 5 completely wrong. But if that's the case, why is the criticism so nearly unanimous? Because it is not just people yelling "AI bad" on autopilot.

A big reason the "AI filter" label stuck is that some of the public explanations make DLSS 5 sound closer to smart image reinterpretation than to something deeply aware of a game's full 3D scene. According to Nvidia's Jacob Freeman, the system takes the rendered frame and motion vectors as inputs, while leaving the underlying geometry unchanged.

That is exactly why critics are uneasy. If DLSS 5 is mainly working from a 2D frame plus motion information, then it is still guessing. And this guesswork is how you end up with that uncanny, over-baked look people immediately noticed in early demos.

Once a GPU feature starts changing facial tone, lighting mood, or the overall feel of a scene, people stop seeing it as a harmless enhancement and start seeing it as aesthetic interference.

Death of artistic intent?

This is the biggest question hanging over DLSS 5. Nvidia CEO Jensen Huang has defended the tech aggressively, emphasizing that developers get full control of intensity, grading, and masking. That all sounds reassuring in theory, but my eyes say otherwise.

In the demo, DLSS 5 noticeably shifts color grading and contrast in ways that make you question whether developers actually opted into those changes.

Resident Evil Requiem has one of the most jarring showcases of this tech, with Grace getting what looks like subtle makeup applied to her eyes and lips. Other examples, like Starfield, reinforce this oddly generic look, one that adds "detail" without necessarily adding to the immersion.

Going by various videos and posts online, both gamers and some developers were put off by the beauty-filter effect in character faces. And while Nvidia claims developers will have full control, some were blindsided by the announcement altogether, including people working at major studios like Capcom. One developer at Ubisoft even said, “We found out at the same time as the public.”

When the key selling point becomes “look how much the AI changed this,” it is hard to blame people for asking whether the original art direction is being preserved or overwritten.

Are gamers overreacting or spotting a real problem early?

The community response has been messy, but it is not baseless. Reddit threads are full of people calling DLSS 5 "AI slop," with valid complaints that the tech wipes out moody lighting, homogenizes visual style, and makes games look plasticky or uncanny. These blunt reactions also point to a real fear: that a single AI model could leave two very different games with the same glossy, Nvidia-approved look.

Are we supposed to actually believe DLSS 5 gives developers control to maintain a game’s “unique aesthetic” when the examples they show completely change the artistic style of some characters?

“Ah yes, this completely different looking person is what I wanted all along!” https://t.co/vSWDw51A29

— Hardware Unboxed (@HardwareUnboxed) March 16, 2026

My take is simple: DLSS 5 is not automatically doomed, and it is not fair to dismiss the tech as worthless. But Nvidia is asking players to trust an AI layer with something more important than frame rate: a game's visual identity. That is a much harder sell.

Until DLSS 5 proves that it can enhance games without making them feel AI-treated, the criticism is not just valid, it is necessary.





Smartphones have amazing cameras, but I'm not happy with any of them out of the box. I have to tweak a few things. If you have a Samsung Galaxy phone, these settings won't magically transform your main camera into an entirely new piece of hardware, but they can put you in a position to capture the best photos your phone can muster.

Turn on the composition guide

Alignment is easier when you can see lines

Grid lines visible using the composition guide feature in the Galaxy Z Fold 6 camera app. Credit: Bertel King / How-To Geek

Much of what makes a good photo has little to do with how many megapixels your phone puts out. It’s all about the fundamentals, like how you compose a shot. One of the most important aspects is the placement of your subject.

Whether you’re taking a picture of a person, a pet, a product, or a plant, placement is everything. Is the photo actually centered? Or, if you’re trying to cultivate more visual interest, are you adhering to the rule of thirds (which is not to suggest that the rule of thirds is an end-all, be-all)? In either case, having an on-screen grid makes all the difference.

To turn on the grid, tap on the menu icon and select the settings cog. Then scroll down until you see Composition guide and tap the toggle to turn it on.

Going forward, whenever you open your camera, you will see a Tic Tac Toe-shaped grid on your screen. Now, instead of merely raising your phone and snapping the shot, take the time to make sure everything is aligned.

Take advantage of your camera’s max resolution

Having more pixels means you can capture more detail

I have a Samsung Galaxy Z Fold 6. The camera hardware on my book-style foldable phone is identical to that of the Galaxy S24 released in the same year, which hasn’t changed much for the Galaxy S25 or the Galaxy S26 released since. On each of these phones, however, the camera app isn’t taking advantage of the full 50MP that the main lens can produce. Instead, photos are binned down to 12MP. The same thing happens even if you have the 200MP camera found on the Galaxy S26 Ultra and the Galaxy Z Fold 7.

To take photos at the maximum resolution, open the camera app and look for the words “12M” written at either the top or side of your phone, depending on how you’re holding it. The numbers will appear right next to the indicator that toggles whether your flash is on or off. For me, tapping here changes the text from 12M to 50M.

Photo resolution toggle in the camera app of a Samsung Galaxy Z Fold 6. Credit: Bertel King / How-To Geek

But wait, we aren't done yet. To save storage, your phone may revert to 12MP once you're done using the app. After all, 12MP is generally enough for most quick snaps, looks just fine on social media, and comes with the other benefits of pixel binning. But if you want your photos to stay at the higher resolution every time you open the camera app, return to the camera settings as we did to enable the composition guide, then scroll down until you see Settings to keep. From there, select High picture resolutions.

Use volume keys to zoom in and out

Less reason to move your thumb away from the shutter button

Using volume keys to zoom in the camera app on a Samsung Galaxy Z Fold 6. Credit: Bertel King / How-To Geek

Our phones come with the camera icon saved as one of the favorites at the bottom of the home screen. I immediately get rid of this icon. When I want to take a photo, I double-tap the power button instead.

Physical buttons come in handy once the app is open as well. By default, pressing the volume keys snaps a photo. Personally, I just tap the shutter button on the screen, since my thumb hovers there anyway. So what else can the volume keys do? I like them to control zoom. I don't zoom often enough to remember whether a given swipe will zoom in or out, and I tend to overshoot the level of zoom I want. Assigning zoom to the volume keys gives me a more predictable, precise degree of control.

To zoom in and out with the volume keys, open the camera settings and select Shooting methods > Press Volume buttons to. From here, you can change “Take picture or record video” to “Zoom in or out.”

Adjust exposure

Brighten up a photo before you take it

Exposure setting in the camera app on a Samsung Galaxy Z Fold 6. Credit: Bertel King / How-To Geek

The most important aspect of a photo is how much light your lens is able to take in. If there’s too much light, your photo is washed out. If there isn’t enough light, then you don’t have a photo at all.

Exposure allows you to adjust how much light reaches your phone's image sensor. If you can see that a window in the background is so bright that none of the details are coming through, you can turn down the exposure. If a photo is so dark you can't make out the subject, try turning the exposure up. Exposure isn't a miracle worker; there's no substitute for proper lighting. But knowing how to adjust exposure can help you eke out a usable shot when you wouldn't have one otherwise.

To access exposure, tap the menu button, then tap the icon that looks like a plus and a minus symbol inside of a circle.

From this point, you can scroll up and down (or side to side, if holding the phone vertically) to increase or decrease exposure. If you really want to get creative, you can turn your photography up a notch by learning how to take long exposure shots on your Galaxy phone.


Help your camera succeed

Will changing these settings suddenly turn all of your photos into the perfect shot? No. No camera can do that, even if you spend thousands of dollars to buy it. But frankly, I take most of my photos for How-To Geek using my phone, and these settings help me get the job done.

Samsung Galaxy Z Fold 7

Brand: Samsung
RAM: 12GB
Storage: 256GB
Battery: 4,400mAh
Operating System: One UI 8
Connectivity: 5G, LTE, Wi-Fi 7, Bluetooth 5.4

Samsung's thinnest and lightest Fold yet feels like a regular phone when closed and a powerful multitasking machine when open. With a brighter 8-inch display and on-device Galaxy AI, it's ready for work, play, and everything in between.



