These 5 graphics settings look incredible but won’t tank your FPS


We’re now in an era where fancy rendering techniques like ray tracing and even full path tracing tend to steal the show. I get it—they look incredible in screenshots, tech demos, and gameplay videos on high-end machines, but in practice, they bring even the most powerful systems to their knees.

Even traditional options like shadows and volumetric lighting can easily eat up a big chunk of your GPU's frame-time budget. For most players, these features are a luxury they either can't afford or don't want to pay for in frames. The good news is that not every visual upgrade comes with that kind of performance cost. Here are a few options that provide real visual improvements while barely touching performance.

Sharpen distant objects for a negligible cost with anisotropic filtering

A noticeable improvement that costs almost nothing

Anisotropic filtering is one of those options that seems to be available in every game imaginable, yet nobody really knows what it does. In short, the feature improves how distant objects look when viewed at an angle. With older techniques like bilinear and trilinear texture filtering, objects in the distance can look muddy and squished, an issue that’s quickly fixed by turning anisotropic filtering up to x16.

Anisotropic filtering works by taking the viewing angle into consideration. Instead of treating every surface like a flat square facing the camera, it stretches the texture sampling to match the slant of distant surfaces. When you raise the level from, say, x8 to x16, you're allowing the graphics card to take up to 16 samples across that angled surface instead of 8, giving it more opportunities to correctly reconstruct detail in the distance.
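To make that concrete, here's a toy Python model of the idea. The `anisotropic_samples` function and its cosine-based elongation are my own simplification for illustration, not how a real GPU's texture units are wired:

```python
import math

def anisotropic_samples(view_angle_deg, max_anisotropy=16):
    """Toy model: the steeper the viewing angle, the more elongated the
    texture footprint on screen, and the more samples AF takes along its
    long axis. The hardware clamps the count at the configured maximum."""
    angle = math.radians(view_angle_deg)
    # Footprint elongation grows as the surface tilts away from the camera.
    elongation = 1.0 / max(math.cos(angle), 1e-6)
    # Number of filtering "taps" along the long axis, capped by the setting.
    return min(math.ceil(elongation), max_anisotropy)

# A surface facing the camera head-on needs only one tap...
print(anisotropic_samples(0))   # 1
# ...while a near-grazing road surface hits the x16 cap.
print(anisotropic_samples(87))  # 16
```

This is also why the setting is so cheap: surfaces facing the camera, which make up most of a typical frame, never pay for the extra samples at all.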

An image comparing trilinear vs anisotropic filtering. Credit: Intel

This sounds like it would have a massive impact on performance, but it barely has any. The reason it's so efficient is that it only takes a few extra samples from texture data the GPU is already fetching, work that modern texture-sampling hardware and caches handle with ease.

Despite the minimal performance hit, anisotropic filtering can significantly improve how a game looks, especially in open-world games with lots of distant objects and fast motion, like when driving cars or riding horses.

Get a nicer-looking world without slowing down your GPU by increasing texture quality

Environments and characters look better, and FPS stays the same

Cyberpunk with medium details on the left and ultra details on the right.

Texture quality sounds like it would be an incredibly taxing graphics option. However, the performance impact depends almost entirely on your VRAM. All increasing texture quality does is change the resolution of the texture files. As long as you have enough VRAM to store the high-resolution textures, the performance impact of changing this setting is negligible in the vast majority of games.

If a game requires 8GB of VRAM for ultra textures, and you have a graphics card with 8GB or more, you should max the setting out. On the other hand, you should never try to force higher-resolution textures than your graphics card's VRAM can hold, as this can cause stuttering when your system has to move textures from the much slower system RAM to the graphics card's VRAM.
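You can ballpark the numbers yourself. The sketch below estimates the uncompressed size of a single texture; real games use block compression (BC1/BC7), which shrinks these figures by roughly 4-8x, but the scaling with resolution is the same:

```python
def texture_vram_mb(width, height, bytes_per_pixel=4, mipmaps=True):
    """Rough uncompressed size of one RGBA texture in MB."""
    base = width * height * bytes_per_pixel
    # A full mip chain adds roughly one third on top of the base level.
    total = base * 4 / 3 if mipmaps else base
    return total / (1024 * 1024)

# Going from a 2K to a 4K texture quadruples the memory it needs:
print(round(texture_vram_mb(2048, 2048)))  # 21 (MB)
print(round(texture_vram_mb(4096, 4096)))  # 85 (MB)
```

Multiply that by the hundreds of textures a scene keeps resident, and it's clear why the setting is all about VRAM capacity rather than GPU speed.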


Despite the minimal performance impact, changing texture quality can be a massive visual boost. Everything from the environment to your character’s armor and weapons looks more detailed. It’s like looking at a 1080p photo compared to an 8K one—higher-resolution textures just look better.

I have to mention one caveat, though: not all games treat the texture quality setting equally. Sometimes the option is bundled together with other settings like parallax occlusion mapping or mesh quality (which affects things like hair and foliage), which themselves require more processing power and can cause a small performance drop. If we’re just talking about textures, though—max that option out if you have enough VRAM.


Depth doesn’t always require sacrificing FPS

Screen space ambient occlusion (SSAO) is a technique that essentially "fakes" complex lighting—the developers behind Crysis were among the first to use it because hardware powerful enough for real-time ray tracing simply didn't exist. SSAO looks at the depth of nearby pixels and estimates how much ambient light would reach a given point, adding soft contact shadows to corners, crevices, and the areas around objects.
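Here's a toy Python version of that idea. A real implementation works on the GPU's depth buffer with a hemisphere of randomized samples; the 3x3 neighborhood and `bias` value below are simplifications for illustration:

```python
def ssao_occlusion(depth, x, y, radius=1, bias=0.02):
    """Toy screen-space AO: sample depths around a pixel and estimate how
    many neighbors sit in front of it (i.e. could block ambient light).
    `depth` is a 2D list of per-pixel depth values, smaller = closer."""
    center = depth[y][x]
    occluded = samples = 0
    for dy in (-radius, 0, radius):
        for dx in (-radius, 0, radius):
            if dx == 0 and dy == 0:
                continue
            ny, nx = y + dy, x + dx
            if 0 <= ny < len(depth) and 0 <= nx < len(depth[0]):
                samples += 1
                # A neighbor noticeably closer than us is a potential occluder.
                if depth[ny][nx] < center - bias:
                    occluded += 1
    # 1.0 = fully lit, lower = darker corner.
    return 1.0 - occluded / samples if samples else 1.0

# A pixel at the bottom of a crease (surrounded by closer geometry)
# comes out darker than one on a flat wall:
flat   = [[0.5] * 3 for _ in range(3)]
crease = [[0.3, 0.3, 0.3], [0.3, 0.5, 0.3], [0.3, 0.3, 0.3]]
print(ssao_occlusion(flat, 1, 1))    # 1.0
print(ssao_occlusion(crease, 1, 1))  # 0.0
```

Because everything comes from the depth buffer the game already has, no extra geometry processing is needed, which is exactly why SSAO is so cheap.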

SSAO is the simplest form of ambient occlusion, and as such, it’s by far the easiest to calculate. Depending on the game, you might see a modest 1–2% performance impact, though some titles can experience FPS drops of around 6–7%.

Even if the performance impact isn’t always negligible, it makes such a noticeable visual improvement that it’s hard to recommend turning it off if you’re getting playable FPS. I only suggest disabling it in esports titles where every frame counts and, more importantly, where shadows can make it harder to spot enemies.

Note that SSAO is sometimes part of the broader post-processing effects stack, which generally doesn’t have a massive performance impact.

Options like motion blur, depth of field, bloom, film grain, and vignette, along with color grading modes such as Cinema Siciliano in Mafia: The Old Country, Kurosawa Mode in Ghost of Tsushima, and the render modes in The Last of Us Part I and II, can have a strong visual impact while barely affecting FPS.

Smooth out jagged edges with basic anti-aliasing techniques

Say goodbye to edge shimmer, even on low-end machines

A screenshot from Battlefield V with TAA enabled.

The goal of anti-aliasing is to reduce the jagged, stair-step edges on objects. In my opinion, this is often the single most striking graphics option in the menu, especially if the game has a lot of jagged edges by default. Traditional MSAA (multi-sample AA) and SSAA (super-sample AA) often deliver the best results, but they come with a massive performance hit. SSAA essentially renders the game at a higher resolution and then shrinks it down to reduce jagged edges. MSAA does something similar, except it only takes extra samples along the edges of geometry. Even so, it's a demanding option, and a setting as low as 4x MSAA can cut your FPS in half.
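A rough cost model shows why these options hurt so much. The `edge_fraction` knob below is my own simplification for how much of the frame MSAA actually multi-samples:

```python
def aa_shading_cost(width, height, samples, edge_fraction=1.0):
    """Toy cost model. SSAA shades every pixel `samples` times
    (edge_fraction=1.0); MSAA only adds samples for pixels on triangle
    edges, so its extra cost scales with how much of the frame is edges."""
    pixels = width * height
    return pixels + pixels * edge_fraction * (samples - 1)

native = 1920 * 1080
# SSAA 4x does four times the shading work of native rendering:
print(aa_shading_cost(1920, 1080, 4) / native)                     # 4.0
# An MSAA-like pass touching ~10% of pixels is cheaper, but still not free:
print(aa_shading_cost(1920, 1080, 4, edge_fraction=0.1) / native)  # ~1.3
```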

Fortunately, developers have come up with alternative AA techniques that deliver decent visual results at a fraction of the performance cost. FXAA (fast approximate AA) looks for jagged edges and blurs them. Depending on the game, this can be an incredibly efficient way to eliminate jagged edges, but if it’s poorly implemented, it can make the whole image look like someone rubbed petroleum jelly across your screen.
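Conceptually, FXAA boils down to something like this toy sketch. Real FXAA estimates the edge direction and blends along it; the hypothetical `fxaa_pixel` function below just averages neighbors whenever local contrast is high:

```python
def fxaa_pixel(luma, x, y, threshold=0.1):
    """Toy FXAA: if local luma contrast exceeds a threshold, the pixel
    sits on an edge and gets blurred with its neighbors; otherwise it's
    left untouched, which is why flat areas stay sharp."""
    c = luma[y][x]
    n, s = luma[y - 1][x], luma[y + 1][x]
    w, e = luma[y][x - 1], luma[y][x + 1]
    contrast = max(c, n, s, w, e) - min(c, n, s, w, e)
    if contrast < threshold:
        return c                      # flat area: no work, no blur
    return (c + n + s + w + e) / 5.0  # edge: cheap smoothing

# A pixel on a hard black/white edge gets softened:
img = [[0.0, 0.0, 1.0],
       [0.0, 0.0, 1.0],
       [0.0, 0.0, 1.0]]
print(fxaa_pixel(img, 1, 1))  # 0.2
```

The "petroleum jelly" look comes from implementations where that threshold is too loose, so texture detail gets treated as edges and smeared.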

TAA (temporal AA) provides much better results because it uses data from previous frames to smooth out edges, resulting in less blur than FXAA while keeping the performance impact minimal. However, it can still introduce some blur, especially when moving the camera quickly.
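The core trick can be sketched in a few lines. Real TAA also reprojects the history buffer using motion vectors, which this toy version skips, and that missing step is precisely where the motion blur it's known for comes from:

```python
def taa_resolve(history, current, alpha=0.1):
    """Toy temporal AA: blend a small fraction of the current frame into
    an accumulated history buffer, so jittered edge samples average out
    over many frames instead of shimmering."""
    return [(1 - alpha) * h + alpha * c for h, c in zip(history, current)]

# An edge pixel whose jittered sample flickers between 0 and 1 each
# frame converges toward its true coverage instead of shimmering:
pixel_history = [0.0]
for frame in range(50):
    current = [1.0 if frame % 2 == 0 else 0.0]  # alternating jittered samples
    pixel_history = taa_resolve(pixel_history, current)
print(round(pixel_history[0], 2))  # ~0.47 (stable, near the true 0.5 coverage)
```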

A solid middle ground comes from upscalers, which are sometimes listed as an option in the AA menu, like FSR (AMD) and DLSS (NVIDIA), along with the related DLAA (NVIDIA). FSR and DLSS render the game at a lower resolution and then upscale it to your monitor's native resolution while also reducing jagged edges using a technique similar to TAA.

DLAA works similarly to DLSS, but runs at native resolution to provide better image quality at the cost of FPS.
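If you're curious what resolution your GPU actually renders at, the sketch below uses commonly cited per-axis scale factors for these quality modes. Treat the exact values in `MODES` as assumptions, since they vary between upscalers and versions:

```python
# Assumed per-axis scale factors for typical upscaler quality modes.
MODES = {"quality": 1 / 1.5, "balanced": 1 / 1.7,
         "performance": 1 / 2.0, "ultra_performance": 1 / 3.0}

def internal_resolution(width, height, mode):
    """Resolution the GPU actually renders before upscaling to native."""
    s = MODES[mode]
    return round(width * s), round(height * s)

# At 4K, "quality" mode renders only ~44% of the native pixel count,
# which is where most of the FPS gain comes from:
print(internal_resolution(3840, 2160, "quality"))      # (2560, 1440)
print(internal_resolution(3840, 2160, "performance"))  # (1920, 1080)
```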

Use image sharpening to get a crisp image without losing a single frame

Reveal more fine detail without affecting performance

Image sharpening has become a staple option in modern AAA games. This is because these games typically support upscaling, which you’ll generally want to use for better performance (even if you’re getting decent FPS without it).

However, depending on how powerful your graphics card is and your exact upscaling settings, some games can appear blurry. This problem is easily fixed by increasing the sharpness.

You can turn it up as much as you like until the game looks crisp to you, and it still won’t impact performance (just don’t push it to 100% if you don’t want your games to look deep-fried). On my AMD graphics card, I have Radeon Image Sharpening enabled in all games and set to 30% in the driver options because I find that games look much better with some sharpening.
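Under the hood, sharpening filters are variations on the classic unsharp mask, as in this toy sketch. Real RIS/CAS is contrast-adaptive and avoids over-sharpening already-crisp areas; this simplified version is not, which is why cranking the slider produces the "deep-fried" look:

```python
def sharpen(luma, x, y, strength=0.3):
    """Toy unsharp mask: boost a pixel by its difference from the local
    average, scaled by a user-facing strength slider (0.0 to 1.0)."""
    c = luma[y][x]
    neighbors = (luma[y - 1][x] + luma[y + 1][x] +
                 luma[y][x - 1] + luma[y][x + 1]) / 4.0
    # Push the pixel away from the local blur; clamp to a valid range.
    return min(1.0, max(0.0, c + strength * (c - neighbors)))

# A bright pixel on a dark background gets pushed further from its
# surroundings, which reads as extra crispness:
img = [[0.2, 0.2, 0.2],
       [0.2, 0.8, 0.2],
       [0.2, 0.2, 0.2]]
print(sharpen(img, 1, 1))  # ~0.98
```

Since it's a single cheap per-pixel filter applied after the frame is rendered, the FPS cost really is effectively zero.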

Don’t leave free visual upgrades on the table

I hope you’ve seen that there are plenty of graphics settings that can deliver a noticeable visual improvement while keeping the performance impact minimal. Keep in mind that how much each setting affects performance varies widely depending on the game. So, whether you have a high-end machine, a potato PC, or something in between, I strongly encourage you to look up the performance impact of individual settings in your favorite games and turn up the ones that won’t hurt your FPS.

