NVIDIA keeps neglecting frame gen on Linux, so I turned to this $7 alternative instead


Gaming on Linux is better today than it ever has been, but that doesn’t mean it is perfect. NVIDIA’s drivers are notoriously finicky, especially when you want to use more advanced features like DLSS or Frame Generation.

After one too many times fighting with a configuration file, I decided to try a third-party alternative instead.

NVIDIA’s frame gen is hit-and-miss on Linux

It has gotten better, but it isn’t perfect

NVIDIA’s graphics drivers on Linux, which include Frame Generation and DLSS, have improved dramatically over the last few years. A lot of the credit goes to Valve for their work on Proton—without it, Frame Generation wouldn’t be possible on Linux at all.

However, despite the significant improvements, Frame Generation (and DLSS) on Linux is still unreliable. Sometimes, after an update to Proton or your NVIDIA drivers, the option to enable Frame Gen disappears completely. On a handful of occasions, I’ve had to use experimental versions of Proton or track down specific flags to enable Frame Gen at all.

Even when you can enable it, you’ll find a lot of complaints about wildly inconsistent frame rates, jittery or distorted interfaces, or performance far below what you’d get on Windows.

That problem is exacerbated by the fact that not all RTX cards support every version of DLSS or Frame Gen. What works for someone with an RTX 2070 might not work for someone using an RTX 5070 Ti.

Those inconsistencies ultimately led me to look for something more reliable.

There is a $7 third-party alternative

lsfg-vk builds on Lossless Scaling


Lossless Scaling is a popular Windows application that brings frame generation and upscaling to almost any PC—no modern GPU with hardware support for AI features required. I use it on my laptop all the time, and it can often turn an unplayable game into a decent one.

There is only one major snag for Linux users: Lossless Scaling is only for Windows.

That is where lsfg-vk comes in. Lsfg-vk reuses the frame generation algorithm that ships with Lossless Scaling, but instead of hooking into Windows, it hooks into the Vulkan API to insert interpolated frames. That sounds limiting at first, since many games—especially older ones—use DirectX rather than Vulkan.

However, Proton includes two translation layers (DXVK and VKD3D) that automatically convert DirectX API calls into Vulkan calls. That means you can use frame generation with almost any Windows game on Linux, even games that don’t natively use Vulkan.
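
If you want to confirm that a DirectX game really is being translated to Vulkan (and is therefore something lsfg-vk can hook into), DXVK includes an overlay you can enable from the game’s launch options in Steam:

DXVK_HUD=api,fps %command%

If the overlay appears in-game, the title is running through DXVK’s Vulkan path.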

It isn’t just NVIDIA—it works with everything

AMD, Intel, and Integrated GPUs can all use it

Lsfg-vk shares another thing with Lossless Scaling: it runs on almost any modern hardware.

The only strict requirement is that the GPU needs to support Vulkan 1.3, but that is pretty easy to meet. Vulkan 1.3 has been around since 2022. If you have a GPU made in the last 10 years, it is very likely that lsfg-vk will work for you.
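
If you aren’t sure what your driver supports, the vulkaninfo tool (part of the vulkan-tools package on most distros) reports the Vulkan version each GPU exposes:

vulkaninfo --summary | grep apiVersion

As long as the reported apiVersion is 1.3 or higher, lsfg-vk should be happy.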

Besides that, lsfg-vk isn’t too picky about your GPU manufacturer—it’ll run on AMD, NVIDIA, and Intel without a problem. I use it on both my laptop (which has an AMD integrated GPU) and my desktop, which has an RTX 5070 Ti in it.

In general, AMD GPUs tend to see the biggest performance increase, since lsfg-vk has a specific option (allow_fp16) that AMD cards benefit from, while NVIDIA and Intel cards generally don’t.
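
As a rough illustration (the exact file layout can differ between lsfg-vk releases, so treat this as a sketch rather than a reference), a per-game profile with that option enabled looks something like this in lsfg-vk’s TOML config file:

# ~/.config/lsfg-vk/conf.toml (layout may vary by version)
version = 1

[[game]]
exe = "mygame.exe"    # hypothetical process name to hook
multiplier = 2        # 2x frame generation
allow_fp16 = true     # FP16 path that mainly benefits AMD GPUs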

lsfg-vk works on handhelds like the Steam Deck

Lsfg-vk also works on any x86-based handheld gaming platform, which includes Asus’s ROG Ally lineup, the Steam Deck, and Lenovo’s Legion Go. It has become so popular that there is a dedicated Decky plugin that makes installing and using lsfg-vk easier on Steam Decks.

Not every game is a great candidate for frame generation on a handheld, but it can dramatically improve your performance in some titles. If you’re struggling to get the performance you want, I’d certainly recommend that you try it.

Getting lsfg-vk working on Linux

Some configuration required

Lossless Scaling being installed via Steam.

Lsfg-vk isn’t quite a one-click setup, but it is pretty straightforward. First, you need to install Lossless Scaling from Steam, since lsfg-vk relies on some of its assets to work.

Once you have Lossless Scaling installed, there are some packages you may need to install in advance. I’m using Kubuntu, which is based on Ubuntu, so I ran:

sudo apt install qt6-qpa-plugins libqt6quick6 qml6-module-qtquick-controls qml6-module-qtquick-layouts qml6-module-qtquick-window qml6-module-qtquick-dialogs qml6-module-qtqml-workerscript qml6-module-qtquick-templates qml6-module-qt-labs-folderlistmodel

If you’re running a distro based on Arch, or on Fedora (like Bazzite), the package names and commands you need will be different.
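
For reference (package names vary between distros, so double-check against your repositories), the equivalent Qt Quick dependencies ship as a single package on Arch and Fedora:

sudo pacman -S qt6-declarative
sudo dnf install qt6-qtdeclarative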

With those installed, all you need to do is download the latest stable version of lsfg-vk from GitHub and install the package with the following command:

sudo apt install ./lsfg-vk-1.0.0.x86_64.deb

If you’re using Fedora or Arch, you’ll need to use DNF or Pacman respectively.
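
Assuming the release page offers matching RPM and pacman packages (check the actual file names on GitHub before running these), the equivalent commands look like this:

sudo dnf install ./lsfg-vk-<version>.x86_64.rpm
sudo pacman -U ./lsfg-vk-<version>-x86_64.pkg.tar.zst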

Installing the DEB file for lsfg-vk using the Terminal.

Then you’ll be able to launch lsfg-vk from whichever application launcher your distro uses. I’d recommend creating a profile for each game you’re going to be playing, since not every game is an ideal candidate for frame generation.

A basic Skyrim profile in the lsfg-vk Configuration Menu.
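
Lsfg-vk also supports an environment variable that overrides which profile a process matches (LSFG_PROCESS; check the developer’s documentation mentioned below to confirm it exists in your version). You can set it from a game’s launch options in Steam:

LSFG_PROCESS=Skyrim %command%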

The developer has provided a comprehensive explanation of what each profile setting does and how it will affect your gameplay. I’d recommend reading it if you run into trouble.


Lsfg-vk is a great tool for Linux gamers

Gaming on Linux still isn’t perfect, but tools like lsfg-vk and Proton are rapidly closing the gap with Windows. Some games now even run better on Linux—something that was unthinkable 10 years ago.

For everything that doesn’t, lsfg-vk is a great way to eke out some extra frames in the meantime.





As I’m writing this, NVIDIA is the largest company in the world, with a market cap exceeding $4 trillion. Team Green is now the leader among the Magnificent Seven of the tech world, having surpassed them all in just a few short years.

The company has managed to reach these incredible heights with smart planning and by making the right moves for decades, the latest being the decision to sell shovels during the AI gold rush. Considering the current hardware landscape, there’s simply no reason for NVIDIA to rush a new gaming GPU generation for at least a few years. Here’s why.

Scarcity has become the new normal

Not even NVIDIA is powerful enough to overcome market constraints

Global memory shortages have been a reality since late 2025, and they aren’t limited to RAM and storage products. They affect every company making any product that contains memory or storage—including graphics cards.

NVIDIA sells GPU-and-memory bundles to its board partners, who solder them onto PCBs and add cooling to create full-blown graphics cards. That means NVIDIA doesn’t just have to battle other tech giants for a chunk of TSMC’s limited production capacity to make its GPU dies; it also has to procure massive amounts of GPU memory, which has never been harder or more expensive to obtain.

While a company as large as NVIDIA certainly has long-term contracts that guarantee stable memory prices, those contracts won’t last forever. The company has likely had to sign new ones already, judging by the GPU price surge that began at the start of 2026 and the fact that gaming graphics cards remain overpriced.

With GPU memory costing more than ever, NVIDIA has little reason to rush a new gaming GPU generation, because its gaming earnings are just a drop in the bucket compared to its total earnings.

NVIDIA is an AI company now

Gaming GPUs are taking a back seat

A graph showing NVIDIA revenue breakdown in the last few years. Credit: appeconomyinsights.com

NVIDIA’s gaming division had been its golden goose for decades, but come 2022, the company’s data center and AI division’s revenue started to balloon dramatically. By the beginning of fiscal year 2023, data center and AI revenue had surpassed that of the gaming division.

In fiscal year 2026 (which for NVIDIA runs from late January 2025 to late January 2026), NVIDIA’s gaming revenue has contributed less than 8% of the company’s total earnings so far. On the other hand, the data center division has generated almost 90% of NVIDIA’s total revenue in fiscal year 2026. What I’m trying to say is that NVIDIA is no longer a gaming company—it’s all about AI now.

Considering that we’re in the middle of the biggest memory shortage in history, and that its AI GPUs rake in almost ten times the revenue of gaming GPUs, there’s little reason for NVIDIA to funnel exorbitantly priced memory toward gaming GPUs. It’s much more profitable to put every memory chip they can get their hands on into AI GPU racks and continue receiving mountains of cash by selling them to AI behemoths.

The RTX 50 Super GPUs might never get released

A sign of things to come

NVIDIA’s RTX 50 Super series was supposed to increase the memory capacity of its most popular gaming GPUs. The 16GB RTX 5080 was to be superseded by a 24GB RTX 5080 Super, the 16GB RTX 5070 Ti was in line for a similar upgrade, and an 18GB RTX 5070 Super was to replace its 12GB non-Super sibling. But according to recent reports, NVIDIA has put the lineup on ice.

The RTX 50 Super launch had been slated for this year’s CES in January, but after missing the show, it now looks like NVIDIA has delayed the lineup indefinitely. According to a recent report, NVIDIA doesn’t plan to launch a single new gaming GPU in 2026. Worse still, the RTX 60 series, which had been expected to debut sometime in 2027, has also been delayed.

A report by The Information (via Tom’s Hardware) states that NVIDIA had finalized the design and specs of its RTX 50 Super refresh, but the RAM-pocalypse threw a wrench into the works, forcing the company to “deprioritize RTX 50 Super production.” In other words, it’s exactly what I said a few paragraphs ago: selling enterprise GPU racks to AI companies is far more lucrative than selling comparatively cheaper GPUs to gamers, especially now that memory prices have been skyrocketing.

Before putting the RTX 50 Super series on ice, NVIDIA had already slashed its gaming GPU supply by about a fifth and started prioritizing models with less VRAM, like the 8GB versions of the RTX 5060 and RTX 5060 Ti, so this news isn’t that surprising.

So when can we expect RTX 60 GPUs?

Late 2028-ish?

A GPU with a pile of money around it. Credit: Lucas Gouveia / How-To Geek

The good news is that the RTX 60 series is definitely in the pipeline, and we will see it sooner or later. The bad news is that its release date is up in the air, and it’s best not to even think about pricing. The word on the street around CES 2026 was that NVIDIA would release the RTX 60 series in mid-2027, give or take a few months. But as of this writing, it’s increasingly likely we won’t see RTX 60 GPUs until 2028.

If you’ve been following the discussion around memory shortages, this won’t be surprising. In late 2025, the prognosis was that we wouldn’t see the end of the RAM-pocalypse until 2027, maybe 2028. But a recent statement from SK Hynix’s chairman (the company is one of the world’s three largest memory manufacturers) warns that the global memory shortage may last well into 2030.

If that turns out to be true, and if the global AI data center boom doesn’t slow down in the next few years, I wouldn’t be surprised if NVIDIA delays the RTX 60 GPUs as long as possible. There’s a good chance we won’t see them until the second half of 2028, and I wouldn’t be surprised if they miss that window as well if memory supply doesn’t recover by then. Data center GPUs are simply too profitable for NVIDIA to reserve a meaningful portion of memory for gaming graphics cards as long as shortages persist.


At least current-gen gaming GPUs are still a great option for any PC gamer

If there is a silver lining here, it is that current-gen gaming GPUs (NVIDIA RTX 50 and AMD Radeon RX 90) are still more than powerful enough for any current AAA title. Considering that Sony is reportedly delaying the PlayStation 6 and that global PC shipments are projected to see a sharp, double-digit decline in 2026, game developers have little incentive to push requirements beyond what current hardware can handle.

DLSS 5 may be the future of gaming, but it has few fans so far, and it will take a few years (and likely the arrival of the RTX 60 lineup) for it to mature and become usable on anything that isn’t an RTX 5090.

If you’re open to buying used GPUs, even last-gen gaming graphics cards offer tons of performance and can handle any AAA game you throw at them. While we likely won’t get a new gaming GPU from NVIDIA for at least a few years, at least the ones we’ve got are great today and will continue to chew through any game for the foreseeable future.


