NVIDIA keeps neglecting frame gen on Linux, so I turned to this $7 alternative instead


Gaming on Linux is better today than it ever has been, but that doesn’t mean it is perfect. NVIDIA’s drivers are notoriously finicky, especially when you want to use more advanced features like DLSS or Frame Generation.

After one too many times fighting with a configuration file, I decided to try a third-party alternative instead.

NVIDIA’s frame gen is hit-and-miss on Linux

It has gotten better, but it isn’t perfect

NVIDIA’s graphics drivers on Linux, which provide support for Frame Generation and DLSS, have improved dramatically over the last few years. A lot of the credit goes to Valve for their work on Proton—without it, Frame Generation wouldn’t be possible on Linux at all.

However, despite the significant improvements, Frame Generation (and DLSS) on Linux is still unreliable. Sometimes, after an update to Proton or your NVIDIA drivers, the option to enable Frame Gen disappears completely. On a handful of occasions, I’ve had to use experimental versions of Proton or track down specific flags to enable Frame Gen at all.


Even when you can enable it, you’ll find a lot of complaints about wildly inconsistent frame rates, jittery or distorted interfaces, or performance far below what you’d get on Windows.

That problem is exacerbated by the fact that not all RTX cards support every version of DLSS or Frame Gen. What works for someone with an RTX 2070 might not work for someone with an RTX 5070 Ti.

Those inconsistencies ultimately led me to look for something more reliable.

There is a $7 third-party alternative

lsfg-vk builds on Lossless Scaling

Lossless Scaling logo

Lossless Scaling is a popular Windows application that brings frame generation and upscaling to almost any PC—no modern GPU with hardware support for AI features required. I use it on my laptop all the time, and it can often turn an unplayable game into a decent one.

There is only one major snag for Linux users: Lossless Scaling is only for Windows.

That is where lsfg-vk comes in. Lsfg-vk reuses the frame generation algorithm shipped with Lossless Scaling, hooking into the Vulkan API to insert interpolated frames. That sounds limiting, since many games—especially older ones—rely on DirectX rather than Vulkan.

However, Proton includes two translation layers (DXVK and VKD3D) that automatically convert DirectX API calls into Vulkan calls. That means you can use frame generation with almost any Windows game on Linux, even games that don’t natively use Vulkan.
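If you want to confirm that a game really is running through DXVK, its built-in HUD can be switched on from Steam’s launch options. This is a configuration line rather than a script—`%command%` is Steam’s placeholder for the game’s own launch command:

```shell
# Steam > right-click the game > Properties > Launch Options:
DXVK_HUD=api,fps %command%
```

The `api` element overlays the Direct3D feature level being translated, and `fps` shows the frame rate, which is handy for comparing runs with and without frame generation.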

It isn’t just NVIDIA—it works with everything

AMD, Intel, and Integrated GPUs can all use it

Lsfg-vk shares another thing with Lossless Scaling: it runs on almost any modern hardware.

The only strict requirement is a GPU that supports Vulkan 1.3, and that bar is easy to clear. Vulkan 1.3 has been around since 2022, and if your GPU was made in the last 10 years, it is very likely that lsfg-vk will work for you.
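To check what your driver actually reports, `vulkaninfo` (from the vulkan-tools package) prints an `apiVersion` line in its summary output. Below is a small sketch that pulls that version out and compares it against 1.3—the exact output format of vulkaninfo can vary between versions, so treat the parsing as an assumption:

```shell
# lsfg-vk needs Vulkan 1.3+. `vulkaninfo --summary` prints a line like:
#   apiVersion = 1.3.277
# This helper compares a "major.minor" version string against 1.3.
check_vk13() {
    major=${1%%.*}
    rest=${1#*.}
    minor=${rest%%.*}
    [ "$major" -gt 1 ] || { [ "$major" -eq 1 ] && [ "$minor" -ge 3 ]; }
}

# Pull the reported version and test it (assumes vulkaninfo is installed):
ver=$(vulkaninfo --summary 2>/dev/null | grep -m1 apiVersion | grep -oE '[0-9]+\.[0-9]+' | head -n1)
check_vk13 "${ver:-0.0}" && echo "Vulkan 1.3+ available" || echo "Vulkan 1.3 not found"
```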

Besides that, lsfg-vk isn’t too picky about your GPU manufacturer—it’ll run on AMD, NVIDIA, and Intel without a problem. I use it on both my laptop (which has an AMD integrated GPU) and my desktop, which has an RTX 5070 Ti in it.


In general, AMD GPUs tend to see the biggest performance gains, since lsfg-vk has a specific option (allow_fp16) that benefits AMD cards but not NVIDIA or Intel ones.

lsfg-vk works on handhelds like the Steam Deck

Lsfg-vk also works on any x86-based handheld gaming platform, which includes Asus’s ROG Ally lineup, the Steam Deck, and Lenovo’s Legion Go. It has become so popular that there is a dedicated Decky plugin that makes installing and using lsfg-vk easier on Steam Decks.

Not every game is a great candidate for frame generation on a handheld, but it can dramatically improve your performance in some titles. If you’re struggling to get the performance you want, I’d certainly recommend that you try it.

Getting lsfg-vk working on Linux

Some configuration required

Lossless Scaling being installed via Steam.

Lsfg-vk isn’t quite a one-click setup, but it is pretty straightforward. First, you need to own Lossless Scaling on Steam—lsfg-vk relies on some of its assets to work.

Once you have Lossless Scaling installed, there are some dependencies you may need to install first. I’m using Kubuntu, which is based on Ubuntu, so I ran:

sudo apt install qt6-qpa-plugins libqt6quick6 qml6-module-qtquick-controls qml6-module-qtquick-layouts qml6-module-qtquick-window qml6-module-qtquick-dialogs qml6-module-qtqml-workerscript qml6-module-qtquick-templates qml6-module-qt-labs-folderlistmodel

If you’re running a distro based on Arch or Fedora (like Bazzite), the commands you need to use will be different.
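For reference, the QtQuick QML modules are bundled differently on those distros. The package names below are my best guess (`qt6-qtdeclarative` on Fedora, `qt6-declarative` on Arch), so double-check them against your distro’s repositories—these commands require root and are shown for illustration:

```shell
# Fedora and derivatives:
sudo dnf install qt6-qtdeclarative

# Arch and derivatives:
sudo pacman -S qt6-declarative
```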

With those installed, all you need to do is download the latest stable release of lsfg-vk from GitHub and install it with the following command:

sudo apt install ./lsfg-vk-1.0.0.x86_64.deb

If you’re using Fedora or Arch, you’ll need to use DNF or Pacman respectively.

Installing the DEB file for lsfg-vk using the Terminal.

Then you’ll be able to launch lsfg-vk from whichever application launcher your distro uses. I’d recommend creating a profile for each game you’re going to be playing, since not every game is an ideal candidate for frame generation.

A basic Skyrim profile in the lsfg-vk Configuration Menu.

The developer has provided a comprehensive explanation of what each profile setting does and how it will affect your gameplay.

I’d recommend reading it if you run into trouble.
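For reference, a per-game profile stored on disk might look something like this. It is only a sketch that assumes a TOML configuration like the one the lsfg-vk project uses—apart from allow_fp16, the key names and values here are illustrative, so check the developer’s documentation for the real schema:

```toml
# Hypothetical lsfg-vk profile (key names are illustrative)
[[game]]
exe = "SkyrimSE.exe"   # process to attach frame generation to
multiplier = 2         # 2x frame generation
flow_scale = 0.75      # lower = cheaper motion estimation
allow_fp16 = true      # FP16 path; mainly benefits AMD GPUs
```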


Lsfg-vk is a great feature for Linux gamers

Gaming on Linux still isn’t perfect, but projects like lsfg-vk and Proton are rapidly closing the gap with Windows. Some games now even run better on Linux—something that was unthinkable 10 years ago.

For everything that doesn’t, lsfg-vk is a great way to eke out some extra frames in the meantime.


