Nvidia’s hidden upscaling feature can fix your old, low-res videos


Nvidia’s graphics cards may not be cheap, but they do come with a host of fun features that can come in handy. An obvious software and hardware benefit is DLSS 4.5, but many people missed the memo on a hidden feature that can make old videos look a whole lot better.

Reliant on Nvidia’s Tensor cores, this feature is available on many graphics cards from the last decade. Here’s how you can upscale nearly all videos easily, how it works, and what the caveats are.

What Nvidia’s hidden video setting actually does

It’s not magic, but it helps a whole lot sometimes

Alright, alright, I’ll stop with the suspense: I’m talking about Nvidia’s RTX Video Super Resolution. It does exactly what it says on the label, but with a bit more nuance than a simple upscale.

Nvidia uses an AI model to sharpen edges, restore patterns and features, and clean up the blocky compression artifacts that make old uploads and low-bitrate videos look smeared or muddy. And let’s face it, they really are: some of those old 360p YouTube videos are practically unwatchable.


But simply blowing up a rough low-res video to fit a higher resolution screen can make the flaws stand out even more, and while that definitely happens with RTX Video at times, the tech is also smarter than that. Nvidia built artifact reduction into the feature, trying to minimize the oddities that appear when you force a 360p video from 2009 to look decent in 2026. Overall, the feature works on input video from 360p up to 1440p.

It also works in real time, upgrading each video as it happens. RTX Video leans on Nvidia’s Tensor cores to enhance playback as you watch.

I’m surprised that more people don’t know about this feature. It could be because it’s far from perfect, but based on my own testing, it’s worth checking out.

There are a few requirements to meet

Not everyone gets to benefit from RTX Video

The EVGA NVIDIA GeForce GTX 970 SSC GAMING ACX 2.0 graphics card sitting on a desk. Credit: Patrick Campanale / How-To Geek

First things first, this is an RTX-only feature. If you have an RTX 20, RTX 30, RTX 40, or RTX 50 GPU, you’re good to go. If you’re on an Nvidia GTX card, you’ll need to try your luck with Lossless Scaling instead, or buy a new GPU. (It’s about time—those good old GTXs are growing obsolete.) RTX Video is available on both desktops and laptops.
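If you're not sure whether your card qualifies, one quick way to check is to ask the driver for the GPU's name and look for the RTX branding. Here's a minimal sketch, assuming `nvidia-smi` is installed and on your PATH (the eligibility check itself is my own heuristic, not an official Nvidia API):

```python
import subprocess

def is_rtx(gpu_name: str) -> bool:
    """Heuristic: RTX Video needs an RTX 20-series or newer card,
    and every eligible card carries 'RTX' in its marketing name."""
    return "RTX" in gpu_name.upper()

def installed_gpu_name() -> str:
    """Ask the Nvidia driver for the GPU's name via nvidia-smi."""
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()
```

Calling `is_rtx(installed_gpu_name())` on a GTX 970 would return `False`; on an RTX 4070, `True`. It's a name check, not a driver feature probe, so treat it as a rough first pass.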

You’ll also need 64-bit Windows 10 or Windows 11, so Linux users are out of luck for this one.

RTX Video works in the latest versions of Chrome, Edge, Firefox, and VLC, and it may also work in other current Chromium-based browsers, although Nvidia has only explicitly verified Chrome and Edge. One annoying caveat is that if you use Edge, you have to disable Microsoft’s own Enhance videos feature first, or you may not be using Nvidia’s processing at all.

Lastly, the video itself needs to check some boxes, too. As I mentioned above, RTX Video is available in videos between 360p and 1440p, but it’ll only kick in if the video actually needs it. It won’t help if you’re already watching something at native resolution or lower. Some content may not work with RTX Video even though it seems like it should, and based on my testing, there’s little rhyme or reason to it. It just doesn’t work sometimes. (I’ve never seen it work in YouTube Shorts.)
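The resolution rules above boil down to a simple predicate. This is my own heuristic based on the stated 360p–1440p window, not Nvidia's actual internal logic, which (as noted) sometimes declines content that seems like it should qualify:

```python
def rtx_video_may_engage(source_height: int, display_height: int) -> bool:
    """Rough eligibility check for RTX Video Super Resolution.

    Per the requirements described here: the source must fall inside
    the 360p-1440p window, and the feature only kicks in when the
    video is actually being shown above its native resolution.
    """
    in_supported_window = 360 <= source_height <= 1440
    needs_upscaling = display_height > source_height
    return in_supported_window and needs_upscaling
```

For example, a 360p video on a 1440p screen passes both checks, while a 1080p video watched at 1080p fails the upscaling check, and 4K source material falls outside the supported window entirely.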

Gigabyte's RTX 5070 GPU: 12GB of graphics RAM, 2600MHz boost clock.

If you want to take full advantage of DLSS 5 and other features like RTX Video, you may need to upgrade your GPU. The RTX 5070 is a cost-effective way to enjoy the full benefits of Nvidia’s RTX 50-series.


How to set it up and try it for yourself

A lot of guides get it wrong in 2026

A screenshot from Nvidia App Credit: Monica J. White / How-To Geek

A lot of guides will tell you to enable RTX Video through the Nvidia Control Panel, a route that was always unintuitive. Fortunately, Nvidia has updated the path, making it much simpler.

All you need is the Nvidia app. Download it, install it, and then head to System > Video. Toggle on Video Super Resolution.

It’ll show as inactive if you’re not currently tabbed into a playing, eligible video, so I recommend testing that it works before you exit the app.

Whether you manually tweak the quality level or leave it on Auto is up to you. If you’re doing other GPU-intensive things at the same time, such as gaming, leaving it on Auto is sensible.

When this setting won’t save your videos

I played around with it a lot, and it’s not flawless

RTX Video is great, especially for an under-the-radar feature that Nvidia doesn’t go far out of its way to advertise. But it’s not a magic eraser for everything that looks wrong in low-res videos. Oftentimes, it’ll upscale parts of the image and not others, creating a weird mishmash of sharp visuals and blurry disappointment. It is what it is.


It doesn’t hurt to try

For all its faults, RTX Video is genuinely good when you want to watch an old video, show, or anime and give it a bit of an upscale. It’s a native alternative to Lossless Scaling, which can also help but costs money to buy.

My advice: try out RTX Video first and see the results with your own eyes. You might like them, or you might consider them negligible. Try it on a sample of at least 10 different videos on different platforms and cast your judgment.

Then, if you’re not happy, try out Lossless Scaling. It has a lot of uses beyond frame generation and can really help in situations such as these.



As I’m writing this, NVIDIA is the largest company in the world, with a market cap exceeding $4 trillion. Team Green is now the leader among the Magnificent Seven of the tech world, having surpassed them all in just a few short years.

The company has managed to reach these incredible heights with smart planning and by making the right moves for decades, the latest being the decision to sell shovels during the AI gold rush. Considering the current hardware landscape, there’s simply no reason for NVIDIA to rush a new gaming GPU generation for at least a few years. Here’s why.

Scarcity has become the new normal

Not even Nvidia is powerful enough to overcome market constraints

Global memory shortages have been a reality since late 2025, and they aren’t just affecting RAM and storage manufacturers. Rather, this impacts every company making any product that contains memory or storage—including graphics cards.

NVIDIA sells GPU-and-memory bundles to its partners, who then solder them onto PCBs and add cooling to create full-blown graphics cards. That means NVIDIA doesn’t just have to battle other tech giants to secure a chunk of TSMC’s limited production capacity to produce its GPU chips. It also has to procure massive amounts of GPU memory, which has never been harder or more expensive to obtain.

While a company as large as NVIDIA certainly has long-term contracts that guarantee stable memory prices, those contracts aren’t going to last forever. The company has likely had to sign new ones, considering the GPU price surge that began at the beginning of 2026, with gaming graphics cards still being overpriced.

With GPU memory costing more than ever, NVIDIA has little reason to rush a new gaming GPU generation, because its gaming earnings are just a drop in the bucket compared to its total earnings.

NVIDIA is an AI company now

Gaming GPUs are taking a back seat

A graph showing NVIDIA revenue breakdown in the last few years. Credit: appeconomyinsights.com

NVIDIA’s gaming division had been its golden goose for decades, but come 2022, the company’s data center and AI division’s revenue started to balloon dramatically. By the beginning of fiscal year 2023, data center and AI revenue had surpassed that of the gaming division.

In fiscal year 2026 (which began in late January 2025 and ends in late January 2026), NVIDIA’s gaming revenue has contributed less than 8% of the company’s total earnings so far. The data center division, on the other hand, has made up almost 90% of NVIDIA’s total revenue in fiscal year 2026. What I’m trying to say is that NVIDIA is no longer a gaming company; it’s all about AI now.

Considering that we’re in the middle of the biggest memory shortage in history, and that its AI GPUs rake in almost ten times the revenue of gaming GPUs, there’s little reason for NVIDIA to funnel exorbitantly priced memory toward gaming GPUs. It’s much more profitable to put every memory chip they can get their hands on into AI GPU racks and continue receiving mountains of cash by selling them to AI behemoths.

The RTX 50 Super GPUs might never get released

A sign of times to come

NVIDIA’s RTX 50 Super series was supposed to increase the memory capacity of its most popular gaming GPUs. The 16GB RTX 5080 was to be superseded by a 24GB RTX 5080 Super, the same fate awaited the 16GB RTX 5070 Ti, and an 18GB RTX 5070 Super was to replace its 12GB non-Super sibling. But according to recent reports, NVIDIA has put the refresh on ice.

The RTX 50 Super launch had been slated for this year’s CES in January, but after missing the show, it now looks like NVIDIA has delayed the lineup indefinitely. According to a recent report, NVIDIA doesn’t plan to launch a single new gaming GPU in 2026. Worse still, the RTX 60 series, which had been expected to debut sometime in 2027, has also been delayed.

A report by The Information (via Tom’s Hardware) states that NVIDIA had finalized the design and specs of its RTX 50 Super refresh, but the RAM-pocalypse threw a wrench into the works, forcing the company to “deprioritize RTX 50 Super production.” In other words, it’s exactly what I said a few paragraphs ago: selling enterprise GPU racks to AI companies is far more lucrative than selling comparatively cheaper GPUs to gamers, especially now that memory prices have been skyrocketing.

Before putting the RTX 50 Super refresh on ice, NVIDIA had already slashed its gaming GPU supply by about a fifth and started prioritizing models with less VRAM, like the 8GB versions of the RTX 5060 and RTX 5060 Ti, so this news isn’t that surprising.

So when can we expect RTX 60 GPUs?

Late 2028-ish?

A GPU with a pile of money around it. Credit: Lucas Gouveia / How-To Geek

The good news is that the RTX 60 series is definitely in the pipeline, and we will see it sooner or later. The bad news is that its release date is up in the air, and it’s best not to even think about pricing. The word on the street around CES 2026 was that NVIDIA would release the RTX 60 series in mid-2027, give or take a few months. But as of this writing, it’s increasingly likely we won’t see RTX 60 GPUs until 2028.

If you’ve been following the discussion around memory shortages, this won’t be surprising. In late 2025, the prognosis was that we wouldn’t see the end of the RAM-pocalypse until 2027, maybe 2028. But a recent statement by SK Hynix’s chairman (the company is one of the world’s three largest memory manufacturers) warns that the global memory shortage may last well into 2030.

If that turns out to be true, and if the global AI data center boom doesn’t slow down in the next few years, I wouldn’t be surprised if NVIDIA delays the RTX 60 GPUs as long as possible. There’s a good chance we won’t see them until the second half of 2028, and I wouldn’t be surprised if they miss that window as well if memory supply doesn’t recover by then. Data center GPUs are simply too profitable for NVIDIA to reserve a meaningful portion of memory for gaming graphics cards as long as shortages persist.


At least current-gen gaming GPUs are still a great option for any PC gamer

If there is a silver lining here, it is that current-gen gaming GPUs (NVIDIA RTX 50 and AMD Radeon RX 90) are still more than powerful enough for any current AAA title. Considering that Sony is reportedly delaying the PlayStation 6 and that global PC shipments are projected to see a sharp, double-digit decline in 2026, game developers have little incentive to push requirements beyond what current hardware can handle.

DLSS 5, on the other hand, may be the future of gaming, but no one likes it, and it will take a few years (and likely the arrival of the RTX 60 lineup) for it to mature and become usable on anything that’s not a heckin’ RTX 5090.

If you’re open to buying used GPUs, even last-gen gaming graphics cards offer tons of performance and can handle any AAA game you throw at them. While we likely won’t get a new gaming GPU from NVIDIA for at least a few years, at least the ones we’ve got are great today and will continue to chew through any game for the foreseeable future.


