These 4 NAS mistakes are wasting your electricity (and your money)


A NAS is one of those devices you expect to be available 24/7, working around the clock for seamless file transfer and retrieval. The catch is that running non-stop means even small differences in power consumption add up on your electricity bill. The savings won’t be huge, but you can still shave more than a few bucks off your monthly costs by avoiding the following NAS mistakes.

Having too many drives in your NAS

Using fewer, higher-capacity HDDs is better

Higher-capacity hard drives typically use more power than smaller drives, both when idle and under load, largely because they contain more platters. However, the difference is relatively small, and in most cases, using fewer high-capacity drives results in lower overall power consumption.

For instance, a WD Red Pro 12TB HDD draws between 6W and 8.8W when active (depending on the variant) and about 2.8W when idle, while a 24TB model draws a similar amount of power under load but around 3.6W when idle. So if you replace two 12TB drives with a single 24TB drive, you can cut active power consumption in half, with idle consumption dropping from about 5.6W to 3.6W. The savings aren’t huge, but they can add up in NAS systems with many drive bays.
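If you want to sanity-check that math against your own drives and electricity rate, a back-of-the-envelope calculation is all it takes. Here’s a minimal Python sketch using the WD Red Pro figures above; the $0.15/kWh rate and the idle/active split are assumptions, so plug in your own numbers:

```python
# Back-of-the-envelope comparison of two 12TB drives vs. one 24TB drive,
# using the WD Red Pro figures quoted above (8.8W active / 2.8W idle per
# 12TB drive; ~8.8W active / 3.6W idle for the 24TB model). The rate and
# idle/active split are assumptions -- adjust for your setup.

RATE_PER_KWH = 0.15        # assumed electricity price in $/kWh
ACTIVE_HOURS_PER_DAY = 4   # assumed; most home NAS drives idle a lot
IDLE_HOURS_PER_DAY = 24 - ACTIVE_HOURS_PER_DAY

def annual_cost(active_w: float, idle_w: float) -> float:
    """Yearly electricity cost of drives with the given combined draw."""
    daily_wh = active_w * ACTIVE_HOURS_PER_DAY + idle_w * IDLE_HOURS_PER_DAY
    return daily_wh * 365 / 1000 * RATE_PER_KWH

two_12tb = annual_cost(active_w=2 * 8.8, idle_w=2 * 2.8)
one_24tb = annual_cost(active_w=8.8, idle_w=3.6)
print(f"Two 12TB drives: ${two_12tb:.2f}/yr")
print(f"One 24TB drive:  ${one_24tb:.2f}/yr")
print(f"Difference:      ${two_12tb - one_24tb:.2f}/yr per pair replaced")
```

With these assumed numbers, consolidating one pair of drives saves roughly $4 a year, which is exactly why this tip matters most in multi-bay systems.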

An HGST 12TB helium recertified hard drive.



Ignoring power-saving settings

You can save a decent chunk of power with just a few tweaks

The Windows 11 Battery Saver icon

Many first-time NAS users simply leave their device running 24/7, but in reality, you may not need it working around the clock. The good news is that most NAS models support scheduled shutdowns and startups. For example, you can set the device to power off overnight and turn back on in the morning, which can save you a nice chunk of change over time. That said, this isn’t viable for every setup. If your NAS handles tasks beyond storage (e.g., as a media server or self-hosting machine), it’s often better to keep it running continuously.
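For a rough sense of what an overnight schedule saves, the arithmetic is simple: idle watts times hours powered off. A quick sketch, where the 30W whole-system idle draw and the electricity rate are assumptions:

```python
# Rough estimate of what an overnight power-off schedule saves. Both the
# 30W whole-system idle draw and the electricity rate are assumptions.

SYSTEM_IDLE_W = 30.0    # assumed idle draw of the whole NAS (drives + board)
OFF_HOURS_PER_DAY = 8   # e.g., powered off from midnight to 8 a.m.
RATE_PER_KWH = 0.15     # assumed electricity price in $/kWh

kwh_per_year = SYSTEM_IDLE_W * OFF_HOURS_PER_DAY * 365 / 1000
print(f"~{kwh_per_year:.0f} kWh/yr, about ${kwh_per_year * RATE_PER_KWH:.2f}/yr saved")
```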

Wake-on-LAN (WoL) is another handy feature supported by many NAS devices. It allows the system to remain powered off or in a very low-power state and then be turned on remotely over the network when needed. WoL is a solid alternative to scheduled shutdowns, especially if your access patterns are too irregular for a fixed timetable.
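If you’d rather script the wake-up yourself than rely on a vendor app, a WoL “magic packet” is easy to build: six 0xFF bytes followed by the target’s MAC address repeated 16 times, sent as a UDP broadcast. A minimal sketch; the MAC address is a placeholder for your own NAS:

```python
# Minimal Wake-on-LAN sender. A "magic packet" is six 0xFF bytes followed
# by the target's MAC address repeated 16 times, broadcast over UDP
# (port 9 by convention). The MAC below is a placeholder for your NAS.
import socket

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    payload = b"\xff" * 6 + bytes.fromhex(mac.replace(":", "")) * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(payload, (broadcast, port))

wake("AA:BB:CC:DD:EE:FF")  # placeholder MAC address of the sleeping NAS
```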

Some NAS operating systems also offer predefined power-saving profiles that can reduce energy usage without manual tuning. Instead of configuring everything yourself, you can select a power-saving mode and let the OS do the work.

If you’ve repurposed an old PC as a NAS, it’s also worth checking the BIOS for power-saving options. On AMD systems, you can reduce power consumption by undervolting the CPU or enabling ECO Mode, which lowers the CPU’s power limit (TDP).

If the CPU in question is Intel, you should make sure MCE (Multicore Enhancement) is disabled, as it allows all CPU cores to run at their maximum boost clock simultaneously, which consumes more power. You can also fine-tune power limits (PL1/PL2) to achieve similar results.
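On a Linux-based DIY NAS with an Intel CPU, you can often inspect (and carefully adjust) PL1/PL2 at runtime through the kernel’s RAPL powercap interface, without rebooting into the BIOS. Here’s a sketch of the idea, assuming the intel_rapl driver is loaded and the usual sysfs layout; writing limits requires root, and the exact constraint numbering can vary between platforms:

```python
# Inspect (and optionally lower) the CPU package power limits via Linux's
# RAPL powercap interface. Assumes an Intel CPU with the intel_rapl driver
# loaded; constraint 0 is usually the long-term limit (PL1) and constraint 1
# the short-term limit (PL2). Writing new values requires root.
from pathlib import Path

RAPL = Path("/sys/class/powercap/intel-rapl:0")  # package power domain

def read_limits() -> None:
    for n in (0, 1):
        name = (RAPL / f"constraint_{n}_name").read_text().strip()
        uw = int((RAPL / f"constraint_{n}_power_limit_uw").read_text())
        print(f"{name}: {uw / 1_000_000:.1f} W")

def set_pl1(watts: float) -> None:
    # Lowering PL1 caps sustained package power, much like a BIOS ECO mode.
    (RAPL / "constraint_0_power_limit_uw").write_text(str(int(watts * 1e6)))

read_limits()
# set_pl1(15.0)  # e.g., cap sustained draw at 15W (root required)
```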

Never spinning drives down

Give your storage drives some rest

Mechanical hard drives with the covers removed and disks exposed. Credit: kckate16/Shutterstock.com

HDD hibernation, also known as drive spindown, is a somewhat controversial topic in the NAS community. While enabling it can lower power usage and reduce noise and heat, it may also introduce additional wear. Spinning drives up after an idle period also causes a brief surge in power draw as the platters accelerate from 0 RPM to operating speed.

Hard drives are rated for a limited number of start/stop cycles, since spinning a drive down and back up puts stress on the spindle motor and head-parking mechanism. On the other hand, keeping drives running 24/7 also contributes to gradual wear. At the end of the day, HDDs will age either way, whether you hibernate them or not.

Using an aggressive spin-down timer (say, 10 minutes) can be detrimental to your NAS drives, as it may lead to frequent cycling and increased wear. On the other hand, you shouldn’t keep your drives running nonstop either. Setting them to spin down after, say, one hour of inactivity means they’ll cycle only a few times per day during idle periods while also consuming less power. Given that the average NAS HDD is rated for tens of thousands of start/stop cycles (often ~50,000 or more), this approach is generally well within safe limits for long-term use.
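On Linux-based systems, the spindown timer is typically set per drive with hdparm’s -S flag, whose value encoding is famously unintuitive. A sketch of the conversion; the /dev/sda path is a placeholder for one of your own drives, and the command needs root:

```python
# Set a drive's spindown timer with hdparm. The -S value encoding is odd:
# 1-240 count in 5-second steps (up to 20 minutes), while 241-251 count in
# 30-minute steps (up to 5.5 hours). The /dev/sda path is a placeholder
# for one of your own drives; running hdparm requires root.
import subprocess

def spindown_value(minutes: int) -> int:
    """Convert an idle timeout in minutes (1-330) to an hdparm -S value."""
    if minutes <= 20:
        return max(1, minutes * 12)              # 5-second units
    return 240 + max(1, min(11, minutes // 30))  # 30-minute units

def set_spindown(device: str, minutes: int) -> None:
    subprocess.run(["hdparm", "-S", str(spindown_value(minutes)), device],
                   check=True)

set_spindown("/dev/sda", 60)  # spin down after one idle hour (-S 242)
```

At a one-hour timer and a handful of spin-ups per day, a ~50,000-cycle rating would take decades to exhaust.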

Using HDD hibernation can also be an effective alternative to scheduled power-offs, allowing your NAS to remain accessible 24/7 while still reducing power usage during idle periods.

Using a power-guzzling CPU

There are plenty of efficient CPUs out there

A graphic rendering of an Intel N150 CPU. Credit: Intel

Modern CPUs can be impressively efficient while still packing a decent punch. Intel’s N-series chips, for example, sip power when idle and typically draw between 10W and 35W under load, which is far lower than a typical desktop CPU. Prebuilt NAS devices are usually powered by low-power x86 or ARM processors, but if you’re building your own NAS, it’s easy to end up with a power-hungry system.

Now, if you’ve repurposed an old PC as a NAS, there’s little point in swapping the CPU for efficiency’s sake: the cost of a new CPU and (perhaps) motherboard will likely outweigh any realistic savings on your electricity bill. However, if you’re building a NAS from scratch, it makes sense to choose a low-power CPU if you care about power efficiency. You can also undervolt the CPU or enable power-saving BIOS settings to further reduce power draw.
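A quick break-even calculation makes the point. Every figure below is an assumption for illustration; substitute your actual upgrade cost, measured wattages, and local electricity rate:

```python
# Break-even check for swapping in a more efficient CPU (plus motherboard).
# All figures are assumptions for illustration -- substitute your actual
# upgrade cost, measured average wattages, and local electricity rate.

OLD_AVG_W = 30.0       # assumed average draw of the old desktop CPU
NEW_AVG_W = 10.0       # assumed average draw of an efficient low-power chip
UPGRADE_COST = 300.0   # assumed cost of a new CPU + motherboard
RATE_PER_KWH = 0.15    # assumed electricity price in $/kWh

yearly_savings = (OLD_AVG_W - NEW_AVG_W) * 24 * 365 / 1000 * RATE_PER_KWH
print(f"Saves ${yearly_savings:.2f}/yr; "
      f"breaks even in {UPGRADE_COST / yearly_savings:.1f} years")
```

With a 20W difference in average draw, a $300 upgrade takes more than a decade to pay for itself, which is why replacing parts rarely makes sense for a repurposed PC.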


A power-efficient NAS doesn’t just save you cash

Optimizing your NAS for lower power usage doesn’t just reduce your electricity bill; it also helps the system run cooler and quieter. Drives generate less heat overall, and the CPU won’t waste energy boosting unnecessarily for lightweight tasks.

Once your NAS is optimized to use less power, consider learning Docker, which opens up a wide range of additional use cases beyond the device’s primary storage role.


Recent Reviews


As I’m writing this, NVIDIA is the largest company in the world, with a market cap exceeding $4 trillion. Team Green is now the leader among the Magnificent Seven of the tech world, having surpassed them all in just a few short years.

The company has managed to reach these incredible heights with smart planning and by making the right moves for decades, the latest being the decision to sell shovels during the AI gold rush. Considering the current hardware landscape, there’s simply no reason for NVIDIA to rush a new gaming GPU generation for at least a few years. Here’s why.

Scarcity has become the new normal

Not even NVIDIA is powerful enough to overcome market constraints

Global memory shortages have been a reality since late 2025, and they aren’t just affecting RAM and storage vendors. They impact every company making any product that contains memory or storage, including graphics cards.

NVIDIA sells GPU and memory bundles to its partners, which solder them onto PCBs and add cooling to create full-blown graphics cards. That means NVIDIA doesn’t just have to battle other tech giants to secure a chunk of TSMC’s limited production capacity for its GPU chips; it also has to procure massive amounts of GPU memory, which has never been harder or more expensive to obtain.

While a company as large as NVIDIA certainly has long-term contracts that guarantee stable memory prices, those contracts aren’t going to last forever. The company has likely had to sign new ones, considering the GPU price surge that began at the beginning of 2026, with gaming graphics cards still being overpriced.

With GPU memory costing more than ever, NVIDIA has little reason to rush a new gaming GPU generation, because its gaming earnings are just a drop in the bucket compared to its total earnings.

NVIDIA is an AI company now

Gaming GPUs are taking a back seat

A graph showing NVIDIA revenue breakdown in the last few years. Credit: appeconomyinsights.com

NVIDIA’s gaming division had been its golden goose for decades, but come 2022, the company’s data center and AI division’s revenue started to balloon dramatically. By the beginning of fiscal year 2023, data center and AI revenue had surpassed that of the gaming division.

In fiscal year 2026 (which began in late January 2025 and ends in late January 2026), NVIDIA’s gaming revenue has contributed less than 8% of the company’s total earnings so far, while the data center division has generated almost 90% of total revenue. What I’m trying to say is that NVIDIA is no longer a gaming company; it’s all about AI now.

Considering that we’re in the middle of the biggest memory shortage in history, and that its AI GPUs rake in almost ten times the revenue of gaming GPUs, there’s little reason for NVIDIA to funnel exorbitantly priced memory toward gaming GPUs. It’s much more profitable to put every memory chip they can get their hands on into AI GPU racks and continue receiving mountains of cash by selling them to AI behemoths.

The RTX 50 Super GPUs might never get released

A sign of times to come

NVIDIA’s RTX 50 Super series was supposed to increase the memory capacity of the company’s most popular gaming GPUs. The 16GB RTX 5080 was to be superseded by a 24GB RTX 5080 Super, the same fate awaited the 16GB RTX 5070 Ti, and an 18GB RTX 5070 Super was to replace its 12GB non-Super sibling. But according to recent reports, NVIDIA has put the lineup on ice.

The RTX 50 Super launch had been slated for this year’s CES in January, but after the lineup missed the show, it now looks like NVIDIA has delayed it indefinitely. According to a recent report, NVIDIA doesn’t plan to launch a single new gaming GPU in 2026. Worse still, the RTX 60 series, which had been expected to debut sometime in 2027, has also been delayed.

A report by The Information (via Tom’s Hardware) states that NVIDIA had finalized the design and specs of its RTX 50 Super refresh, but the RAM-pocalypse threw a wrench into the works, forcing the company to “deprioritize RTX 50 Super production.” In other words, it’s exactly what I said a few paragraphs ago: selling enterprise GPU racks to AI companies is far more lucrative than selling comparatively cheaper GPUs to gamers, especially now that memory prices have been skyrocketing.

Before putting the RTX 50 Super refresh on ice, NVIDIA had already slashed its gaming GPU supply by about a fifth and started prioritizing models with less VRAM, like the 8GB versions of the RTX 5060 and RTX 5060 Ti, so this news isn’t all that surprising.

So when can we expect RTX 60 GPUs?

Late 2028-ish?

A GPU with a pile of money around it. Credit: Lucas Gouveia / How-To Geek

The good news is that the RTX 60 series is definitely in the pipeline, and we will see it sooner or later. The bad news is that its release date is up in the air, and it’s best not to even think about pricing. The word on the street around CES 2026 was that NVIDIA would release the RTX 60 series in mid-2027, give or take a few months. But as of this writing, it’s increasingly likely we won’t see RTX 60 GPUs until 2028.

If you’ve been following the discussion around memory shortages, this won’t be surprising. In late 2025, the prognosis was that we wouldn’t see the end of the RAM-pocalypse until 2027, maybe 2028. But a recent statement by the chairman of SK Hynix, one of the world’s three largest memory manufacturers, warns that the global memory shortage may last well into 2030.

If that turns out to be true, and if the global AI data center boom doesn’t slow down in the next few years, I wouldn’t be surprised if NVIDIA delays the RTX 60 GPUs as long as possible. There’s a good chance we won’t see them until the second half of 2028, and I wouldn’t be surprised if they miss that window as well if memory supply doesn’t recover by then. Data center GPUs are simply too profitable for NVIDIA to reserve a meaningful portion of memory for gaming graphics cards as long as shortages persist.


At least current-gen gaming GPUs are still a great option for any PC gamer

If there is a silver lining here, it is that current-gen gaming GPUs (NVIDIA’s RTX 50 series and AMD’s Radeon RX 9000 series) are still more than powerful enough for any current AAA title. Considering that Sony is reportedly delaying the PlayStation 6 and that global PC shipments are projected to see a sharp, double-digit decline in 2026, game developers have little incentive to push requirements beyond what current hardware can handle.

DLSS 5, on the other hand, may be the future of gaming, but no one likes it, and it will take a few years (and likely the arrival of the RTX 60 lineup) for it to mature and become usable on anything that’s not a heckin’ RTX 5090.

If you’re open to buying used GPUs, even last-gen gaming graphics cards offer tons of performance and can handle any AAA game you throw at them. While we likely won’t get a new gaming GPU from NVIDIA for at least a few years, the ones we’ve got are great today and will continue to chew through any game for the foreseeable future.


