Hot-swappable hard drives seemed brilliant in the ’90s—here’s why Iomega Jaz never stood a chance


Computer storage has always been a problem. There never seems to be enough of it. Just as we get bigger disks and drives, file sizes swell to fill them. So you can imagine what a chore it was in the ’90s to move large media files or make backups when all you had were 1.44MB floppy disks.

Even Iomega’s 100MB Zip drives weren’t enough for professional work, which demanded not only much more storage space but also more speed. The company had an answer for this, though: the Jaz drive.

The dream: Removable storage that behaved like a hard drive

It seemed like the ultimate solution

The only storage technology in the ’90s that really ticked all the boxes for the job the Jaz drive needed to fill was the humble hard drive. Hard drives were fast and, by that point, could store multiple gigabytes of data.

So, why not create a removable hard drive? To be clear, external hard drive technology already existed, but hard drives were expensive. The Jaz drive is effectively a hard drive with no platters, and the Jaz cartridge is a set of hard drive platters with no hard drive.




















When you insert the cartridge into the drive, a motor engages the hub of the platters, and the read-write heads enter the cartridge through an opening protected by a retracting metal cover. Jaz cartridges are remarkably small: the footprint is only slightly larger than a 3.5-inch floppy’s, and they’re roughly as thick as three floppies stacked together.

You get speeds closer to those of an internal hard drive, but without paying for a complete set of motors, mechanical parts, and electronics with every disk you buy.



Why hot-swappable storage sounded like the future

People were hot-to-swap

The first Jaz disks offered 1GB of storage, and later we would get 2GB disks. That doesn’t sound like a lot today, but you have to consider this in context. At the start of the ’90s, a typical home PC might have a 40MB hard drive, which is exactly what our family 80286 PC had back then.

Over the course of the ’90s, HDD sizes grew rapidly, and by the mid- to late ’90s, when the Jaz drive was relevant, a typical household PC might have 500MB to 2GB of storage space. If you were rich or your employer was footing the bill, you might have 10GB of hard drive space, with 20GB being the pinnacle of consumer storage capacity by the end of the ’90s.

So a 2GB removable disk was a big deal. You could back up a typical computer’s entire system disk to a single Jaz cartridge and still have room left. Video editors, programmers, database engineers: there was no end to the list of people for whom a 1-2GB removable disk was a godsend.
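For a rough sense of scale, here’s a quick back-of-the-envelope sketch using the capacities mentioned above (treating 1GB as 1,024MB and ignoring formatting overhead, so the figures are approximate):

```python
# Rough capacity comparison using the figures quoted in the article.
# These are nominal capacities; real formatted capacities were a bit lower.
FLOPPY_MB = 1.44           # 3.5-inch high-density floppy
ZIP_MB = 100               # original Iomega Zip disk
JAZ_MB = 2 * 1024          # 2GB Jaz cartridge
TYPICAL_SYSTEM_DISK_MB = 1 * 1024  # a mid-to-late-'90s home PC hard drive

print(f"Floppies replaced by one Jaz cartridge: {JAZ_MB / FLOPPY_MB:,.0f}")
print(f"Zip disks replaced by one Jaz cartridge: {JAZ_MB / ZIP_MB:,.0f}")
print(f"Typical system disk fits on one cartridge: {TYPICAL_SYSTEM_DISK_MB <= JAZ_MB}")
```

That works out to well over a thousand floppies, or around twenty Zip disks, per cartridge.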

The fatal flaws hiding under the hood

Put together with that classic ’90s engineering

On paper, it looked like the Jaz was set up for success, and it didn’t exactly fail. Jaz drives sold well enough that Iomega supported them into the early 2000s. However, there were some issues with the technology.

First, there’s a reason that hard drives are carefully sealed. Dust and hair getting onto the platters was an issue, and the best Iomega could do was offer a plastic protective dust cover and hope people would use it. Jaz drives also developed a reputation for being untrustworthy. Like the Zip drive, the Jaz drive had its own version of the “click of death,” though in this case it usually meant a jammed drive that refused to eject the cartridge. That undermined confidence in using them for critical backups.

The cartridges weren’t as expensive as entire hard drives, but they weren’t cheap either, and neither were the drives. They really only made financial sense for users who needed enough disks that a drive plus a stack of cartridges worked out cheaper than a heap of external hard drives.
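To make that trade-off concrete, here’s a minimal break-even sketch. The prices are hypothetical placeholders chosen purely for illustration, not Iomega’s actual list prices; the point is simply that the drive-plus-cartridge model only wins once you need several disks:

```python
# Hypothetical break-even sketch: when does a Jaz drive plus cartridges
# beat buying a separate external hard drive for every chunk of storage?
# All prices below are illustrative assumptions, not historical figures.
jaz_drive_price = 350       # one-time cost of the Jaz drive (assumed)
jaz_cartridge_price = 100   # per 2GB cartridge (assumed)
external_hdd_price = 300    # per comparable external hard drive (assumed)

for disks in range(1, 21):
    jaz_total = jaz_drive_price + disks * jaz_cartridge_price
    hdd_total = disks * external_hdd_price
    if jaz_total < hdd_total:
        print(f"Jaz becomes cheaper once you need {disks} disks "
              f"(${jaz_total} vs ${hdd_total})")
        break
```

With these made-up numbers the crossover comes quickly, but the heavier the upfront drive cost and the cheaper external drives became, the further out that break-even point moved.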

The final significant issue was the reliance on SCSI interfaces for both internal and external Jaz drives. Most home PC users did not have SCSI cards; it was a professional interface for high-end machines. Mainstream external drive solutions used the serial or parallel ports, and internal drives used an IDE connection.

Relying on a niche connection standard limited how many people could even use a Jaz drive. By the end of the ’90s, USB had established itself as the new standard for connecting peripherals. I do wonder whether a native USB 2.0 Jaz drive could have changed things, but Iomega did sell a USB adapter (along with FireWire and parallel port options), and that didn’t seem to move the needle.



The market shifts that killed the Jaz dream

It’s hard to separate the eventual downfall of the Jaz drive from Iomega’s overall woes, but I think the big problem was that the Jaz just didn’t keep up with the rest of storage technology. People quickly gained access to much bigger hard drives, which reduced the need for the Jaz as hot-swappable storage, and writable CDs, although much smaller in capacity, were dirt cheap on a per-megabyte basis and perfectly good as a medium-term backup medium.


We still use external spinning platters today

I think perhaps the rise of complete, sealed, and reliable external hard drives made it unnecessary for something like a Jaz drive to exist. Combined with the widespread adoption of USB, this quirky hard drive-based storage tech is now nothing more than a brief footnote in computer history.





As I’m writing this, NVIDIA is the largest company in the world, with a market cap exceeding $4 trillion. Team Green is now the leader among the Magnificent Seven of the tech world, having surpassed them all in just a few short years.

The company has managed to reach these incredible heights with smart planning and by making the right moves for decades, the latest being the decision to sell shovels during the AI gold rush. Considering the current hardware landscape, there’s simply no reason for NVIDIA to rush a new gaming GPU generation for at least a few years. Here’s why.

Scarcity has become the new normal

Not even NVIDIA is powerful enough to overcome market constraints

Global memory shortages have been a reality since late 2025, and they aren’t just affecting RAM and storage manufacturers. Rather, this impacts every company making any product that contains memory or storage—including graphics cards.

NVIDIA sells GPU and memory bundles to its partners, which then solder them onto PCBs and add cooling to create full-blown graphics cards. That means NVIDIA doesn’t just have to battle other tech giants to secure a chunk of TSMC’s limited production capacity for its GPU chips; it also has to procure massive amounts of GPU memory, which has never been harder or more expensive to obtain.

While a company as large as NVIDIA certainly has long-term contracts that guarantee stable memory prices, those contracts aren’t going to last forever. The company has likely had to sign new ones already, judging by the GPU price surge that began in early 2026 and the fact that gaming graphics cards are still overpriced.

With GPU memory costing more than ever, NVIDIA has little reason to rush a new gaming GPU generation, because its gaming earnings are just a drop in the bucket compared to its total earnings.

NVIDIA is an AI company now

Gaming GPUs are taking a back seat

A graph showing NVIDIA revenue breakdown in the last few years. Credit: appeconomyinsights.com

NVIDIA’s gaming division had been its golden goose for decades, but come 2022, the company’s data center and AI division’s revenue started to balloon dramatically. By the beginning of fiscal year 2023, data center and AI revenue had surpassed that of the gaming division.

In fiscal year 2026 (which runs from late January 2025 to late January 2026), NVIDIA’s gaming revenue has contributed less than 8% of the company’s total earnings so far. The data center division, on the other hand, has generated almost 90% of NVIDIA’s total revenue in fiscal year 2026. What I’m trying to say is that NVIDIA is no longer a gaming company; it’s all about AI now.

Considering that we’re in the middle of the biggest memory shortage in history, and that its AI GPUs rake in almost ten times the revenue of gaming GPUs, there’s little reason for NVIDIA to funnel exorbitantly priced memory toward gaming GPUs. It’s much more profitable to put every memory chip it can get its hands on into AI GPU racks and keep collecting mountains of cash from AI behemoths.
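One way to see why scarce memory flows toward the data center is to look at revenue per gigabyte of memory. The numbers below are purely illustrative placeholders, not NVIDIA’s actual prices or capacities, but they show the shape of the incentive:

```python
# Hypothetical opportunity-cost sketch: revenue earned per gigabyte of
# memory when that memory ends up in a data-center GPU vs. a gaming GPU.
# All prices and capacities are illustrative assumptions.
dc_gpu_price, dc_gpu_mem_gb = 30_000, 192      # assumed AI accelerator
gaming_gpu_price, gaming_gpu_mem_gb = 800, 16  # assumed gaming card

dc_revenue_per_gb = dc_gpu_price / dc_gpu_mem_gb
gaming_revenue_per_gb = gaming_gpu_price / gaming_gpu_mem_gb

print(f"Data center: ${dc_revenue_per_gb:,.0f} of revenue per GB of memory")
print(f"Gaming:      ${gaming_revenue_per_gb:,.0f} of revenue per GB of memory")
print(f"Ratio: {dc_revenue_per_gb / gaming_revenue_per_gb:.1f}x")
```

Even with these made-up figures, every gigabyte of memory earns several times more in a data-center product, and that gap is what keeps gaming GPUs at the back of the queue.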

The RTX 50 Super GPUs might never get released

A sign of things to come

NVIDIA’s RTX 50 Super series was supposed to increase the memory capacity of its most popular gaming GPUs. The 16GB RTX 5080 was to be superseded by a 24GB RTX 5080 Super; the same fate awaited the 16GB RTX 5070 Ti, while an 18GB RTX 5070 Super was to replace its 12GB non-Super sibling. But according to recent reports, NVIDIA has put the lineup on ice.

The RTX 50 Super launch had been slated for this year’s CES in January, but after missing the show, it now looks like NVIDIA has delayed the lineup indefinitely. According to a recent report, NVIDIA doesn’t plan to launch a single new gaming GPU in 2026. Worse still, the RTX 60 series, which had been expected to debut sometime in 2027, has also been delayed.

A report by The Information (via Tom’s Hardware) states that NVIDIA had finalized the design and specs of its RTX 50 Super refresh, but the RAM-pocalypse threw a wrench into the works, forcing the company to “deprioritize RTX 50 Super production.” In other words, it’s exactly what I said a few paragraphs ago: selling enterprise GPU racks to AI companies is far more lucrative than selling comparatively cheaper GPUs to gamers, especially now that memory prices have been skyrocketing.

Before putting the RTX 50 Super refresh on ice, NVIDIA had already slashed its gaming GPU supply by about a fifth and started prioritizing models with less VRAM, like the 8GB versions of the RTX 5060 and RTX 5060 Ti, so this news isn’t that surprising.

So when can we expect RTX 60 GPUs?

Late 2028-ish?

A GPU with a pile of money around it. Credit: Lucas Gouveia / How-To Geek

The good news is that the RTX 60 series is definitely in the pipeline, and we will see it sooner or later. The bad news is that its release date is up in the air, and it’s best not to even think about pricing. The word on the street around CES 2026 was that NVIDIA would release the RTX 60 series in mid-2027, give or take a few months. But as of this writing, it’s increasingly likely we won’t see RTX 60 GPUs until 2028.

If you’ve been following the discussion around memory shortages, this won’t be surprising. In late 2025, the prognosis was that we wouldn’t see the end of the RAM-pocalypse until 2027, maybe 2028. But a recent statement by SK Hynix’s chairman (the company is one of the world’s three largest memory manufacturers) warns that the global memory shortage may last well into 2030.

If that turns out to be true, and if the global AI data center boom doesn’t slow down in the next few years, I wouldn’t be surprised if NVIDIA delays the RTX 60 GPUs as long as possible. There’s a good chance we won’t see them until the second half of 2028, and I wouldn’t be surprised if they miss that window as well if memory supply doesn’t recover by then. Data center GPUs are simply too profitable for NVIDIA to reserve a meaningful portion of memory for gaming graphics cards as long as shortages persist.


At least current-gen gaming GPUs are still a great option for any PC gamer

If there is a silver lining here, it is that current-gen gaming GPUs (NVIDIA’s RTX 50 series and AMD’s Radeon RX 9000 series) are still more than powerful enough for any current AAA title. Considering that Sony is reportedly delaying the PlayStation 6 and that global PC shipments are projected to see a sharp, double-digit decline in 2026, game developers have little incentive to push requirements beyond what current hardware can handle.

DLSS 5, on the other hand, may be the future of gaming, but no one likes it, and it will take a few years (and likely the arrival of the RTX 60 lineup) for it to mature and become usable on anything that’s not a heckin’ RTX 5090.

If you’re open to buying used GPUs, even last-gen graphics cards offer plenty of performance and can handle any AAA game you throw at them. While we likely won’t get a new gaming GPU from NVIDIA for at least a few years, the cards we have are great today and will keep chewing through new games for the foreseeable future.


