Western Digital 20 Terabyte Hard Drives Will Power Zettabyte Age

Western Digital (NASDAQ: WDC) announced its nine-disk mechanical platform, which includes energy-assisted recording technology and maintains the company’s areal density leadership while delivering the highest capacity available. The company will sample the 18TB Ultrastar DC HC550 CMR HDD and the 20TB Ultrastar DC HC650 SMR HDD to select customers by the end of 2019 with production ramp expected in the first half of 2020.

The rapid ramp and availability of the 20TB SMR drive, following a technology preview in June 2019, support a growing ecosystem and continued industry adoption of SMR. Western Digital estimates that 50 percent of its HDD exabytes shipped will be on SMR by 2023.

Western Digital will offer a unique and full portfolio of Capacity Enterprise HDDs, with cost-optimized configurations for every important capacity point: the six-disk 10TB Ultrastar DC HC330 air-based HDD; the eight-disk 14TB Ultrastar DC HC530 helium-based HDD; the nine-disk 18TB Ultrastar DC HC550 helium-based HDD; and the nine-disk 20TB Ultrastar DC HC650 helium-based HDD. The company’s strong execution has resulted in a rapid ramp and majority share at its capacity point for the Ultrastar DC HC530 HDD, the industry’s only available eight-disk 14TB CMR drive. According to TRENDFOCUS, 14TB will continue to be the industry’s dominant capacity point through the first half of 2020.

How Helium Technology Boosts Capacity

Sealed, helium-filled drives represent one of the most significant storage technology advancements in decades. In 2013, Western Digital introduced HelioSeal® technology, a foundational building block for high-capacity hard disk drives (HDDs). HelioSeal hermetically seals the HDD with helium, which is one-seventh the density of air. The less-dense atmosphere enables thinner disks, and more of them, for higher capacities in the same industry-standard form factor. Less air friction means less power is required to spin the disks, and less air turbulence improves reliability. Helium drives expand the boundaries of conventional high-capacity HDDs, allowing for dramatic increases in efficiency, reliability, and value. HelioSeal technology delivers today’s lowest total cost of ownership (TCO) for hyperscale and data-centric applications.

According to industry analysts, petabytes stored in the Capacity Enterprise hard drive segment are increasing at a compound annual growth rate of 40%, which means the total capacity stored roughly doubles every two years.
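As a back-of-envelope check (not a figure from Western Digital’s materials), the doubling time implied by a 40% compound annual growth rate does work out to roughly two years:

```python
import math

cagr = 0.40  # 40% compound annual growth in petabytes stored
doubling_time_years = math.log(2) / math.log(1 + cagr)
print(f"Doubling time: {doubling_time_years:.2f} years")  # ~2.06 years
```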

Existing 14TB helium SMR drives consume just 5.2 watts during idle operation, a 60% reduction in watts-per-TB compared to 8TB Ultrastar air-based HDDs.
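A quick sanity check of that watts-per-TB comparison; the 8TB drive’s figure below is implied by the quoted 60% reduction rather than stated, so treat it as an estimate:

```python
idle_w_14tb_helium = 5.2                      # quoted idle power for the 14TB helium SMR drive
w_per_tb_14 = idle_w_14tb_helium / 14         # ~0.37 W/TB
w_per_tb_8_air = w_per_tb_14 / (1 - 0.60)     # ~0.93 W/TB implied for the 8TB air-based drive
print(f"{w_per_tb_14:.2f} W/TB vs ~{w_per_tb_8_air:.2f} W/TB "
      f"(~{w_per_tb_8_air * 8:.1f} W idle implied for the 8TB drive)")
```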

Helium drives also typically run 4˚ to 5˚C cooler, which lowers power and cooling costs.

Cooler operation also results in better reliability and enables systems with higher storage densities.

Quieter operation and up to 50% lower weight per TB improve environmental conditions in high-density deployments and enable more storage capacity where building codes enforce floor-loading limits.

Shingled Magnetic Recording Boosts Capacity

Shingled magnetic recording (SMR) technology complements helium technology by providing an additional 16% increase in areal density compared to same-generation drives using conventional magnetic recording (CMR) technology. Physically, this is done by writing the data sequentially, then overlapping (or “shingling”) it with another track of data.
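Because each track partially covers the one before it, an SMR zone has to be written largely sequentially: changing one track in place means rewriting everything shingled after it in that zone. The toy sketch below illustrates that general write-amplification behavior; it is not Western Digital’s firmware, and the class and method names are purely illustrative.

```python
# Minimal toy model of a shingled zone: sequential appends are cheap,
# but rewriting an earlier track forces the shingled tail of the zone
# to be rewritten as well (write amplification).

class SmrZone:
    def __init__(self, tracks):
        self.data = [None] * tracks   # one entry per shingled track
        self.write_pointer = 0        # next track that can be appended

    def append(self, payload):
        """Sequential write: just advance the write pointer."""
        if self.write_pointer >= len(self.data):
            raise IOError("zone full; reset the zone before rewriting")
        self.data[self.write_pointer] = payload
        self.write_pointer += 1

    def rewrite(self, track, payload):
        """In-place update clobbers the tracks shingled on top, so the
        tail of the zone must be buffered and written again.
        Returns the number of tracks physically rewritten."""
        tail = self.data[track + 1:self.write_pointer]   # data that would be clobbered
        self.data[track] = payload
        for offset, old in enumerate(tail, start=track + 1):
            self.data[offset] = old                      # rewrite the shingled tail
        return len(tail) + 1

zone = SmrZone(tracks=8)
for block in ["a", "b", "c", "d"]:
    zone.append(block)
print(zone.rewrite(0, "A"))  # 4 tracks rewritten just to change one
```

This is why SMR drives favor archival and read-mostly workloads, a point several of the comments below return to.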

SOURCES- Western Digital
Written By Brian Wang, Nextbigfuture.com

23 thoughts on “Western Digital 20 Terabyte Hard Drives Will Power Zettabyte Age”

  1. That does raise the issue of why the connectors for bike (and car) tires are so poorly designed.
    I was thinking of bringing the hard disk into a shop every few years.
    I will concede that keeping the helium from leaking out of the refill connection between refills is an issue.

  2. It’s hard enough sealing the things as it is. Making another hole (the refill valve) is asking for trouble. Remember the last time you tried to fill a bike tire with air, and it kept leaking until you finally got the pump connector on right? Imagine regular people doing the same thing for an SMR disk. If you are that desperate to recover an old SMR disk, best to give it to a data recovery expert, who would likely just crack the disk open in a helium-filled chamber and do a final long read off the drive.

  3. I knew SMR was bad, but I didn’t think they were “write only a few times” bad. That would make them only marginally different from something like a Blu-ray RW disc.

    Toshiba, I believe, has 16TB 2.5-inch and 60TB 3.5-inch enterprise drives, but those cost an arm and a leg.

  4. 20TB isn’t all that impressive. Nimbus Data is currently shipping 100TB SSD drives in 3.5″ form and 35TB in 2.5″.

  5. NVMe is just an interface vs. something like 12G SAS. It’s not a power consumption thing.

    Aside from enterprise arrays and backend NAS for all the major clouds, I use NVMe for my personal laptop… an ASUS Zenbook pro, for about 2 years now. No problems with it.

  6. I did figure that for enterprise cost would be swallowed in favor of the other things.

    I was under the impression that NVMe products could get rather toasty?

    I have a SATA SSD, but I’ve been holding off NVMe until I’m sure it won’t just die on me due to overheating.

  7. Yes, they do seem to lag in actually releasing them after announcing, probably due to generational scale-up issues for VNAND.

    The scale-up for the 136-layer generation should be faster, considering it has fewer process steps.

  8. I work for a $12 billion storage vendor. All the designs I’ve done in the past few years are SSD/NVMe. Power, space, and cooling are primary factors, cost of device is a bonus.

  9. When a new device comes out, storage vendors sample it and qualify it in their products. Sometimes current generation and sometimes next generation. The process of qualifying a new SSD is amazingly complex and can take 1-2. Here on NBF we see stuff that is research or grad papers or occasionally preannounced and rarely something you can order today. My example is something you can order today.

  10. It’s not improving in cost fast enough. It hit a snag due to PMR limits and HAMR/MAMR tech difficulties, and now SSDs have completely shredded them on density per volume and speed, if not cost.

    This 20TB announcement should have come 3-4 years ago, at a much lower price tag, but HAMR just wouldn’t play ball.

  11. That’s old; the last press release I saw mentioned a 128 TB SSD based on their 90-96 layer VNAND generation.

    That’s not even mentioning the new 136 layer VNAND they just announced at the Flash Memory Summit in August.

    Edit: That 128 TB SSD was still 2.5 inch by the way….

  12. Whatever happened to CMR meaning Colossal Magnetoresistance?

    This was supposed to be heralded as the next step beyond the GMR (Giant Magnetoresistance) tech that previously massively increased magnetic field sensitivity.

    I guess, like room-temperature superconductors, it fell into the ‘another 10 years’ problem.

  13. SMR (Shingled Magnetic Recording) drives are meant for archive or read mostly type applications. By the time you write their capacity a few times, the odds given by the MTBF are that they will fail. In a typical array, a failure means a rebuild and often in practice a second one fails during the rebuild process… hence triple parity and ECC coding schemes.

    SSDs do need to be powered on at least occasionally, but in modern arrays they’re on all the time anyway, so it’s not a problem unique to SSD. For magnetic media, the industry has done RAID scrubs for decades. Back during the MAID craze, where you powered off spinning rust when it wasn’t in use, the lubricant would go bad after a while and cause failures as well. Spinning rust is generally on all the time these days too. The power draw difference between spinning rust and SSD is at least an order of magnitude, so if you’re operating the gear, power consumption, cooling, etc. all factor in SSD’s favor in the data center. SSD is still much more expensive than spinning rust, so you generally see it in tier 1-2 applications. Either way, 33TB SSD/NVMe has been out for a while and now 18TB SMR. The cost for SMR is lower than for SSD, but you need to factor power and cooling into the equation as well. Both give decent density, but I’d posit that they are targeted at different workloads.

  14. SMR, which means it will suck for IO, and a helium drive, which means it has a finite lifespan from date of manufacture due to helium loss (these drives are not designed for helium refills). So this is your bottom-tier storage that still needs to be freely rewritable. Otherwise, WORM media like those Blu-ray jukebox appliances for nearline storage, or classic LTO tape for offline, are still the winners in their respective categories.

    Those helium drives are causing people to hoard non-helium disks now, when they don’t want to use tape, are afraid of the lifetime of writable Blu-ray media, and worry about SSDs losing data if they aren’t powered on regularly.

  15. I’d take issue with the assumption.

    Short stroking, adding more and more smaller spindles in order to achieve higher performance, peaked over a decade ago. The industry direction turned first to SSD via SAS, and then to SSD via NVMe. With the current generation of storage arrays from multiple vendors, the time through the TOR (for iSCSI/NAS) or FC switch (~160us) is greater than the time it takes the array to serve up the IO (<100us). Throw in some distance or a couple of switch hops (TOR/MDE) and that 100us IO served up turns into .75ms. Shortening the distance is a problem that higher densities solve (keep it in the same rack as the compute), albeit on much more performant platforms (SSD/NVMe) than you can ever achieve with spinning rust. In 2U or 4U you can get effective logical densities of up to 10PB given modern inline dedupe and compression, with all the HA and triple-parity or ECC protection you’d expect from an enterprise-grade storage product.

    High density spinning rust generally has a cost advantage at the expense of latency, not to be confused with throughput. In many object stores, public or private, performance is a measure of time to first byte and throughput rather than latency and IOPS. As tier 3-5 storage, spinning rust is still useful due to cost advantages… for now.

  16. This reads like the opening paragraph of a PC Gamer How-to Guide on PC building c. 1998. Does anyone really still not know that seek time is a thing?

  17. It ain’t the capacity of your drive that matters, it’s the IO ability. We still use 1G hard drives on our mainframe to maintain its subsecond transaction performance.

    High-capacity HDs are only good for storing movies and pictures. If you want data-processing performance, go with SSDs, and the more the better. It is the cross-sectional area of the pipe that matters, not the capacity of the tank.
