cross-posted from: https://beehaw.org/post/24650125

Because nothing says “fun” quite like having to restore a RAID that just saw 140TB fail.

Western Digital this week outlined its near-term and mid-term plans to increase hard drive capacities to around 60TB and beyond with optimizations that significantly increase HDD performance for the AI and cloud era. In addition, the company outlined its longer-term vision for hard disk drives’ evolution that includes a new laser technology for heat-assisted magnetic recording (HAMR), new platters with higher areal density, and HDD assemblies with up to 14 platters. As a result, WD will be able to offer drives beyond 140 TB in the 2030s.

Western Digital plans to volume-produce its first commercial hard drives featuring HAMR technology next year, with capacities starting at 40TB (CMR) or 44TB (SMR) in late 2026 and production ramping through 2027. These drives will use the company’s proven 11-platter platform with high-density media, as well as HAMR heads with edge-emitting lasers that heat the iron-platinum alloy (FePt) on top of the platters to its Curie temperature (the point at which its magnetic properties change), reducing its magnetic coercivity before data is written.

  • MonkeMischief@lemmy.today · ↑25 · 7 hours ago

    Okay cool, cool, so does this mean ridiculous data centers will use these things, and then can I get another 4TB RED for my NAS so I can fit my whole life on a mirrored total of 8TB without paying 8x what it’s worth, please?

    Thaaaaanks…

  • Shady_Shiroe@lemmy.world · ↑28 · 12 hours ago

    I just hope smaller sized drives become cheaper. The word “hope” is doing a lot of heavy lifting here.

  • FirmDistribution@lemmy.world · ↑97 ↓1 · edited · 16 hours ago

    with optimizations that significantly increase HDD performance for the AI and cloud era

    Can somebody do anything with a normal consumer in mind these days? 😭

    • rumba@lemmy.zip · ↑5 · 8 hours ago

      Normal consumers can install Jellyfin. At some point they’ll make downloading a crime; it wouldn’t hurt people to have a decent collection of stuff ready for that day.

    • mycodesucks@lemmy.world · ↑22 · 12 hours ago

      No, and it’s by design.

      You’re gonna lease a tablet and use cloud-based storage services and like it.

      The dystopia is here.

      • Kushan@lemmy.world · ↑5 · 7 hours ago

        It’s about the storage I have in my server right now - using 15 drives ☠️

    • dual_sport_dork 🐧🗡️@lemmy.world · ↑63 · 16 hours ago

      Not until somebody shuts off the investor money faucet for AI. Then they’ll come crawling back — although inevitably not until after they go whining to all the world’s governments about wanting a bailout.

      But hey, look at the bright side. We’ve already had the cryptocurrency mining boom and bust, and “AI” boom and soon to be bust. There’s still time for some idiot to invent the next tech scam fad which will conveniently require a shitload of hardware for no recognizably useful purpose.

      • cecilkorik@piefed.ca · ↑7 · 9 hours ago

        Then they’ll come crawling back — although inevitably not until after they go whining to all the world’s governments about wanting a bailout.

        And don’t forget the part where, whether they get a bailout or not, they’ll still have to double the prices of everything to make up for all the money they lost on that stupid AI bubble exploding in their face (which all of us are somehow to blame for, obviously, which is why we have to pay them back for it)

    • Dremor@lemmy.world · ↑5 · 6 hours ago

      My Z2 had a drive failure recently, with 4TB drives. Took me almost 3 days to resilver the array 😅. Fortunately I had a hot spare set up, so the rebuild started as soon as the drive failed, but now a second drive is showing signs of failing soon, so I had to pay the AI tax (168€) to get one ASAP (arriving Monday), as well as a second, cheaper one (around 120€) which won’t arrive until the end of April.

      • SmoothLiquidation@lemmy.world · ↑9 · 12 hours ago

        When you are running a server just to store files (a NAS), you generally set it up so multiple physical hard disks are joined together into an array so that if one fails, none of the data is lost. You can replace a failed drive by taking it out and putting in a new working one, and then the system has to copy all of the data over from the other drives. This process can take many hours to run even with the 10-20 TB drives you get today, so doing the same thing with a 140 TB drive would take days.
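To put rough numbers on the rebuild times described above, here is a sketch. It assumes a sustained sequential write speed of 250 MB/s, which is optimistic; real rebuilds run slower under live load and random I/O.

```python
def rebuild_hours(capacity_tb: float, write_mb_per_s: float = 250.0) -> float:
    """Best-case hours to rewrite one full drive at a sustained speed."""
    capacity_mb = capacity_tb * 1_000_000  # decimal units, as drives are sold
    return capacity_mb / write_mb_per_s / 3600

for tb in (10, 20, 140):
    print(f"{tb:>3} TB -> ~{rebuild_hours(tb):.0f} h")
```

Under these assumptions a 10 TB drive rebuilds in about 11 hours, while a 140 TB drive works out to roughly 156 hours, about six and a half days, even in this best case.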

    • Grapho@lemmy.ml · ↑3 · 7 hours ago

      And if it breaks at 10 months and they take another 2 to send your replacement back, well, they no longer need to send one that actually works this time either

  • zorflieg@lemmy.world · ↑9 · 13 hours ago

    I wonder why current consumer HDDs don’t have NVMe connectors on them. I know speeding up the bus won’t make the spinning rust any faster to access, but the cache RAM would probably benefit from not being capped at ~550MB/s.

  • billwashere@lemmy.world · ↑19 · 15 hours ago

    This would be a bitch to have to rebuild in a RAID array. At some point a drive can get TOO big, and this is looking to cross that line.

    • irmadlad@lemmy.world · ↑11 · 15 hours ago

      At some point a drive can get TOO big

      I was thinking the same. I would hate to toast a 140 TB drive. I think I’d just sit right down and cry. I’ll stick with my 10 TB drives.

      • rtxn@lemmy.world · ↑14 · edited · 15 hours ago

        This is not meant for human beings. A creature that needs over 140 TB of storage in a single device can definitely afford to run them in some distributed redundancy scheme with hot swaps and just shred failed units. We know they’re not worried about being wasteful.

        • MonkeMischief@lemmy.today · ↑3 · 7 hours ago

          This is not meant for human beings.

          This is for like, Smaug but if he hoarded classic anime and the entirety of Steam or something. Lol

        • thejml@sh.itjust.works · ↑4 · 14 hours ago

          Rebuild time is the big problem with this in a RAID Array. The interface is too slow and you risk losing more drives in the array before the rebuild completes.

          • rtxn@lemmy.world · ↑6 · edited · 14 hours ago

            Realistically, is that a factor for a Microsoft-sized company, though? I’d be shocked if they only had a single layer of redundancy. Whatever they store is probably replicated between high-availability hosts and datacenters several times, to the point where losing an entire RAID array (or whatever media redundancy scheme they use) is just a small inconvenience.

            • enumerator4829@sh.itjust.works · ↑1 · 5 hours ago

              Fairly significant factor when building really large systems. If you do the math, there end up being relationships between:

              • disk speed
              • targets for ”resilver” time / risk acceptance
              • disk size
              • failure domain size (how many drives do you have per server)
              • network speed

              Basically, for a given risk acceptance and total system size there is usually a sweet spot for disk sizes.

              Say you want 16TB of usable space, and you want to be able to lose 2 drives from your array (fairly common requirement in small systems), then these are some options:

              • 3x16TB triple mirror
              • 4x8TB Raid6/RaidZ2
              • 6x4TB Raid6/RaidZ2

              The more drives you have, the better recovery speed you get and the less usable space you lose to replication. You also get more usable performance with more drives. Additionally, smaller drives are usually cheaper per TB (down to a limit).

              This means that 140TB drives become interesting if you are building large storage systems (probably at least a few PB), with low performance requirements (archives), but there we already have tape robots dominating.

              The other interesting use case is huge systems, many petabytes up into exabytes. More modern redundancy and caching schemes mitigate some of the issues described above, but they are usually only relevant when building really large systems.

              tl;dr: arrays of 6-8 drives at 4-12TB are probably the sweet spot for most data hoarders.
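The usable-capacity trade-off across those three 16TB-usable example layouts can be checked with a quick sketch (idealized figures; in practice ZFS metadata and padding shave off some usable space):

```python
def usable_tb(n_drives: int, drive_tb: float, parity: int) -> float:
    """Usable capacity of an array that tolerates `parity` drive failures
    (triple mirror: parity = n-1; RAID6/RAIDZ2: parity = 2)."""
    return (n_drives - parity) * drive_tb

layouts = [
    ("3x16TB triple mirror", 3, 16, 2),
    ("4x8TB RAID6/RAIDZ2",   4,  8, 2),
    ("6x4TB RAID6/RAIDZ2",   6,  4, 2),
]
for name, n, size, parity in layouts:
    raw = n * size
    usable = usable_tb(n, size, parity)
    print(f"{name}: {usable:.0f} TB usable of {raw} TB raw "
          f"({usable / raw:.0%} efficiency)")
```

All three give 16 TB usable while surviving two failures, but the efficiency climbs from 33% (mirror) to 50% to 67% as the drive count rises, which is the "less usable space lost to replication" point above.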

            • thejml@sh.itjust.works · ↑1 · 13 hours ago

              True, but that’s going to really push your network links just to recover. Realistically, something like ZFS or RAID-6 with extra hot spares would help reduce the risk, but it’s still a non-trivial amount of time. Not to mention the impact on normal usage during that period.

              • frongt@lemmy.zip · ↑3 · 10 hours ago

                Network? Nah, the bottleneck is always going to be the drive itself. Storage networks might pass absurd numbers of Gbps, but ideally you’d be resilvering from a drive on the same backplane, and SAS-4 tops out at 24 Gbps, but there’s no way you’re going to hit that write speed on a single drive. The fastest retail drives don’t do more than ~2 Gbps. Even the Seagate Mach.2 only does around twice that due to having two head actuators.
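A quick unit conversion backs up the interface-vs-drive point above (raw line rates only, ignoring encoding and protocol overhead):

```python
def gbps_to_mb_s(gbps: float) -> float:
    """Convert a raw line rate in Gbit/s to MB/s (decimal units)."""
    return gbps * 1000 / 8

sas4_link = gbps_to_mb_s(24)   # SAS-4 line rate: ~3000 MB/s
fast_hdd = gbps_to_mb_s(2.2)   # ~275 MB/s, around the fastest single-actuator CMR drives
print(f"SAS-4 link: {sas4_link:.0f} MB/s, drive: {fast_hdd:.0f} MB/s "
      f"-> the link has ~{sas4_link / fast_hdd:.0f}x headroom")
```

With roughly an order of magnitude of headroom on the link, the drive's own sustained write speed stays the rebuild bottleneck.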

    • non_burglar@lemmy.world · ↑7 · 15 hours ago

      It doesn’t really matter, the current limitations are not so much data density at rest, but getting the data in and out at a useful speed. We breached the capacity barrier long ago with disk arrays.

      SATA will no longer be improved; we now need U.2 designs for data transport that are built for storage. This exists, but needs to filter down through industrial applications to get to us plebs.

    • pHr34kY@lemmy.world · ↑2 · edited · 12 hours ago

      I don’t get how a single person would have that much data. My whole life, from the first shot I took on a digital camera in 2001, fits onto a 4TB drive.

      …and even then, two thirds of it is just pirated movies.

      • billwashere@lemmy.world · ↑8 · 13 hours ago

        Amateur 😀

        But seriously I probably have close to 100 TB of music, TV shows, movies, books, audiobooks, pictures, 3d models, magazines, etc.

      • panda_abyss@lemmy.ca · ↑1 ↓1 · 13 hours ago

        I need a home for my orphaned podman containers /s

        I think this is better targeted to small and medium businesses.

        If you run this as a NAS, you could easily have all your business’s files in one place without needing complex networking.

  • Decronym@lemmy.decronym.xyz (bot) · ↑8 · edited · 47 minutes ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    NAS   Network-Attached Storage
    RAID  Redundant Array of Independent Disks for mass storage
    SATA  Serial AT Attachment interface for mass storage
    ZFS   Solaris/Linux filesystem focusing on data integrity

    4 acronyms in this thread; the most compressed thread commented on today has 14 acronyms.

    [Thread #72 for this comm, first seen 8th Feb 2026, 00:30] [FAQ] [Full list] [Contact] [Source code]

  • iturnedintoanewt@lemmy.world · ↑5 ↓1 · 12 hours ago

    Doesn’t this sound awfully similar to MiniDisc technology? Those discs were only writable when heated by a laser. They were pretty impressive for the time… but not very fast, especially when writing.

  • solrize@lemmy.ml · ↑14 ↓1 · 16 hours ago

    As a result, WD will be able to offer drives beyond 140 TB in the 2030s.

    Um thanks but tell us about 2026?

    • Korkki@lemmy.ml · ↑2 · 11 hours ago

      Also, what current consumer-level application could require 140TB of storage? That would be some advanced-level data hoarding or smth.

      • Andres@social.ridetrans.it · ↑3 · edited · 11 hours ago

        @Korkki @just_another_person I see 4K HDR Blu-ray movie rips these days on the order of 50GB (edit: e.g., Eddington.2025.MULTi.VFF.2160p.DV.HDR.BluRay.REMUX.HEVC-[BATGirl]: 77.73G).

        Which is too rich for my blood (I’m still watching on 1080p screens over here), but for someone with the right kind of home theater… that’s only ~280 movies on a 14TB drive. Lots of movie collections, even in the olden days of physical VHS and DVDs, span 1,000+ movies.
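The movies-per-drive arithmetic in the two comments above reduces to one line (decimal units, as drives are marketed; an assumed average rip size of 50GB):

```python
def movies_per_drive(drive_tb: float, movie_gb: float = 50.0) -> int:
    """How many movies of a given average size fit on a drive (decimal TB/GB)."""
    return int(drive_tb * 1000 // movie_gb)

print(movies_per_drive(14))    # a 14TB drive holds ~280 such rips
print(movies_per_drive(140))   # a 140TB drive holds ~2800
```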

        • Zorque@lemmy.world · ↑1 · edited · 8 hours ago

          14TB or 140TB? The latter is what’s being talked about, so that’s more like 2,800 movies, which more than covers that 1,000+ movie criterion.

          • Andres@social.ridetrans.it · ↑1 · 8 hours ago

            @Zorque I’m saying that 14TB will only fit 280 (or more likely fewer) of those ultra-HQ movies, so 140TB (or, in the lead-up to that, 100TB, since they’re talking about 5+ years before they even get close to 140TB) is reasonable for a 1,000-2,000 movie collection. Obviously I’m being loose with the numbers, but when one single movie can consume almost 80GB… well, you can start to understand consumer demand for 100+TB drives.