• GreenKnight23@lemmy.world · +27 / -3 · 1 day ago

    no thanks Seagate. the trauma of losing my data because of a botched firmware with a ticking time bomb kinda put me off your products for life.

    see you in hell.

    • Vinstaal0@feddit.nl · +1 · 29 minutes ago

      Some of Seagate’s drives have terrible scores on things like Backblaze’s reports. They are probably the worst brand, but also generally the cheapest.

      I have been running a RAID of old Seagate Barracudas for years at this point, including a lot of boot cycles and me forcing the system off because TrueNAS has issues or whatnot, and for some fucking reason they won’t die.

      I have had a WD Green SSD that I use for the TrueNAS boot drive die, I had a WD external drive have its controller die (the drive inside still works), and I had some crappy mismatched WD drives in a RAID 0 for my Linux ISOs that failed as well.

      Whenever the Seagates start to die, I guess I’ll be replacing them with Toshibas unless somebody has another suggestion.

    • MystikIncarnate@lemmy.ca · +2 · 8 hours ago

      I had a similar experience with Samsung. I had a bunch of 870 EVO SSDs up and die for no reason. Turns out it was a firmware bug in the drive, and they just needed an update, but the update has to be applied before the drive fails.

      I had to RMA the failures. The rest were updated without incident and have been running perfectly ever since.

      I’d still buy Samsung.

      I didn’t lose a lot of data, but I can certainly understand holding a grudge over something like that. From the other comments here, hate for Seagate isn’t exactly rare.

    • muusemuuse@sh.itjust.works · +11 · 13 hours ago

      I can certainly understand holding grudges against corporations. I didn’t buy anything from Sony for a very long time after their fuckery with George Hotz, and Nintendo’s latest horseshit has me staying away from them too. But the Seagate thing was a single firmware bug that locked down hard drives (note, the data was still intact) a very long time ago. Seagate even issued a firmware update to prevent the bug from biting users it hadn’t hit yet, but firmware updates at the time weren’t really something people thought to do, and operating systems didn’t check for them automatically back then like they do now.

      Seagate fucked up, but they also did everything they could to make it right. That matters. Plus, look at their competition. WD famously lied about their Red drives not being SMR when they actually were. And I’ve only ever had WD hard drives and SanDisk flash drives die on me. And guess who owns SanDisk? Western Digital!

      I guess if you must go with another company, there are the louder and more expensive Toshiba drives, but I have never used those before, so I know nothing about them aside from their reputation for being loud.

      • needanke@feddit.org · +2 · 10 hours ago

        And I’ve only ever had WD hard drives and sandisk flash drives die on me

        Maybe it’s confirmation bias, but almost all the memory that has failed on me has been SanDisk flash storage. The only exception was a Corsair SSD, which failed after 3 years as my main laptop drive plus another 3 as a server boot and log drive.

    • ZILtoid1991@lemmy.world · +8 · 22 hours ago

      Can someone recommend me a hard drive that won’t fail immediately? Internal, not SSD (the cheap ones die even sooner), and I need it for archival reasons, not speed or fancy new tech; otherwise I already have two SSDs.

      • AdrianTheFrog@lemmy.world · +3 · 9 hours ago

        I think refurbished enterprise drives usually have a lot of extra protection hardware that helps them last a very long time. Seagate advertises a mean time to failure of ~200 years on their Exos drives at a moderate level of usage. I feel like it would almost always be a better choice to get more refurbished enterprise drives than fewer new consumer drives.
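
        (For a rough sense of what a ~200-year MTTF means in practice, here is a back-of-the-envelope sketch, assuming a constant failure rate; it just reuses the ~200-year figure from above and is not Seagate’s published math. It works out to an annualized failure rate of roughly half a percent, i.e. a fleet-average statistic, not a promise that any single drive lasts two centuries.)

            import math

            # Back-of-the-envelope only: convert the "~200 years" MTTF quoted above
            # into an annualized failure rate (AFR), assuming a constant failure
            # rate (exponential model). The 200-year input is the figure from the
            # comment, not a verified spec.
            mttf_years = 200
            mttf_hours = mttf_years * 8766             # ~8766 hours in an average year

            afr = 1 - math.exp(-1 / mttf_years)        # chance a given drive fails within a year
            print(f"MTTF ~ {mttf_hours:,} hours")      # ~1,753,200 hours
            print(f"AFR  ~ {afr:.2%} per drive-year")  # ~0.50%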

        I personally found an 8 TB Exos on ServerPartDeals for ~$100, which seems to be in very good condition after checking its SMART data. I’m just using it as a backup, so there isn’t any data on it that isn’t also somewhere else, and I didn’t bother with redundancy.
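
        (If anyone wants to script that SMART check rather than eyeballing it, a minimal sketch is below. It assumes smartmontools is installed, that smartctl is version 7.0 or newer so it can emit JSON, and that the drive shows up as /dev/sda; the device path and the exact JSON key names may differ on your system, and it usually needs root.)

            import json
            import subprocess

            DEVICE = "/dev/sda"  # assumption: substitute the drive you actually bought

            # Ask smartctl for a full report as JSON (smartmontools >= 7.0).
            # No check=True here: smartctl uses non-zero exit codes as status flags.
            result = subprocess.run(
                ["smartctl", "-a", "--json", DEVICE],
                capture_output=True, text=True,
            )
            report = json.loads(result.stdout)

            print("Model:         ", report.get("model_name"))
            print("Power-on hours:", report.get("power_on_time", {}).get("hours"))
            print("SMART passed:  ", report.get("smart_status", {}).get("passed"))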

        I’m not an expert, but this is just from the research I did before buying that backup drive.

      • lightnsfw@reddthat.com · +10 · edited · 20 hours ago

        If you’re relying on one hard drive not failing to preserve your data, you are doing it wrong from the jump. I’ve got about a dozen hard drives in play from Seagate and WD at any given time (mostly Seagate because they’re cheaper and I don’t need speed either) and haven’t had a failure yet. Backblaze used to publish stats about the hard drives they use; not sure if they still do, but that would give you some data to go off. Seagate did put out some duds a while back, but other models are fine.

        • tempest@lemmy.ca · +6 · 17 hours ago

          The Backblaze stats were always useless because they would only tell you what failed long after that run of drives had hit the market.

          There are only 3 manufacturers at this point, so just buy one or two of each color and call it a day. ZFS in RAID-Z2 is good enough for most things at this point.

      • Ushmel@lemmy.world · +5 · 19 hours ago

        My WD Red Pros have almost all lasted me 7+ years, but the best thing (and probably cheapest nowadays) is a proper 3-2-1 backup plan (three copies of the data, on two different types of media, one of them off-site).

      • daq@lemmy.sdf.org · +5 · 22 hours ago

        Hard drives aren’t great for archival in general, but any modern drive should work. Grab multiple brands and make at least two copies. Look for sales. Externals regularly go below $15/TB these days.

        • Ushmel@lemmy.world · +3 / -1 · 19 hours ago

          Word to the wise: those externals usually won’t last 5+ years of constant use as internals.

          • daq@lemmy.sdf.org · +1 · 17 hours ago

            I’ve got 6 in a random mix of brands (Seagate and WD), 8-16 TB, that are all older than that. Running 24/7 storing mostly random shit I download. Pulled one out recently because the USB controller died. It still works in a different enclosure now.

            I’d definitely have a different setup for data I actually cared about.

        • WhyJiffie@sh.itjust.works · +11 · edited · 21 hours ago

          they were selling wd red (pro?) drives with smr tech, which is known to be disastrous for disk arrays because both traditional raid and zfs tend to throw them out. the reason is that when you are filling the drive up, especially quickly, it eventually can’t keep up with your writes, and write operations start taking a very long time, because the disk needs to rearrange its data before it can write more. but the raid layer just sees that the drive hasn’t responded to a write command for a long time, and assumes that’s because the drive is bad.

          it was a few years ago, but it was a shitfest because they didn’t disclose it, and people were expecting that nas drives would work fine in their nas.

          • IronKrill@lemmy.ca · +1 · 9 hours ago

            they were selling wd red (pro?) drives with smr tech

            Didn’t they use to have only one “Red” designation? Or maybe I’m hallucinating. I thought “Red Pro” was introduced after that kerfuffle to distinguish the SMR from the CMR.

            • WhyJiffie@sh.itjust.works · +1 · 7 hours ago

              I don’t know, because I haven’t been around long enough, but yeah, possibly they started using the red pro name there.

          • Ushmel@lemmy.world · +2 · 19 hours ago

            I’ve had a couple randomly drop from my array recently, but they were older so I didn’t think twice about it. Does this permafry them, or can you remove them from the array and reinitialize them to get them working again?

            • WhyJiffie@sh.itjust.works · +1 · 7 hours ago

              well, it depends. if they were dropped just because they are smr and were writing slowly, I think they are fine. but otherwise…

              what array system do you use? some raid software, or zfs?

      • GreenKnight23@lemmy.world · +4 · 15 hours ago

        https://www.eevblog.com/forum/chat/whats-behind-the-infamous-seagate-bsy-bug/

        this thread has multiple documented instances of poor QA and firmware bugs that Seagate shipped at the cost of their own customers.

        my specific issue was even longer ago, 20+ years. there was a bug in the firmware where a runtime counter overflowed an int limit. it caused a cascade failure in the firmware and locked the drive up once it had been running for that maximum int limit. this is my understanding of it anyway.

        the only solution was to purchase a replacement board online for the exact model of your HDD, swap it, and perform a firmware flash before time ran out. I think you could also use a clip and force-program the firmware.

        at the time a new board cost as much as a new drive, money I didn’t have back then.

        eventually I moved past the 1TB of data I lost, but I will never willingly purchase another Seagate.

      • skankhunt42@lemmy.ca · +6 · 22 hours ago

        In my case, 10+ years ago I had 6 × 3 TB Seagate disks in a software RAID 5. Two of them failed (RAID 5 only tolerates a single failure), and it took me days to force the array back together and get some of the data off. Now I use WD and RAID 6.

        I read 3 or 4 years ago that it was just the 3 TB Reds I used that had a high failure rate, but I’m still only buying WDs.

        • HiTekRedNek@lemmy.world · +4 · 15 hours ago

          I had a single 2TB Red in an old TiVo Roamio for almost a decade.

          Pulled it out this weekend and finally tested it. Failed.

          I was planning to move my 1.5TB music collection to it. Glad I tested it first, lol.