There are quite a few brands to choose from when purchasing hard disks or SSDs, but which do you find the most reliable? I've personally had great experiences with Seagate, but I heard Chris Titus had the opposite experience with them.

So I'm curious: which manufacturers do people here swear by, and why? Which ones have you had the worst experience with?

Bonehead

I learned a long time ago that the manufacturer doesn’t matter much in the long run. They all have a bad model occasionally. I have 500GB Seagate drives that still work, and some 1TB drives that died within a year. I’ve had good luck with recent WD Red 4TB drives, but my 2TB Green drives have all died on me. I had some Hitachi Deskstar drives that worked perfectly for years when no one would touch them because of a bad production run. I currently have a Toshiba 8TB, a model I had never heard of before, that seems to have been rock solid for the last year.

Pick a size that you want, look at what’s available, and research the reasonably priced ones to see if anyone is complaining about them. Review sites can be useful, but raw complaints in user forums will give you a better idea of which ones to avoid.

rentar42

Can confirm the statistics: I recently consolidated about a dozen old hard disks of various ages. Quite a few of them had a couple of bad blocks, and two actually failed. One disk was especially noteworthy in that it was still fast, error-free, and without complaints. That one was a Seagate ST3000DM001, a model so notoriously bad that it has its own Wikipedia entry: https://en.wikipedia.org/wiki/ST3000DM001
Other “better” HDDs were entirely unresponsive.

Statistics only really matter if you have many, many samples. Most people (even enthusiasts with a homelab) won’t be buying hundreds of HDDs in their life.

@blahsay@lemmy.world

Toshiba oddly enough. I’ve been burnt by the big names like Seagate a few times now.

RedEye FlightControl

Hard disks, WD/HGST.

I’ve had good luck with EMC and NetApp for enterprise solutions, Synology for SMB-class NAS storage, and I rely on TrueNAS/ZFS on Supermicro hardware at home, which has been rock solid for years and years.

With spinning disks, I preferred Seagate over Western Digital, and then moved to HGST.

Back in those days, Western Digital had the best warranty, and I used it on every Western Digital drive I owned. But a warranty replacement still meant several days without a drive, and I still needed a backup drive in the meantime.

So it was better to buy two drives at 1.3× the price of one Western Digital. And then I realized that none of my Seagate or HGST drives ever failed on me.

For SATA SSDs, I just get a 1TB to maximize the cache and wear leveling, and pick a brand where the name can be pronounced.

For NVMe, as a work performance drive, I pick a 2TB drive with the best write cache and sustained write speed at second-tier pricing.

For a general NVMe drive, I pick at least a 1TB model from anyone who has been around long enough to have reviews written about them.

Yup, knock on wood, I’ve had lots of Seagate drives over the decades and never had any of them go bad. I’ve had two WD drives, and they both failed.

@jkrtn@lemmy.ml

Why does 1TB help with the wear leveling?

An analogy is writing everything on one piece of paper with a pencil. When you need to change or remove something, you cross it out instead of erasing, and write the new data to a clean part of the paper. When there are no more clean areas, you use the eraser to erase a crossed-out section.

The larger the paper, the less frequently you come back to the same area with the eraser.

Using an eraser on paper slowly degrades the paper until that section tears and never gets used again.

In general and simplifying, my understanding is:

There is the area where data is written, and there is the File Allocation Table that keeps track of where files are placed.

When part of a file needs to be overwritten (either because data is inserted or because there is new data), the data is actually written to a new area and the old data is left as is. The File Allocation Table is updated to point to the new area.

Eventually, as the disk gets used, that “new area” comes back around to a space that was previously written to but is no longer in use, and that old data gets physically overwritten.

Each time a spot is physically overwritten, it very very slightly degrades.

With a larger disk, it takes longer to come back to a spot that has already been written to.

Oversimplifying: previously written data that is no longer part of a file is effectively lost, in the way that shredding a paper effectively loses whatever was written on it, and in a more secure way than on a spinning disk.
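A toy simulation (my own illustration, not how any real SSD controller works) of why a bigger pool of blocks spreads the wear thinner: with the same number of writes, each block on the larger "drive" gets erased far fewer times.

```python
import random

def simulate_erases(total_blocks, writes):
    """Perform `writes` logical updates across `total_blocks` flash
    blocks, erasing the least-worn invalidated block whenever no
    fresh block remains. Returns the max erase count of any block."""
    erase_counts = [0] * total_blocks
    fresh = list(range(total_blocks))   # never-written blocks
    random.shuffle(fresh)
    stale = []                          # written, data since invalidated
    for _ in range(writes):
        if fresh:
            block = fresh.pop()
        else:
            # no clean blocks left: erase (and wear) the least-worn one
            block = min(stale, key=lambda b: erase_counts[b])
            stale.remove(block)
            erase_counts[block] += 1
        stale.append(block)             # this data gets invalidated later
    return max(erase_counts)

random.seed(0)
small = simulate_erases(total_blocks=100, writes=10_000)
large = simulate_erases(total_blocks=400, writes=10_000)
print(small, large)  # prints: 99 24
```

Same 10,000 writes, but four times the blocks means roughly a quarter of the erases per block, which is the whole wear-leveling argument for bigger drives.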

@jkrtn@lemmy.ml

I thought you meant 1 TB as a sort of peak performer (better than 2+ TB) in this area. From the description, it’s more like 1 TB is kinda the minimum durability you want with a drive, but larger drives are better?

From the drives I have seen, usually there are 3 write-cache sizes.

Usually the smallest write-cache is for drives 128GB or smaller. Sometimes the 256GB is also here.

Usually the middle size write-cache is for 512GB and sometimes 256GB drives.

Usually the largest write-cache is only in 1TB and bigger drives.

Performance-wise for writes, you want the biggest write cache, so you want at least a 1TB drive.

For the best wear leveling, you want the drive as big as you can afford, while also looking at the type of memory chips. In order of longest-lasting first: single-level (SLC), multi-level (MLC), triple-level (TLC), quad-level (QLC).

@jkrtn@lemmy.ml

This is great, thank you! My next drive is going to be fast and durable.

With the very limited number of drives one uses at home, just get the cheapest ones (*), use RAID, and assume some drive may fail.

(*) whose performances meet your needs and from reputable enough sources

You can look at the Backblaze stats if you like stats, but if you have ten drives, a 3% failure rate leads to the same plan as 1% or 0.5%: use RAID and assume some drive may fail.

Also, IDK how good a reliability predictor the manufacturer is (as in every sector, reliability varies from model to model). Plus, you would basically go by price even if you needed a quantity of drives so great that the stats became meaningful (wouldn’t Backblaze use 100% one manufacturer otherwise?).
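To put rough numbers on that (the annual failure rates here are illustrative, not measured): with n independent drives each at annual failure rate p, the chance of at least one failing in a year is 1 − (1 − p)^n.

```python
# Chance of at least one drive failing in a year, out of 10 drives,
# assuming independent failures (a simplification).
def p_any_failure(rate, drives=10):
    return 1 - (1 - rate) ** drives

for rate in (0.005, 0.01, 0.03):
    print(f"{rate:.1%} AFR -> {p_any_failure(rate):.1%} chance of >=1 failure")
# 0.5% AFR gives roughly a 4.9% chance; 3.0% AFR gives roughly 26.3%
```

Whatever the exact rate, the takeaway is the same: over a few years of running ten drives, a failure is likely, so redundancy and backups matter more than the brand on the label.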

Politically Incorrect

Definitely Western Digital for used drives. Some time ago I sold about three old IDE drives from 15 or 20 years ago, and they were still working perfectly. I don’t know about WD drives nowadays, but they used to be very good, at least for me.
