How fucked am I from ever recovering anything from this HDD?

i.4cdn.org/wsg/1512428627087.webm


Just restore from backup

I was supposed to back it up this weekend, but got busy and it literally died on my birthday today.

If you had any chance, you fucked it when you opened it. But you already knew that, right?

>exposed platter

Nope.

I've fixed stuck heads before and got it up and running long enough to get the data off. I literally have no idea why it's refusing to read now.

Centrifugal force gets rid of minute dust particles. There are even filters in the drive to catch said particles.

how old are you?

just wipe it clean

I'm 30 now.

just put a magnet in the middle to hold the needle in place

Swap the platters into another drive of the same make and model, using clear plastic tape along the side to keep them aligned. Rossmann has a video on it.

Think I fixed it guys.

It is not total doom and gloom when a HDD is exposed.
You might scrape some sectors with a piece of dust that gets in while you have it open.

Definitely do not hover over it, as even specks of dandruff will do in a good few megabytes of disk.

Might be a firmware issue.

Buy the same model and FW revision of HDD and swap the PCB, see what happens.

You just totally fucked yourself. RIP your data.

This has happened to me with 3 HDDs

i lost so much OC porn.

I'm starting to think I'm actually fucked

CODE BLUE! I NEED A CRASH CART IN HERE, STAT!

>OC porn
How the fuck do you overclock porn...

Was it your OC?

Wait, how the fuck do your hard drives die instantly?
Don't they die slowly over time?
Are you saying I need backups of my backups?
What about my data hoarding?
n..nani?!

Hoarder and storage enthusiast here.

HDDs will die when they damn well please, but not nearly as readily as SSDs. Usually HDDs give some warning signs before death, or they actively corrupt small portions of data.
Have a backup.
Have a backup.
Backup your stuff.
Use checksums or an advanced filesystem.
No, not ReFS. Look into BTRFS or ZFS.
Use redundancy and a self-healing FS on the hot data, and keep checksums on the backup (rough sketch below).
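
For the "keep checksums on the backup" part, if the backup drive isn't on a checksumming FS, a dumb manifest gets you most of the way. Rough sketch only; the paths and filenames here are made up, adjust to your own layout:

    import hashlib, json, os, sys

    def sha256_of(path, bufsize=1 << 20):
        # stream the file so huge archives don't eat RAM
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while chunk := f.read(bufsize):
                h.update(chunk)
        return h.hexdigest()

    def build_manifest(root):
        # relative path -> sha256 for every file under the backup root
        manifest = {}
        for dirpath, _, files in os.walk(root):
            for name in files:
                full = os.path.join(dirpath, name)
                manifest[os.path.relpath(full, root)] = sha256_of(full)
        return manifest

    def verify(root, manifest_file):
        stored = json.load(open(manifest_file))
        current = build_manifest(root)
        for rel, digest in stored.items():
            if current.get(rel) != digest:
                print("BAD OR MISSING:", rel)

    if __name__ == "__main__":
        # usage: python manifest.py build /mnt/backup manifest.json
        #        python manifest.py verify /mnt/backup manifest.json
        cmd, root, mf = sys.argv[1:4]
        if cmd == "build":
            json.dump(build_manifest(root), open(mf, "w"), indent=1)
        else:
            verify(root, mf)

Keep the manifest somewhere other than the backup drive itself and you can tell bit rot apart from "the whole disk is gone".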

Thanks user, I appreciate the advice.
Hobbyist data hoarder, got 2 TB of archives + 3 TB on my computer.
Pretty sensitive data, especially political, survival, warfare, etc.

XFS is probably the filesystem with the best tools to unfuck something that has been fucked.

Granted, BTRFS and ZFS make detecting and maybe unfucking bit errors easier. But XFS unfucks the filesystem structure best of all.
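
If you ever need those tools, the routine is basically: unmount, dry-run check, then the real repair. Sketch only, and /dev/sdb1 is a made-up device path here:

    import subprocess

    DEV = "/dev/sdb1"  # hypothetical XFS partition, substitute your own

    subprocess.run(["umount", DEV], check=False)            # xfs_repair wants it unmounted
    subprocess.run(["xfs_repair", "-n", DEV], check=False)  # -n: check only, modifies nothing
    # only after you've read the dry-run output:
    # subprocess.run(["xfs_repair", DEV], check=True)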

>Pretty sensitive data, especially political, survival, warfare, etc.
Doesn't sound that sensitive (at least where I live it'd generally be completely legal), but whatever, run encryption on top if you prefer.

Data safety practices are the same as always. Redundancy on the live array (RAID1/5/6, RAIDZ with redundancy, erasure coded data pool on scale-out cloud type storage... whatever you prefer) + then a backup that maybe isn't online at all times.
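
For the ZFS flavour of that, the live-array half is a couple of commands. Sketch under assumptions: the pool name "tank", dataset "tank/hoard" and the device paths are all made up, and the offline backup pool is only hinted at in the comments.

    import subprocess

    disks = ["/dev/sda", "/dev/sdb", "/dev/sdc",
             "/dev/sdd", "/dev/sde", "/dev/sdf"]  # placeholder device names

    # live array: 6-wide raidz2, survives any two drives dying
    subprocess.run(["zpool", "create", "tank", "raidz2", *disks], check=True)
    subprocess.run(["zfs", "create", "tank/hoard"], check=True)

    # the backup stays a separate pool on drives you only attach for the backup
    # run, then export and put back on the shelf, e.g.:
    # subprocess.run(["zpool", "export", "cold"], check=True)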

>XFS is probably the filesystem with the best tools to unfuck something that has been fucked.
>Granted, BTRFS and ZFS make detecting and maybe unfucking bit errors easier. But XFS unfucks the filesystem structure best of all.

BTRFS and ZFS prevent any fucking before you can blink.
XFS is the old-fashioned way: I can't actually tell unless metadata is corrupt, but I'm 70% sure something is broken. Time for an uncertain rollback.

ZFS has some of the easiest error detection that points out if any outside force has modified disk contents.
You can just clone the file from your backup over top of whatever files it says are permanently damaged, if any get this bad to begin with, and back up again.
I much prefer XFS on my ZVOLs.
I hadn't known true speed until I gave that a benchmark. Sweet Christ.
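
That whole loop is scriptable, too. Very rough sketch: pool name and backup mount are placeholders, the status parsing is naive, and zpool wait only exists on newer OpenZFS, so eyeball the output before trusting any of it:

    import os, shutil, subprocess

    POOL = "tank"                # placeholder pool name
    BACKUP_ROOT = "/mnt/backup"  # placeholder path to the backup copy

    # the scrub runs in the background; wait for it before reading the error list
    subprocess.run(["zpool", "scrub", POOL], check=True)
    subprocess.run(["zpool", "wait", "-t", "scrub", POOL], check=False)  # newer OpenZFS only

    status = subprocess.run(["zpool", "status", "-v", POOL],
                            capture_output=True, text=True, check=True).stdout

    # naive parse: permanently damaged files show up as absolute paths in -v output
    damaged = [line.strip() for line in status.splitlines()
               if line.strip().startswith("/")]

    for path in damaged:
        src = os.path.join(BACKUP_ROOT, path.lstrip("/"))
        if os.path.exists(src):
            shutil.copy2(src, path)  # clone the good copy back over the damaged file
            print("restored", path)
        else:
            print("no backup copy for", path)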

> XFS is the old-fashioned way
Well, it kinda is. Cloud filesystems (Ceph and friends) tend to be what's used if you need to maintain a lot of storage.
> I can't actually tell unless metadata is corrupt but I'm 70% sure that something is broken
It's like 99.999% with the checksums that are default.

The issue isn't the metadata breaking, it's that it doesn't do data checksums itself. Granted, you can basically do these with the tools of your choice anyhow, but it's one less feature in the filesystem.

> ZFS has some of the easiest error detection that points out if any outside force has modified disk contents.
Sure, it's pretty good, but in reality you also have various downsides.

> I hadn't known true speed until I gave that a benchmark. Sweet Christ.
That one... uh, ZFS isn't too great at this at least on Linux and BSD; IDK about the Solaris successors.

It's not horrible, but the performance is way off an mdadm array or the theoretical drive speeds or anything else, even at an array size of 4 drives, never mind 20. It's not that nobody can work with this, but it's not making me (or pretty much all large-scale data hosters) very happy.

Yes, capped from Skype, Snapchat, private chatrooms, etc.

Hard disks aren't backup. Tape or finalized archival burned media (best quality option, makes cheap duplicates) are better.

I turned 30 this year. Feels crappy, doesn't it?

>makes cheap duplicates
Archival-quality burned media aren't that cheap per unit of storage, and they're not cheap in terms of time either.

Sure, if you have 100GB or less it might not be too much work to back that up on an M-Disc Blu-ray for $30 or so.

But backing up even just that one 10TB drive to these? Hell, you're turning into a human disc changer for 100 discs, and then you have to stash them away in some logical order.
And then you learn that it's actually pretty damn painful to work out that the next version of the data only changed discs 45, 60 and 64, but you need to somehow manage that too.

Hard disks are superior backup media at volume. Yes, you need to have them in an array and replace them when they die, but it's super simple and time-efficient to use them, whether it's doing the first backup, subsequent incremental or differential backups, retrieval, and so on.
They definitely are good backup media.
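
And "incremental" really is as dumb as "only copy what changed". Sketch with made-up source/destination paths; rsync or zfs send/receive do the same job better, this is just to show there's no magic:

    import os, shutil

    SRC = "/data"             # made-up source directory
    DST = "/mnt/backupdrive"  # made-up mount point of the backup HDD

    copied = 0
    for dirpath, _, files in os.walk(SRC):
        for name in files:
            src = os.path.join(dirpath, name)
            rel = os.path.relpath(src, SRC)
            dst = os.path.join(DST, rel)
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            # incremental: only copy files that are new or look changed
            if (not os.path.exists(dst)
                    or os.path.getmtime(src) > os.path.getmtime(dst)
                    or os.path.getsize(src) != os.path.getsize(dst)):
                shutil.copy2(src, dst)
                copied += 1
    print(copied, "files copied this run")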

3/troll

get better bait

Feels even worse to lose all my home made porn :/
I enjoyed nostalgia fapping

>That one... uh, ZFS isn't too great at this at least on Linux and BSD; IDK about the Solaris successors.
ZFS was never about speed. That said, it's pretty good as the FS for a fileserver if you pair it with an SSD write cache and a bunch of RAM for read cache.

Kek, I lost mine too about a year ago. Probably for the best since the only homemade porn I made was with my first gf when I was 15/she was 16, so I guess it was technically cp. Still, those were nice memories.

I'm kicking my ass over it because I bought the drive I was gonna clone it all to, just never had time because I was too tired from work.
I had at least 2 months to get it done, but I didn't. I knew it was failing and on my 30th birthday it just died.

I might try heating it up or something later and hope it works again. It did that click and beep thing before but came back after a few reboots.

if it was a multi-platter drive, that was it.

earlier, data recovery specialists can get the drive in a clean room and make an attempt to salvage what's left.

but platters are very precisely aligned. the data is read in parallel across all the heads along the cylinder (think of it as sectors stacked on top of each other going from one platter to the next). if it's misaligned, you'll never get it realigned. they have tools for taking out platters and placing them back into other drives without changing the alignment of the platters.

There's some basic tuning you can do to XFS and ZVOLs (rough commands sketched after this post).

Turn off native XFS checksumming and opt for the ZFS approach.
(Perhaps disable XFS write barriers too? Will ZFS CoW protect the underlying FS?)
XFS uses CRC, so the closest ZFS equivalent in terms of fast but acceptable integrity checking is fletcher. Or edonr if you want something marginally better than SHA speeds.
If nothing else: SHA512 on 64-bit hardware; it runs faster than the default SHA256.

Enable LZ4 compression unless the following apply:

Using a small block size on the ZVOL for faster random I/O: the compression ratio will take a big hit, since compression operates on those blocks.

Using any form of encrypted data: just burning CPU cycles. Set it to ZLE if the use case involves discard flags or the volume is kept sparse.
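
Roughly what that looks like in commands. Treat it as a sketch: the pool/zvol name is made up, and check what your ZFS build actually supports before copying any of it (sha512/edonr need a reasonably recent OpenZFS, and mkfs.xfs may warn about or refuse crc=0 on newer xfsprogs):

    import subprocess

    ZVOL = "tank/xfsvol"  # made-up pool/zvol name

    # sparse 200G zvol, small block size for random I/O,
    # ZFS-side checksumming + compression instead of XFS's own CRCs
    subprocess.run(["zfs", "create", "-s", "-V", "200G",
                    "-o", "volblocksize=8K",
                    "-o", "checksum=sha512",   # or fletcher4 / edonr, as above
                    "-o", "compression=lz4",   # zle instead for encrypted/sparse data
                    ZVOL], check=True)

    # format the zvol with XFS metadata CRCs off, since ZFS already checksums below it
    subprocess.run(["mkfs.xfs", "-m", "crc=0", "/dev/zvol/" + ZVOL], check=True)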

>Hard disks aren't backup.
Hard disks are excellent backups, especially for home use. They are cheap, readily available, reusable, require no special equipment, and last a long time. The thing that kills drives isn't time, it's power cycles and online time, so they will last a long time on the shelf. Any concern about long-term failure can be overcome with duplicates for less than the methods below cost.

>Tape
Tape has high storage density, which is why enterprises use it, but few other benefits. The media and drives are expensive and not easy for the average person to get. The lack of random read is a huge downside if you are doing anything other than a full restore. The tape is also sensitive to environmental conditions, which is why tapes are stored in climate-controlled vaults. That may or may not be a problem depending on where you live. If you are planning on long-term backup, you may need to buy a second tape drive and put that in storage too, since there are many different tape formats and you'll need the same kind of drive to read the data back.

>finalized archival burned media
Not many people have these drives anymore, so that is an issue, and they'll be rarer to find in the future. It is expensive to keep buying blanks. The discs can warp over long-term storage, so that is a concern. Swapping discs and labeling them all is a huge pain if you need to back up a lot of data. And the dyes in the writable discs go bad after a number of years.

how much homemade porn do you have, and can you describe the scenes in detail?

>technically cp
Not technically. Actually CP. As in a felony with several years in jail and registering as a sex offender.

One would think an individual couldn't make a conversation about homemade youth porn weirder, but sir you hit the mark.

Right, but it's kind of silly when you consider that I'm one of the actors, I was also underage, and my girlfriend was even older than me at the time. Either way, it's gone now.

> ZFS was never about speed.
It's really bad though as you get to more drives. Here's a blog post by some guy who obviously likes ZFS:
calomel.org/zfs_raid_speed_capacity.html

24 drives and RW performance is 190MB/s in raidz2 (raid6). Maybe less than you'd conservatively expect from 4 really weak 5200RPM SATA pre-NCQ consumer drives (yea, that's a stretch, they will have NCQ).

But that guy is using 7200RPM SAS disks that can do 150MB/s read or write each.

Even the 400MB/s write-only figure he gets with these 24 drives is less than 3 of those drives could manage at their raw speed (quick math below). [Of course you'd not expect to hit that exactly, but I feel 4-5 drives alone should be able to break 400MB/s no problem.]
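
Quick math on that, assuming the ~150MB/s per drive he quotes:

    per_drive = 150           # MB/s sequential, per the article
    drives = 24
    parity = 2                # raidz2

    raw_data_speed = per_drive * (drives - parity)  # ~3300 MB/s across the data disks
    measured_write = 400                            # MB/s from his benchmark

    print("drive-equivalents actually delivered:", round(measured_write / per_drive, 1))      # ~2.7
    print("fraction of raw data-disk throughput:", round(measured_write / raw_data_speed, 2)) # ~0.12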

> if you pair it with a SSD write cache and a bunch of RAM for read cache.
Unfortunately, the above situation is basically what you get with a fair bit of hardware. I'm sure throwing some more SSD caches or more RAM at the problem will make speeds a bit better, but it still is simply bad, plus then you paid even more for non-drive hardware.

I hope they make it run better sooner rather than later. For now, I feel giving a warning about its performance and scaling issues is pretty damn fair.

Oh god.
This is why vdevs are recommended to stay below 12 disks...

Even that... you can see the 6-drive RAIDZ2 (again, the RAID6 equivalent), right?

71MB/s rw performance is also one number you probably don't want to see with 6 nice SAS drives. This isn't on some potato J1900 with 2GB of RAM total and a choking third-rate SATA controller either.

>not having daily incremental backups

Hand in your Sup Forums card

The rw figure is somewhat misleading. They mention turning off drive caching, which is really going to mess with the performance. Plus these are effectively random read/write tests, not sequential like the per-drive figure being mentioned. 71MB/s on random read/write is actually pretty good, considering single-drive random performance is usually under 10MB/s.

There is no way he'll get the same controller firmware doing this.