How do you back up your data? What tools do you use and why? Looking to improve my strategy of copying everything to an external hd every week.

rsync

no backups because it costs too much.

i write all the source code down longhand in a notepad. I then put this notepad in a steel box, bury it about 12 feet deep in the local park, record the GPS coordinates, and have them tattooed on me alongside the date of the backup. I try to do this about once a month.

Wtf is going on? I got 4TB HDDs 5 years ago for less than they cost now, and 8-16TB is stupid expensive

Besides, Netflix and YouTube make all my old rips superfluous

I copy everything to an external hard drive once every two months or so.

I run a computer that has a RAID 1 array in it, shared over samba, and I throw my backups in there.
It's also got an exposed SSH port to the interwebs so I can back up data to a secure backup location from wherever I have internet access.

It's sweet as fuck m8

You encoded at Youtube tier?

Mainly I only listen to obscure bands and watch weird movies, since pop culture shat itself in the last decade.

I still get the odd flac

I accepted the impermanence of things and sold my tape drive.

I have like three hundred 500GB - 1TB hard drives that I get out of old set-top boxes that people throw away at the recycling center. I just copy my data to them and then store the drives in different parts of my house and my family's houses. I keep some inside old microwaves to protect from EMP. I buried some in my garden, which I will dig up in a few years' time just to see if they survive.

hmm, what do you store on them? or do you do it just for science? all that seems like overkill. i, personally, plan to invest some money in m-discs but i'm not sure if it's a meme or not. i've heard people say that the glue between the layers of the disc would deteriorate faster than the data layer itself, which makes sense. haven't heard any big complaints, though.

I just back up the stuff on my main hard drive; photos, music, documents, etc. I have all these free hard drives lying around so I put them to use. The alternative would be to extract the magnets and then toss them.

>I have like three hundred 500GB - 1TB hard drives that I get out of old set-top boxes that people throw away at the recycling center.
where do you live to get those? i mainly just see dvd-r's and cd-r's that haven't been used.

what are the results of the burying experiment?

I live near an electrical recycling point similar to this picture and I visit it every day. It's always overflowing with random stuff. Usually it's just broken kettles, but as I go every day I often find good stuff. I've found fully working Windows 10 laptops, desktops, TVs, DAB radios, battery chargers, microwaves, scanners, laser printers, etc. Sometimes whole truckloads of server equipment turn up. Almost all the electrical stuff I have I get for free. Anything I don't need I sell on ebay.

Forgot picture

They are still buried in my garden so I don't know yet. I suspect water will leak into the containers and ruin them.

damn i'm envious of you. we don't have those here.

>How do you back up your data?
RAID, NAS, looking into data cloud for the ability to do multi-machine setups with random drive sizes and different redundancy more easily.

> What tools do you use and why?
Bup, but for you the very comparable borg might be great:
borgbackup.readthedocs.io/
github.com/bup/bup

You schedule these with some cron variant or a systemd timer if at all possible; it doesn't take much work.
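If it helps, here's roughly what that looks like as a cron entry. The repo path, source directories, and retention numbers below are made-up examples, not anything from this thread:

```
# /etc/cron.d/borg-nightly -- example schedule, runs at 03:00 as root
0 3 * * * root borg create --compression zstd /mnt/backup/repo::'{hostname}-{now}' /home /etc && borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 /mnt/backup/repo
```

A systemd timer pair works the same way; you just point the service's ExecStart at a script containing those two commands.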

I literally just copy and paste my home folders to an external HDD every couple days and delete the old one. I use the same external disk for my laptop and desktop to do my biweekly backups. There's no real point in making a full system backup since I only use a few basic programs outside of the ones in the default install of Fedora KDE. The programs I do use aren't usually modified in any special way, so I have no need to backup any configurations outside of what's already in the home directory. Even with Firefox I don't keep any history and all of my bookmarks sit in a text file on the desktop, and I only use three extensions that I don't configure at all. I could completely reinstall my OS from a flash drive and copy over my home folder and reinstall a few programs in less than 30 minutes if I had to. Just recently upgraded my main external disk to a 4TB WD My Passport for like $100.

Aside from that, once or twice a month I copy the home folders from the external disk over to my home server using SSH, which is just a Raspberry Pi running a default install of OpenBSD 6.2 with two more of the 4TB My Passports attached via USB. I know it's a pleb-tier setup but it just works for lazy fucks like me. One of the 4TB disks is dedicated to holding laptop backups and the other is for desktop backups. Laptop backups include everything in my home folder, so that also includes the backup zip files from my Nexus 5 running Lineage OS that I can make in TWRP. I usually back up my phone once every 2 months or so because there's nothing important on it besides contacts and message logs.

Let this be a lesson to ricers and special snowflakes as to why it's great to use the ass-ugly defaults when they just work. Winfags should also check out the free version of Macrium Reflect. It's closed source but doesn't cost anything for home users, and it makes it really easy to clone a running Winshit installation to another disk. I personally just use Windows in a VM now.
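For anyone wanting to script the same "copy home folders to the server" step, a minimal rsync sketch; the hostname and destination path here are invented, not the poster's actual setup:

```shell
#!/bin/sh
# Mirror a home directory to a destination; -a keeps permissions and
# timestamps, --delete removes files on the destination that are gone
# from the source, so the copy stays an exact mirror.
backup_home() {
    src=$1
    dest=$2   # a local directory or user@host:/path over SSH
    rsync -a --delete "$src/" "$dest/"
}

# e.g. backup_home "$HOME" pi@backuppi:/backups/desktop-home
```

Trailing slashes matter with rsync: "$src/" copies the contents of the directory rather than the directory itself.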

So, you're a dumpster diver?

>dd if=/dev/zero of=/dev/sda
No.

Maybe?

Yeah even noobs know if=/dev/urandom is better.

I use my backup.sh
Here is the source code:
#!/bin/bash
dd if=/dev/sda of=backup_$(date +%d.%m.%Y).img bs=4M status=progress

Don't you at least lz4 / zstd compress it?

raid0 my main operating system on ssd
btrfs snapshots then snap-sync them to freenas machine with raid10 storage pool and zvols for each machine i back up

Compression is for poorfags who can't even afford a Petabox to store their precious backups

RAID is not backup

3 different mediums in 3 different places is

I don't mind if you throw more storage space at it, but frankly zstd (lower settings) or lz4 are close to no-brainers since they reduce IO, and it's very unlikely that it wouldn't be faster / better with almost any device you could copy from / to.
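To make that concrete, here's a toy run of dd piped through zstd. The scratch file stands in for a real device like /dev/sda (which you'd need root for); the level and thread flags are just reasonable defaults:

```shell
#!/bin/sh
# Create a 16 MB stand-in "disk" (zeros compress extremely well),
# image it through zstd level 3 on all cores, then verify the round trip.
SRC=/tmp/demo_disk.img
OUT=/tmp/demo_disk.img.zst
dd if=/dev/zero of="$SRC" bs=1M count=16 2>/dev/null
dd if="$SRC" bs=4M 2>/dev/null | zstd -3 -T0 -q -f -o "$OUT"
# decompress and compare byte-for-byte against the source
zstd -dc "$OUT" | cmp -s - "$SRC" && echo "verified"
```

The compressed image lands on disk at a fraction of the source size, so the write side does far less IO; lz4 slots into the same pipe if CPU is tighter.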

>RAID is not a backup
never said it was. that's what the snapshots are for. the RAID10 is for hardware failure. I'll admit that the current setup doesn't allow for recovery after a fire. I rebuild my machines from a script and keep the legit important stuff on cloud storage. unfortunately all my movies, videos, and downloaded games would be lost in a fire

> RAID is not backup
Backup or even snapshot on RAID is actually a backup, and a safer one than a single drive backup.

0/8 bait

>RAID10
Waste of space at this point, why not do RAID5/6?

My important data gets synced via Nextcloud to my server. Every night at 3 AM it makes a full incremental system backup using borg. It purges old backups as it goes, but I can always go back like 6 months if I wanted. At the beginning of every month I copy the entire backup repo to an external drive which I keep offsite.

I've got more space than I need locally. Historically raid10 has better performance. I also don't want to rebuild a 5 or 6 array after disk failure, that would take forever

> Historically raid10 has better performance.
The performance difference has been marginal to nonexistent for something like 5-10 years now.

> I also don't want to rebuild a 5 or 6 array after disk failure, that would take forever
No, I think you misunderstand how that works.

You literally just fill up that replacement drive again that you'd also fill up on RAID10. Or maybe two for RAID6, at which point RAID10 would quite probably be dead (it's up to luck which second drive it hits). The other drives stay basically untouched.

>Copying a R/W disk with dd

parity calculations will take longer than copying sectors. there is no scenario where a write on raid10 will be slower than a write on raid 5/6

> parity calculations will take longer than copying sectors
Eh, not really. Look, it can constantly read (from more drives in theory - not that this helps unless the replacement is an SSD whereas the others were HDDs), the CPU can constantly do the parity calculations, and the replacement drive can constantly write. With some buffers/caches that take a not really significant amount of RAM.

Surely there is some time lost here and there unless this is all set to REALTIME DO NOT DO ANYTHING ELSE - type of priority with SW RAID, but the difference isn't usually worth bothering with.

[Also, I think many don't even care to lift the rebuild speed limits mdadm has by default to easily allow normal usage to take place while the array rebuilds.]
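A quick back-of-envelope check on the rebuild-time point. Whatever the RAID level, the rebuild is bounded by sequentially filling the one replacement drive, so the estimate is just capacity divided by write speed (the 12TB size and 180MB/s figure are illustrative, not anyone's actual hardware):

```shell
#!/bin/sh
# rebuild time ~= capacity(TB) * 1e6 MB/TB / write(MB/s) / 3600 s/h
capacity_tb=12
write_mb_s=180
hours=$(awk "BEGIN { printf \"%.1f\", $capacity_tb * 1000000 / $write_mb_s / 3600 }")
echo "~${hours} hours"
```

That works out to roughly 18.5 hours either way; the parity math itself rarely adds meaningful time on a modern CPU.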

>critical data
Encrypt and upload to gmail/gdrive/etc

>non critical media
Don't give a fuck

regardless, my scenario is best suited to raid 10 because I don't care about the size of the storage pool, I care about the performance and recovery from failure. my hardware won't be able to write parity as fast and no software solution can do raid6 writes as fast as raid10.

The scenario where I would use 6 over 10 is a scenario I can easily avoid by buying another two disks.

Stupid question: Why the fuck are new tape drives so costly? Supply and demand?

This is how mine is set up:
1. 5 client PCs get backed up daily and automatically by my server. Clients can be in sleep mode, makes no difference; the server will "wake" them, run the backup, then put them back to "sleep". Backups are retained for 1 year.
2. The server is connected to a UPS. Its data volumes are in RAID 1 and RAID 5 configurations. I use Macrium Reflect to handle backup jobs for the server. Depending on how much new data is written, or once a month, I run a backup of the volume(s). I use Previous Versions/Shadow Copy on the volumes so if a file/folder gets deleted by mistake I can restore it without having to touch my backup(s).
3. NAS 1 (12TB RAID 5) - Full backups of the server's data volumes. Kept shut down when not in use.
4. NAS 2 (4TB RAID 0) - 2nd archive image backup copy of core files. Kept shut down when not in use.
5. 3TB USB external - Server system image/backup of client backups/2nd archive image copy of X-rated and e-book collections. Shut down when not in use.

With Macrium I can restore the server, the client backups + both data volumes if the whole server or a raid array should fail. With the server's client recovery option I can restore any client with a CD/USB key. I can recover any file/folder from any client "joined" to the server from any other working client.

> my hardware won't be able to write parity as fast
It takes some amazingly bad CPU to actually not be able to do these parity calculations. I'd say something like an early atom still didn't hit significant issues on a 6 drive array (with the required PCI card to even run those drives fast).
And even then you probably need to be silly and use kernels older than 3.12 (4-5 years old) so you still have the single-threaded-only raid5/6; I think I wouldn't have had issues with a newer kernel.

But I guess if you hit some IOPS bottleneck or such... maybe?

> The scenario where I would use 6 over 10 is a scenario I can easily avoid by buying another two disks.
More wasted drives, lower safety margin (2 drive failure isn't guaranteed to retain your data on RAID10). But if you say it's worth it, suit yourself.

I'm going to guess they could supply a lot more if needed.

Small-ish market though, and enterprise pricing works for them.

I've recently done some research on the subject. I'm Brazilian; the lowest cost per TB here is for 25GB Blu-rays and 4TB HDDs.

Any tips on how to schedule auto backups of VMs from ESXi (free license)? HPE VM explorer is nice but you have to buy it to schedule backups, and ovftool is good and fast but is all done manually...

Most of my data is torrents, I'll just redownload them if anything happens.

How am I supposed to back up 22 TB?

...

> How am I supposed to back up 22 TB?
No problem whatsoever, just synchronize to 2-3 drives on a NAS with any synchronization tool you like.

Or since it's less important data and you may not need to manage deletions or such directly, dump it drive by drive on 6-16TB drives.

About the tools, you could use rsync or syncthing, because it's immediately convenient to actually verify everything was (re)copied correctly with these. They're definitely not the only options around, though.
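Verification doesn't need a special tool either; a crude but effective checksum pass looks something like this (the directory names in the example are invented):

```shell
#!/bin/sh
# Hash every file under the source tree, then re-check that same list
# against the destination tree; any mismatch or missing file fails.
verify_copy() {
    src=$1
    dst=$2
    sums=$(mktemp)
    (cd "$src" && find . -type f -exec sha256sum {} +) > "$sums"
    (cd "$dst" && sha256sum --quiet -c "$sums")
}

# e.g. verify_copy /mnt/media /mnt/nas/media && echo "copy checks out"
```

rsync's --checksum flag does a similar end-to-end comparison in one step, at the cost of reading both sides in full.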


Either way: You're not anywhere near the limit of what's feasible at home with that, you could run 120 drives in only two 4u rackmounts if you wanted.

Would probably cost you like 2 years of your available money, but it doesn't require rebuilding rooms, installing new power lines, or getting amazing skills or super special hardware (beyond the 4u cases with cards).

Macrium Reflect, as I mentioned before, is a nice, easy to use but effective program to handle both system recovery and data backups. It does backups/drive clones/recovery and that's it. It supports GPT and Dynamic/RAID volumes. The volume size it can handle is practically unlimited (there is a limit, but trust me, even with 6x 12TB drives in a RAID 5 there is plenty of room left over). You can clone GPT/MBR drives/partitions, and you can even copy a dynamic volume to a new/bigger one; just create the "target" volume via Disk Manager first. The supported OSes you can restore range from XP to Windows 10, plus all variants of Server 2003 - 2016.

NTFS gets a bad rap for some reason, dunno why. As long as you build your server properly, it and the OS will run smooth as butter for years. Large corporations use Windows Server and they've got far more critical data stored on it than the home user has. Build the server with the understanding that once the configuration is dialed in and everything works as it should (user access rights/permissions, remote access, etc.), you basically leave it the fuck alone and let it work. The only time you RDP into it is to run a backup job, apply a patch, or check event logs/SMART data. You don't install various shit on it unless it's core to what you need (media streaming/antivirus, for example).

Use a UPS: it'll save your server from spike/surge damage, eliminate a source of data corruption (sudden power loss can/will cause data to get fucked up), and eliminate RAID rebuilds due to power failure. They don't cost a lot either, and the battery lasts 4-6 years and can be replaced fairly cheap. Keep the server cool, drive temps at 60C or under. Drives will fail due to excessive heat, and long-term exposure shortens their lifespan. The more drives you add, the more heat gets generated, so more cooling is needed. The only times it should restart/shut down are for upgrades, OS patches, or when an HDD in a RAID needs to be replaced.

>Macrium Reflect, as I mentioned before, is a nice easy to use but effective program
It's okay as a wrapper around Windows' VSS that shields you from a bunch of the Windows insanity that may get in the way of making sane backups on that silly OS.

OTOH it's a pretty clumsy solution even in the necessarily paid version when you do differential copies of bulk data. Rsync or syncthing are IMO better there, even on Windows.

> NTFS get's a bad rap for some reason, dunno why.
Maybe because there are a lot of better filesystems.

> Large corporations use Windows Server and they've got far more critical data stored on them than what the home user has.
Large companies that do this have to actually fall back on their (pretty annoying) backups more often than you'd think.
And most large corporations have by now switched to virtualizing these "servers", with an EXTREME push to move those VMs onto server clouds, which run Linux rather than Windows.

Doesn't mean MS server is instantly dead the moment you run it by any means, but it's too stupid in many, many ways regardless, and the holdouts that keep using Windows Server primarily and unvirtualized usually do so due to, uh, IT staff "limitations".

>How do you back up your data?
I use 2 MicroSD cards and 4 USB sticks and store 1 MicroSD and 2 USB sticks home and the other set at another location. I sync those at home, go swap and sync the other set once a month.

The cards are encrypted with LUKS and I do the backup with rsync.

I also copy a current snapshot to an external 4 TB HDD.

This obviously limits the amount of data I can back up (to 64 GB right now) but it's enough for the data I've generated (= code, documents and spreadsheets I've made, pictures I've taken and so on).

I don't back up music and movies and things like that (which are on a RAID). It would be sad to lose the collection but those things are mostly "recoverable" by downloading it again (unlike a photo which can't just be re-downloaded).

The 4TB disk doesn't give you enough space to retain a history of backups?

It does, each snapshot is just 64 GB. So I have a history but it's just on that one HDD. This is a weakness that I should probably fix (like get another and store one at another location).

> It does, each snapshot is just 64 GB
Ah, it's a fixed size and you do full snapshots.

> This is a weakness that I should probably fix (like get another and store one at another location).
Probably. If you do, I'd suggest connecting that next drive to your LAN so you can automate that backup.

Bought an external 1TB drive. I regularly save all the data that is worth something.

I already backed up the most important things. I just refresh the backups basically.

What do you do when, say, the torrent of "show x" or "film c" dies/no longer exists? Just say fuck it? I keep two backup copies of all my films/shows/music/e-books and porn for just that reason, and because it'd be a time-consuming pain in the ass to re-rip a lot of it from DVD/CD. I've got this one show where a torrent is the only way to get it (no disc release), so if I had no backups and my server shat itself and the torrent was dead, that show would be lost forever.