Backup on Linux

Is there any better backup solution on desktop Linux* than pic related?

Versioning is a must (no rsync). Also, ideally it would be able to do a full system restore from bare metal without needing to reinstall the OS and set up all my shit again.


* GNU/ intentionally omitted, i'm not autistic.

Linux is a kernel

it's now the common name for a full operating system including the kernel

GNU can finish Hurd if it's so anal about this fact

take your autism elsewhere

You don't need that, user. That's why nobody developed such a useless tool.

This. If you brick your system your /home is safe (you did make it a separate partition, didn't you?)

>you did make it a separate partition, didn't you?
It changes nothing. If my disk ain't broken, my home is there, no need for a dedicated partition. If my disk is broken, a home partition will change nothing.

By the way, I back up some of my files in $HOME with rsync (for binary files) and git (for text files) in case of a broken disk.
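
Roughly like this, if anyone cares (paths are just examples):

rsync -av --delete ~/Pictures ~/Music /mnt/external/home-mirror/   # big binary stuff mirrored to an external disk
cd ~/notes && git add -A && git commit -m "backup $(date -I)" && git push   # text files live in a git repo with a remote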

>That's why nobody developed such a useless tool.
Do you mean full volume recovery?

That's exactly what Veeam (pic related) does, you boot it from a recovery disk and it restores the entire system from the last snapshot. No reinstall or anything.

It does the same thing on Windows. I haven't seen anything else with similar functionality.

You can recover files directly off the snapshot using file-recovery mode. Full system restore mode is optional.

I've never had to try recovering on Linux, but it's saved my ass on Windows before. HDD failure - put in new drive, recover.

If I bricked the system, just a file-level restore of the contents of /home would do. It lets you do that.

>with rsync (for binary files
I must add, you don't need versioning on binary files.

>That's exactly what Veeam (pic related) does
I was talking about the free world, I don't give a fuck about proprietary things.

zbackup is interesting.

>proprietary things
the kernel module is free, idgaf about running a tiny little proprietary agent on my computer

github.com/veeam/veeamsnap

bup and/or borgbackup are usually the best tools if you ask me

There are few situations where I'd use something else.

Typical slave's reaction. Learn to use a computer and you'll understand why the free world has not developed such a useless backup tool. rsync and git can do everything you need.

>Learn to use a computer
i'm not setting up my desktop PC from scratch after a system failure

I don't feel it's as good as bup or borgbackup.

zbackup IMO is like the hacked together crazy variant of bup/borg

freetard detected

Oh sorry. Please don't use GNU/Linux; it's not designed for people like you.

What do Google and Facebook use for backups?

Nice autistic disclaimer, autist

>it's not designed for people like you.
it's not designed for pathetic NEETs either, but you don't seem to have a problem using it.

ZFS-zRAID and rsync
Sometimes tape

openSUSE has Snapper snapshots by default on btrfs
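
If you've never touched it, the basic snapper workflow is roughly this (the 'root' config is the openSUSE default; the snapshot number is made up):

snapper -c root create --description "before upgrade"   # take a manual snapshot
snapper -c root list                                     # show existing snapshots
snapper -c root undochange 42..0                         # roll files back to the state of snapshot 42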

>btrfs
I think this is the best free alternative. I'm on ext4 atm, but I'll look into btrfs.

I'm sure you have a Windows installed on your computer, like dual boot or something like that.

Stop replying, NEET.

Stop being ignorant and learn to use a computer. rsync and git are all you need.

I'm pretty sure it's cloud storage systems. Data may be loaded "onto" them with scripts and rsync.

If you want to run cloud storage at home, there are a bunch of options. MooseFS, Ceph, OpenStack [the respective components], SeaweedFS, ...

> ZFS-zRAID
I really doubt it. This shit is terrible once you've got any serious amount of drives or data.

BUT PEOPLE TOLD ME ZFS IS 120% SAFE

just copy the home directory, chad.

I like duplicity
Backup to cheap S3 buckets from any provider, incremental or full, with multiple chains and individual file restore.
I don't really care about bare-metal restore, I can just retrieve the full FS from another system and write it to the HDD.
Oh, and GPG encryption.
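
For reference, a run looks roughly like this (bucket and paths are placeholders, and the exact S3 URL scheme depends on your duplicity/boto version):

export PASSPHRASE="..."   # GPG passphrase for the backup, placeholder
duplicity --full-if-older-than 1M /home/me s3://my-backup-bucket/laptop          # incremental, forces a fresh full chain monthly
duplicity restore --file-to-restore Documents/notes.txt s3://my-backup-bucket/laptop /tmp/notes.txt   # single file restore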

I've already explained to the tard above that I don't want to have to fiddle around reinstalling the OS and all the applications on my machine

Arch (lol) has a pretty good wiki section on using rsync to achieve a full system backup:
wiki.archlinux.org/index.php/Rsync#Full_system_backup
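
From memory it boils down to one command along these lines (destination path is an example; check the wiki for the authoritative exclude list):

rsync -aAXHv --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} / /mnt/backup/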

But it's not incremental.

We use Veeam at work (t. Windows sysadmin) and it's fantastic. I realise freetards don't like it though.

make copies of the partitions then.

It's pretty safe from all I can tell.

It's just fucking slow AND very hardware hungry (yes, the hardware needs to be good to amazing, but it's still pretty damn fucking slow even then).

And it doesn't manage all that well. With a cloud filesystem like Ceph, you can add a new drive or a new server - it'll just get used, and you don't even have to really worry whether the server has 10x10TB drives or 8x14TB or 20x4TB. Just make a guess whether the bandwidth and IO performance of the board suffice and use it. Fix it later if it bottlenecks - maybe swap some drives for SSDs or whatever, just so the cloud gets another fast drive on whatever mainboard / network IO that server has free.

ZFS? No chance. Absolute nightmare to manage at scale. And slow as fuck at any array size if you use RAIDZ - which is about the only reason you'd use it. AFAIK nobody doing big storage does much with ZFS. XFS is far more popular, and much of that popularity is because you can DISABLE features that your hardware or your cloud software already handles anyway.

If you want a full system backup bit for bit just use dd
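
e.g. (device and image path are examples; the image must not live on the disk you're copying):

dd if=/dev/sda of=/mnt/external/sda.img bs=4M status=progress conv=fsync   # image the whole disk
dd if=/mnt/external/sda.img of=/dev/sda bs=4M status=progress              # restore is the same thing in reverse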

>And it doesn't manage all that well.
That's why LVM exists.

Duplicity and tools using duplicity would have been my suggestion before the bup/borg post appeared.

They're still good, but the deduplication features and some other details of bup/borg make those a bit better. The CLI is a bit nicer, too.

They're not really any more complex to use either.

their own in-house developed SANs

in terms of an individual server - they don't care if it breaks - they'd just rip it out and shred the HDD, put in a new one imaged with their customised Linux distro

on boot a script configures it for its role. FB's www servers, for instance, p2p the latest codebase with BitTorrent

timeshift for backing up OS state (it's like system restore points for Linux)
rsync -av /home/username/ /media/bla/ for backing up the home dir

I think this is what i'm looking for, timeshift and rsync should do essentially what veeam does. thanks.

> But it's not incremental.

The fact that some wiki doesn't show you how to do it doesn't mean it's not possible. rsync can do incremental itself with --link-dest against the previous run (see the sketch below), and if you add some excludes (like skipping images or video that never change) it's quick too.
I don't know about veeam, we use rsync at home and duply at work. Both work like a charm.
Can you backup to arbitrary locations with veeam or only to their services?
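
On the incremental point, the usual --link-dest trick: unchanged files get hardlinked against the previous run, so every dated directory looks like a full backup but only costs the changed files. Rough sketch (paths are examples):

TODAY=$(date -I)                        # today's date, ISO format
rsync -aAXH --delete --link-dest=/mnt/backup/latest /home/ /mnt/backup/"$TODAY"/
ln -sfn /mnt/backup/"$TODAY" /mnt/backup/latest    # point 'latest' at the new run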

It wasn't exactly born to compensate for ZFS' performance and management shortcomings.

And either way, LVM usually won't be a key component of some big company's huge data storage cloud, either. Maybe they use it somewhere where it's convenient for some reason, but it's not the thing that runs the show. The data cloud software and whatever interfaces with it does.

timeshift is nice because it has a gooey and it's easy to use
it still uses rsync as a backend

>Can you backup to arbitrary locations with veeam or only to their services?
on the free one you can back up to local disks or remote locations. On Linux, NFS or CIFS/SMB.

I use NFS with my FreeNAS box.

With the paid one you back everything up to a single repository, which is what we use at work with our 300-ish VMs.

Gooeys are rather stupid. You really just want to stick this into a persistent systemd timer unit or a persistent fcron job or such so it actually gets done (the two files are sketched at the end of this post).

Most decent tools make it as easy as "backuptool backup -options [sources] destination". Optionally you might be able to run a verify to check everything is still alright on the other end.

And if something happens, you do "backuptool restore -options destination restorepoint".

I noticed most tools with a GUI just fuck this up on the CLI.
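
For what it's worth, the two files for the systemd route are about this small (unit names, paths and the borg call are just examples; assumes an unencrypted repo at /mnt/backup/repo):

# /etc/systemd/system/backup.service
[Unit]
Description=Nightly home backup

[Service]
Type=oneshot
ExecStart=/usr/bin/borg create /mnt/backup/repo::{hostname}-{now} /home

# /etc/systemd/system/backup.timer
[Unit]
Description=Run backup.service every night

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target

Then systemctl enable --now backup.timer and you're done.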

i don't use it automated, because i use it with an external hard drive, so the gooey is fine for me

Obviously just make a systemd unit that runs this simple command after automounting the drive when you plug it in.

If you want you can also have it unmounted after.

too lazy

All it should take is to have an entry for the drive based on UUID or LABEL (so it's THIS USB device only) and use x-systemd.automount and x-systemd.requires[/before/after]=foo.service in the options (spelled out below).

Or you could use udev or auto-mount or such to the same effect if you swing that way.

Alas, I can't stop you if you want to manually operate that backup UI thing every time you run a backup.
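
To spell out the fstab approach (UUID, filesystem and mount point are made up), plus RequiresMountsFor= in the backup unit so it only runs once the disk is actually there:

# /etc/fstab - dedicated backup disk, only this exact device
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /media/backup  ext4  noauto,nofail,x-systemd.automount,x-systemd.idle-timeout=60  0 2

# in the backup service's [Unit] section:
RequiresMountsFor=/media/backup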

basically why i use veeam, it runs using cron and handles everything including versioning and backup compression

I'm already using vmware workstation and ubuntu so there's nonfree components on my machine anyway :^)

i don't think i'd use it if the kernel driver wasn't free software though.

> it runs using cron and handles everything including versioning and backup compression
Doesn't seem like a noteworthy thing.

A dozen FOSS utilities also can do this.

I set it up in 3 minutes, though.

Although yes, I could have done it with rsync, cron/systemd units, etc. But I don't trust myself enough - it's pretty critical that it just works.

I'll work on something else based on some of the posts here and run it in parallel with veeam just in case.

just use clonezilla you fucking mong

no incremental backup

all it is is one button to back up, it's no big deal

incremental are for normies
do you even have aspergers let alone autism

Am I the only one who only backs up files on their machines? I have no problem re-installing an OS from scratch every few months or so, all I need access to are my files.

not him, but i have aspergers and I use incremental

reee high functioning faggots get outtt

>Am I the only one who only backs up files on their machines
OP here
No I think it's the other way around, I'm the only one that backs up the entire system with snapshots.

It takes me too long to reinstall them all.

no autism, aspergers, adhd, etc.
mums an antivaxxer, so i didn't get any of that shit.

git
same and im just using an upgraded version of my 2008 install

how old is your mum?

59, i think?

im 26

What made your mother choose to not vaccinate and become an "antivaxxer"?

i was kidding

but seriously no mental disorders

thank you to all non-freetard posters, i will look into a scripted rsync solution to closely replicate what i have now

> I set it up in 3 minutes, though.
You can do a bup or borg backup in 3 minutes too (rough sketch at the end of this post).

And it only takes about 20 minutes to learn how you create the two files for a systemd timed service or fcron or such and verify that it ran okay. [5 minutes or less if you already knew how to write a service unit for systemd.]

Doesn't only apply to THIS situation either - it's equally applicable to running a mdadm scrub or fstrim or the daily porn pre-loading or whatever. Basic sysadmin stuff, one of the easy methods to make a computer do things you'd otherwise have to repetitively monkey operate.

Waiting for the device to settle and mount (which presumably requires an fstab entry anyhow unless you automount all devices), starting the GUI, clicking the right buttons, checking a bunch of times whether it finished, closing the GUI and unmounting.

Do this 5-10 times or so and automating it would have been faster.
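
The rough sketch I mentioned, to back up the "3 minutes" claim (repo path is an example; repokey encryption will prompt you for a passphrase):

borg init --encryption=repokey /mnt/backup/repo                                   # once
borg create --stats --compression lz4 /mnt/backup/repo::{hostname}-{now} /home    # each run
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 /mnt/backup/repo       # thin out old archives
borg list /mnt/backup/repo                                                        # see what you have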

thanks - borg looks good.

I made a wrapper so I can launch borgbackup on all my devices with cron. The cron script contains the archive password, the backup partition UUID and the archive name, and calls the wrapper script with those values. Do you want to take a look at it? Backups with borg usually take about 2-5 minutes, it's really comfy

I don't think I need to learn your desired setup and passwords.

Just let it run 2-3 times and then test a simulated recovery (or just a few files) and look at whether it's all okay.

The only suggestion I have for desktop use is to consider anacron / cronie or fcron or something that will run the backup job even if the machine was turned off at the exact moment indicated.

I won't give you my password, setup or anything, I'm just offering the script I made to make it easier to combine borg with cron so you don't have to come up with something yourself. It's made so you can put in different values across your machines while keeping the script in sync in case you change something. It has worked reliably for me for the last two years.

OP here, would be appreciated.

Ah, a different user just generally offering his script.

I personally am good; I have my absolutely basic systemd service + matching timer file; setting Persistent=true in the timer is about the maximum sophistication involved; it just does the obvious to run the commands.

> so you can put in different values across your machines while keeping the script in sync
I see. Not something I need, but it reminds me that the borg docs have this bit giving sample ansible / salt scripts in case you use these on multiple machines:
borgbackup.readthedocs.io/en/stable/deployment/central-backup-server.html

just use Windows

>just use Windows
I use it all day at work lol, I'm an SOE admin working on an enterprise Win 10 rollout and I'm just about fed up with Windows

my home PC was going to be used to gain some more Linux experience, may eventually sit for the Red Hat certs

0bin.net/paste/wzZjdhUmd3E7QhGG#kExWVnfHAVF-u3sd8gT54xWVcoIiubp64xI/vcEXev

You just have to replace the "USER" in the last line with your username, it's to get inotify to display a notification on the user's desktop. You can omit the last two lines if you don't want it.
The file that you throw in /etc/cron.hourly or wherever contains two lines:
#!/bin/bash
/path/to/the/wrapperscript.sh "Backupname" "target uuid" "password"

You should make that file readable by root only to protect the password if you're paranoid. If you've got any questions, post them.
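
For anyone who can't open the paste, a wrapper along those lines might look roughly like this - the mount point, the notify-send call, the repo layout and the borg options below are all guesses, not the poster's actual script:

#!/bin/bash
# usage: wrapperscript.sh "Backupname" "target uuid" "password"  (rough sketch only)
NAME="$1"
UUID="$2"
export BORG_PASSPHRASE="$3"

MOUNTPOINT=/mnt/borg-target                    # assumed mount point
mkdir -p "$MOUNTPOINT"
mount UUID="$UUID" "$MOUNTPOINT" || exit 1     # find the backup partition by its UUID

borg create --compression lz4 "$MOUNTPOINT/repo::${NAME}-$(date -I)" /home /etc   # assumed source paths
RET=$?

umount "$MOUNTPOINT"

# the last two lines: desktop notification for the logged-in user (replace USER with your username)
sudo -u USER DISPLAY=:0 notify-send "Borg backup ${NAME}" "finished with exit code ${RET}"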

Yeah, I like to do things myself and have 3 machines to take care of, so this was a good solution for me

Thanks user - this looks good.