Where were you when a GitLab sysadmin irreparably deleted 300GB of production data?

I was at home drinking brain juice.

Other urls found in this thread:

youtube.com/watch?v=nc0hPGerSd4
fossil-scm.org
chiselapp.com/
en.wikipedia.org/wiki/Vocative_case
grammar-monster.com/glossary/vocative_case.htm

Title is a lie; there is a 6h-old backup.

Their status google doc says "out of 5 backup/replication techniques deployed none are working reliably or set up in the first place"

poping

>github promising software start-up
>meritocracy philosophy
>sjws say meritocracy is ableist, racist, sexist, etc.
>github backpedals, says it's not a meritocracy
>hires said sjws
the rest is history

>SaaSS
not even once

i warned you about remote services bro
i TOLD you

That's what you get for choosing a shitty repo host. I'd be pissed if this happened to my repos on GitHub and code/issues/PRs/comments/settings/userdata were destroyed. GitHub=SJW is just a meme, and unless you're an edgy Sup Forumstard, it's the best service of its kind.

>do backup
>think you are on the safe side
>backups were only tested once at best and don't work when you need them
Story of my life.

>gitlab
>github

what the fuck, i always thought lab was maintained by hub. well shit whatever

A few years ago, wasn't there another startup that lost all its data and shut down?

Looks like GitLab is dead. RIP.

PS: I bet their sysadmin was some dumb SJW merchant who secretly worked for GitHub.

Based El Reg.

Uh, did you read the article?

The six-hour-old snapshot was a fluke manual LVM snapshot run; normally they're 24 hours apart. The SQL dumps weren't running at all because of a misconfiguration, producing tiny files and failing silently. Webhooks will need to be rolled back to the 24-hour backup since they were removed in the 6-hour one by a synchronization process (meaning at best 18 hours of updates will have no webhooks, but possibly the full 24 at worst). Lastly, the replication of their backups from Microsoft's Azure to Amazon's S3, for what I assume is vendor-agnostic redundancy, has sent no files at all ("the bucket is empty").

It's like they thought out everything but never made sure any of it was working.
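
For what it's worth, even a dumb size check bolted onto the dump job would have caught the silently-failing SQL dumps. Rough sketch only, untested, with made-up paths and alert address (and assuming GNU stat plus a working local mail command):

#!/bin/sh
# Hypothetical locations; gitlabhq_production is just the usual GitLab DB name.
DUMP=/var/opt/backups/db-$(date +%F).sql.gz
pg_dump gitlabhq_production | gzip > "$DUMP"
# A real production dump should never be a few kB, so scream if it is.
if [ "$(stat -c %s "$DUMP")" -lt 1048576 ]; then
    echo "suspiciously small dump: $DUMP" | mail -s "BACKUP BROKEN" ops@example.com
    exit 1
fi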

actually makes me feel pretty bad for them, can't remember the last time i test-restored one of my production backups

>be Sup Forumstard
>don't know what you're talking about
>enter random thread looking to politically agitate or blame things on SJWs somehow
>get btfo for not knowing what you're talking about
>WELL WHATEVER MAN
>the rest is history

Sup Forumstards are just right wing SJWs at this point, looking for something to be outraged about

why would you test restore anything? just fucking check the backup! as if you wouldn't sound alarms when your db dump has the same size as your micro penis. nobody ever bothered to check anything, that's what happened there, just typical incompetent people at work

the amount of planning in this case is 0, just piled-up "solutions". think about it: you would want to make sure you've got enough storage for your backups, monitor them so you don't run out of space, and optimise so there's no overflow. if you really cared, that is.

>set up production server
>co-worker from the dino years insists on helping
>only thing he can do is backups
>test backups one week later
>all daily backups are 1kb

it happens.

This is why you should always make your backups as basic as possible.

Mine are just tar archives. The only way they can fail is if you can't untar them for some reason.
I suppose I should still run tests on them, seeing as that can happen. Maybe I will build that into my backup script: run a test on each archive as it completes, to check that it is a valid tar archive and has files in it.
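
Something like this would probably do it, as a rough sketch (paths made up, not battle-tested): list the archive back right after creating it and bail loudly if it's unreadable or empty.

#!/bin/sh
ARCHIVE=/backups/projects-$(date +%F).tar.gz
tar -czf "$ARCHIVE" /home/me/projects
# Listing the archive checks it's valid tar; counting the entries checks it isn't empty.
COUNT=$(tar -tzf "$ARCHIVE" | wc -l)
if [ "$COUNT" -eq 0 ]; then
    echo "empty or unreadable archive: $ARCHIVE" >&2
    exit 1
fi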

nyeah, i'd test restore, who knows what kind of shit could go wrong. i do a full backup daily and logs every 30 mins, so if everything goes to shit i'll only lose 30 mins of data

>wrong directory deleted
>backup failed
>picture shows 3 women and a minority
how rude

>why would you test restore anything
Because you're not backing up anything if you haven't tested a restore.

The fact that the backups were tiny just adds another layer of "wtf are you guys doing" to this story. I make tar.xz snapshots of my code folder (which contains subfolders that are hosted on gitlab, lol). The first thing I do afterwards is run ls -lh on the resulting file and compare the size to previous snapshots. A day of work should increase the compressed size by a few kB. The git repos get pushed to gitlab. The snapshots get copied to another partition, into a folder that needs root access to write. Then they are copied to a USB stick.

I just do this as a hobby and nobody but me actually cares about this code. I've rm -rf'd my code before and it was easier to grab a tar.xz from the read-only folder than to log in and get the URL to clone off Gitlab. Surprise: it worked. Can I get a job at Gitlab? Apparently my just-for-shits-and-giggles backup is more reliable.

kys shit for brains.

GitLab had diversity training. They were hiring people based on the color of their skin and not on merit.

Now, GitLab = DEAD

$20 million in VC money flushed down the tubes.

Another SJW company down in flames.

PS: just fucking kill yourself or go back to your safe space. This is Sup Forums, motherfucker.

Thanks for proving my point, idiot

>everyone I don't like is automatically a Sup Forums representative

samefag

Thanks for proving mine, you worthless piece of shit. Just wait till we free the shit out of your shithole soon.

Because you always test your backups.
You have to know that you will get data out of them. 1TB of junk won't help you.

>kys
>plebbit spacing
you have to go back

Automate it into a weekly task with a report emailed (or whatever you prefer)
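
Cron already handles the emailing (it mails whatever the job prints to MAILTO), so a weekly test restore can be a one-line crontab entry. Sketch only; the script path and address are made up:

MAILTO=ops@example.com
# Every Sunday at 04:00: restore last night's dump into a scratch DB and print a report.
0 4 * * 0 /usr/local/bin/test-restore.sh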

go home reddit

> why would you test restore anything?
Because that's the only way to actually ensure files aren't damaged in any way.
Yes, it's expensive, and you can try listing its contents, but it's the only true way to verify it.

This. People are always forgetting the most important thing about backups.

Backups are worthless
Restores are priceless

Follow this rule and you'll do just fine.

So is this the end for them?

> Managing backups in enterprise
>30+ servers written to tape every night
>Full weekly backup runs nearly 30 hours
>4TB on an autoloader
>Diligently reading backup logs every day
>Making really sure every relevant server is listed with appropriate backup sizes
>One day try to restore
>Every tape is empty
>Call Symantec
>Yeah that's a known bug, we'll have a patch in a few weeks. In the meantime, use robocopy or something
>mfw

Like ruling over a country, only for everyone to leave.
Like spending 10+ hours a day at a recycling facility sorting through trash, into countless, highly specific piles, only to be told one day that for the last month they've just been taking all your piles and throwing them all into landfill.

Dare I ask, did you need to restore or was it just a test?

This is why you should just host a private gitlab on your own server.

>Call Symantec
>Yeah that's a known bug, we'll have a patch in a few weeks. In the meantime, use robocopy or something

closed sores: not even once

good thing git is distributed.
Worst case you cannot pull/push while they sort out their bullshit, but it's not like you're likely to lose data because of this.
If you were using it for anything serious you would just host it on your own server anyway.
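
Pushing the same repo to a second remote costs basically nothing, e.g. (made-up host and path, assuming a bare repo already exists there):

git remote add backup ssh://you@yourbox/srv/git/project.git
git push backup --all
git push backup --tags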

Fucking AUR can't download anything from them.

youtube.com/watch?v=nc0hPGerSd4
LUL

Isn't gitlab self hosted or am I wrong?

Holy shit, go outside you actual autist. People here don't all have restricted interests like you.

It is free and open source, so people have been self-hosting it for a while. The superior option imo. If you don't want to self-host, just use a more established host like bitbucket.

It was "needed", but it was some spreadsheet the receptionist only gave a mild shit about.

'no'

stay safe bitbucket don't fall for the SJWs

Make this less of a meme and come up with a decent Windows backup strategy without closed source.

Of course, anything is better than backup exec.

They already admit issues and merge requests are gone. It's possibly even worse.

I was at home pushing commits to my self hosted git repo

When I've heard people talk about gitlab it has always been in the context of self-hosting it. Didn't even know they had a hosted service.

>not running your own version control
serves you right

Neither did I. I fail to see what it has over the more popular git+(dev stuff) hosts other than the ability to self-host.

isn't the CI stuff available on the public hosted version?

Not currently (kek)

Reminder to use Fossil instead of git.

fossil-scm.org


Host your code on Chisel.

chiselapp.com/

>fossil-scm.org
looks bloated desu

stupid nigger

>be sjwcucktard
>don't know what you're talking about
>enter random thread looking to politically agitate or blame things on Sup Forums somehow
>get btfo for not knowing what you're talking about
>MUH TRIGGERS
>the rest is history

This is more of a Linux issue than anything else.

Nothing of this sort happens when you're running real production systems like FreeBSD, with ZFS, and incremental 15-min snapshots for databases. Much easier to transfer snapshots between hosts as well.
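
For reference, the snapshot-and-ship part really is about this short on ZFS. Just a sketch with made-up pool, dataset and host names:

zfs snapshot tank/pgdata@2017-02-01_1215
# Incremental send of everything since the previous snapshot to a standby box.
zfs send -i tank/pgdata@2017-02-01_1200 tank/pgdata@2017-02-01_1215 | ssh standby zfs receive -F tank/pgdata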

>he called something bloated when comparing it to git
Only on Sup Forums, everyone. Only on Sup Forums.

What does fossil do that git doesn't?

hey look everyone, an actual autistic

Has a sane, non-cryptic command-line interface.

For example, which makes more sense:

git reset --hard HEAD^

Or:

fossil revert

?

literally this

Not him, but I never touch Reddit and I've been shitposting here for the last 8 years spacing similarly to that.
Makes your posts easier to read if you group and space it right and avoid walls of text.

This looks like git+all metadata/documentation/workflow tools normally offered at most git hosting servers, but now integrated into the versioning.
Looks interesting, but I don't really see the point in including all 'meta' stuff into the same versioning as the regular stuff, as you probably want a single documentation endpoint for all versions, not one per branch. But perhaps there are features fixing that, I don't know.
Also,
>no rebase
Wew, enjoy your clusterfuck history branches.

A few days ago my gitlab account was added to some weird empty group, www185096com143, along with hundreds of other users.
At first I thought it was a phishing mail.
Then I logged directly into gitlab and found it to be true.
Probably a sign of things to come.

If you understand how it actually works, the first.

Git only needs rebase because it has nearly no reporting tools to begin with and has to make up for it by pretending the development was linear.

Hey look everyone, an actual jobless NEET who's never even set foot in a data center.

this

Your mom's basement doesn't count as a datacenter, Timmy.

You can't expect startups to know and/or use techniques and conventions of tested businesses. I mean you can't blame them though, it's not like people have been doing this forever with Solaris or anything, oh wait.

_donald

I once built a super ghetto solution with a Task Scheduler task that waited for a specific USB drive to be plugged in, plus SyncToy. There was no network there and the user used floppies before, so I guess it's good that it's so physical.

But yeah, doing that more enterprisey on Windows is fucking garbage. Especially when users are too retarded to store their shit on their home drive.

Well, Linux has btrfs, which does snapshots, but it loses data anyway, so: valid point, I think.

Well, it actually is kinda hard. Some devs I know couldn't manage their own shit even if their lives depended on it.

just a theory.....
what if the sysadmin didn't "accidentally" nuke the live files but did it on purpose? is it really that far-fetched to believe?
gitlab wanted to create their own infrastructure, even got $20 million to do it. cloud storage providers get pissed, find said sysadmin, pay him a huge chunk of cash to sabotage gitlab and make a statement: "without cloud backups/services, you would end up like gitlab"
scare tactic = success for future clients

Nah, it takes a special kind of stupid to fail this hard.

>LE BLAME Sup Forums KEKEKEKEK

holy shit

>>>/reddit/
Also,
>Implying Sup Forums would be allowed on reddit

...

All the butthurt Sup Forumsfags ITT

>No free private repos
>Rainbow mohican unicorn service down logo
>Brimming with shit tier JavaScript framework bollocks
>Brimming with morons uploading their ssh private keys/entire shell histfiles/plaintext passwords
>Brimming with sjws clogging up development with stupid issues and pull requests about "muh master/slave is offensive", "muh gender neutral pronouns", "muh this language is sexist" and then claiming they "are prolific contributors to open source projects"

RealProgrammers® use bitbucket, the git service for real men

>he doesn't remember Webm for github employees
wew

Database backups don't work that way. You don't just dump & tar data from a multiserver cluster ring with data replication and be done with it. Also data doesn't magically reimport itself correctly from a backup.

kinda feels good, desu

That's stupid. But git is a cool guy and someone still has a copy, right? Not irreparable.

Insane how Linus always saves us.

Yeah, not that exactly, but it increasingly does something comparable again, if you can into clouds.

Which I guess gitlab can't.

en.wikipedia.org/wiki/Vocative_case

> PLEBBIT SPACING

WHAT THE MEME FUCKING FUCKERY ARE YOU EVEN SHITPOSTING ABOUT YOU AUTISTIC FUCK?!

grammar-monster.com/glossary/vocative_case.htm

You'd probably say that about that rm too? I say this, or something like it, will happen eventually. Momentary lapses and all.

Azure is apparently trash, is what I see in this.

...

I fucking love BitBucket.
> teacher in my first year in uni tells us about BitBucket
> start using it
> use it for years
> paid $0
> never had any data loss, reliability or speed problems

FUCKING BASED

>go back to your safe space
Sup Forums is basically an alt-right safe space.

>Momentary lapses and all.
Everyone makes mistakes, but come on.
This is the epitome of a mistake.

I only maintain and run a small community with a few services YET I have file backup, SQL backup, server backup, etc. All working.

I mean it's a single shell command to test an archive. And fuck, of course I checked it a few times already.
How can they even hire that guy? Fuck.