Tfw you drop every table in the database by mistake

>tfw you drop every table in the database by mistake

>tfw you cloned the DB and ran your tests on that

>tfw you fuck up the database
>tfw you realise you forgot to disable the copy-on-write functionality of the server's filesystem when making the database
Two wrongs make a right.

The day my job laid me off... I went back to my computer and deleted the hard-worked database that only I had access to edit and that everyone was using every day.... Made sure I shredded it with some shitty shredding program so they can't recover it.

Then I left. Collected unemployment checks for like 6 months before I decided to get another job.

Post more neko

I want to fuck Coconut till her anus and cooch are bleeding.

...

I bet you make regular offline backups too, you responsible human being.

>reasons you should never do queries on production.

>being that retarded

kys

DATABASE DATABASE

fucking weeaboo

How did you not get sued?

Nice.

Learn to use transactions, you won't make the same mistake again.
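Rough sketch of what that looks like with the mysql client — table name `orders` and db `mydb` are made up. Caveat: DDL like DROP TABLE auto-commits in MySQL, so a transaction only saves you from bad DML, not from dropping tables:

```shell
# Sketch, assuming a running MySQL server and an InnoDB table.
# Wrap the scary statement in a transaction and inspect before committing.
mysql mydb <<'SQL'
START TRANSACTION;
DELETE FROM orders WHERE created < '2015-01-01';
-- ROW_COUNT() reports rows affected by the previous statement;
-- if the number looks wrong, ROLLBACK instead of COMMIT.
SELECT ROW_COUNT();
ROLLBACK;
SQL
```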

sauce

Nekopara

thx cunt

Be careful, of course.
>copy live to test
>work on test
>whoops, broke it
>np, copy live to test
>fine

If you're incautious, this is how it works:
>copy live to test
>work on test
>whoops, broke it
>np, copy test to live

>ohgodnoNONO
>well, shit
>pray to the gods of backup

Of course, that runs like absolute shit. To avoid that, on btrfs:
# chattr +C /var/lib/mysql
Remember it only takes effect for files created (or copied) in there after the flag is set, not for ones already there.
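Since the flag only applies to new files, the full dance looks roughly like this — a sketch assuming btrfs and a MySQL service you can stop:

```shell
# Sketch: disable CoW for an existing MySQL data dir on btrfs.
systemctl stop mysql
mv /var/lib/mysql /var/lib/mysql.old
mkdir /var/lib/mysql
chattr +C /var/lib/mysql                    # new files here inherit NOCOW
cp -a /var/lib/mysql.old/. /var/lib/mysql/  # re-copied files pick up the flag
systemctl start mysql
```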

I think you're fucked on ZFS, or rather you can't disable CoW at all - you'd have to give the DB its own dataset and tune it?
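For what it's worth, the usual ZFS tuning sketch — you can't turn CoW off there, but you can put the DB on its own dataset (`tank/mysql` is a placeholder) and match InnoDB's 16 KiB page size:

```shell
# Sketch: tune a dedicated ZFS dataset for an InnoDB data directory.
zfs create tank/mysql
zfs set recordsize=16K tank/mysql        # match InnoDB's page size
zfs set logbias=throughput tank/mysql    # favor throughput over sync latency
zfs set primarycache=metadata tank/mysql # InnoDB has its own buffer pool
```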

Binary logs and/or incremental/daily backups (adjust to whichever works best for your use case) are the way to go there. Also, transactional rollback. CoW is not your friend for things like VM images and DBs - databases have their own magic for this that works better.
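A sketch of point-in-time recovery with those binlogs — assumes `log_bin` is enabled, and the names (`mydb`, `binlog.000042`, the timestamp) are placeholders:

```shell
# Sketch: full dump plus binlog replay up to just before the bad statement.
mysqldump --single-transaction --flush-logs mydb > full.sql
# ...disaster strikes some time later...
mysql mydb < full.sql
mysqlbinlog --stop-datetime="2016-04-01 09:59:00" binlog.000042 | mysql mydb
```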

Until it doesn't. Back up.

>going to remove all those pesky emacs autosave files with the ~ on the end
>rm *~
>hit enter too early
>there is no ~
>rm *
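A safer pattern for the same cleanup, no new tools needed: make find do the matching, and dry-run it first so a premature enter only prints a list:

```shell
# Safer than `rm *~`: find only matches what you told it to,
# and you can preview the hit list before deleting anything.
find . -maxdepth 1 -name '*~' -print     # dry run: list candidates first
find . -maxdepth 1 -name '*~' -delete    # then actually delete them
```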

next time google extundelete

>tfw your database is too big to duplicate
>there is no nonprod or DR
>all changes are in production
>millions of dollars of data are mined from it

we live in exciting times

fuck this art style is so bad

>he doesn't like asanagi

why not turn them off or set them to save in a temp directory you fucking noob?
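For the record, a sketch of both options — assumes your config lives in `~/.emacs`; `make-backup-files` and `backup-directory-alist` are the relevant variables:

```shell
# Sketch: keep emacs backup files out of your working directories.
cat >> ~/.emacs <<'EOF'
;; Either turn backup files off entirely:
;; (setq make-backup-files nil)
;; ...or stash them all in one temp directory instead of alongside the file:
(setq backup-directory-alist '(("." . "/tmp/emacs-backups")))
EOF
```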

If it's too big to be copied, you're using the wrong hardware. Try again, user.

300TB Oracle DB on a 10G network to SAN storage.

We hit the SAN port something like 500,000 times a second. If other people are on the same SAN storage as us they experience performance degradation.

I want to believe

did they not even backup?

>Of course, that runs like absolute shit. To avoid that, on btrfs
I know *how* to avoid it, but I had forgotten to avoid it.

not having a backup san and adapter so you can backup your bullshit data

Welcome to corporate America, where we specialize in minimum viable product.

300tb of unbackupable data

wtf is it? something you can regenerate? or sales and customer shit?

why is 300tb on one server? why isn't it spread amongst many servers with at least 2 copies of each record?

is it a company I would have heard of? some gov shit?

shaking muh hes fambimaly