I fell for the meme
What do I have to do to ensure longevity when using linux?
I know about leaving 10 gigs of unpartitioned space, enabling TRIM weekly, noatime in fstab and using hdparm to reset the cells.
What else? Is using a swapfile better than having a dedicated partition?
I fell for the meme
did you read a guide on how to set up an SSD from 2009? Google really needs to stop ranking that shit so high.
Install windows 10
Lincucks is not designed around ssd
>Lincucks is not designed around ssd
why hasn't linux found out how SSDs work yet? I thought there were potentially millions of people working diligently to create the best OS known to man? Are they just not smart enough?
>Look mommy, I posted a lie!!!
Seriously OP, unless you're using something old like XP, you really don't need to do anything. Unless you're one of those types who wants to use the same drive for a decade, read/write wear won't be an issue before you replace that drive anyway. We've come a long way from the times when the average maximum write cycle counts were in the single-digit thousands...
This is my first SSD so I really have no idea what I'm doing. I'm a pretty experienced lincuck though so at least I'm not totally clueless.
Does the kernel take care of all this stuff automagically now?
I know that TRIM usually isn't on by default but you can use a systemd service to enable it rather than typing in the command every week or setting up a cron job
Yes it is. Linux 3.2 and up have support for various classes of SSDs. With systemd and smartctl you can do everything you want on an SSD.
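For reference, weekly TRIM under systemd is just a timer plus a oneshot service. Many distros already ship these with util-linux as fstrim.timer (check with systemctl list-unit-files | grep fstrim), so this sketch is only needed if yours doesn't:

```ini
# /etc/systemd/system/fstrim.service
[Unit]
Description=Discard unused blocks on all mounted filesystems

[Service]
Type=oneshot
# path may be /sbin/fstrim on some distros
ExecStart=/usr/sbin/fstrim -av

# /etc/systemd/system/fstrim.timer
[Unit]
Description=Run fstrim weekly

[Timer]
OnCalendar=weekly
Persistent=true

[Install]
WantedBy=timers.target
```

Then systemctl enable --now fstrim.timer and forget about it.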
Of course you're a desktop user. Enjoy Windows 10 you outdated basement troll.
Anyone who says Linux is for cucks is a huge fucking loser, not even kidding, huge fucking jokers who go to community college and are wastes of oxygen
>Windows
>literally sending your data to India for storage and analysis
>not being cucked
Hey at least you get to play games lol you ugly unemployed piece of shit, while my app deployed on Linux affords me cars and houses and gets me laid
>What do I have to do to ensure longevity when using linux?
Nothing, assuming you got an SSD that isn't shit
I've been using my SSD 24/7 for the past 2 years and I still have 99% of its write cycles left
You can literally write petabytes to them before they'll die (Samsung 850 Pro)
btrfs with ssd and trim options is the solution
Well this is a "lightly used" intel 530 240gb.
I know I know
>buying used
It was 50 bucks shipped and seemed like a better deal than buying a new 120gb crucial or something.
And yeah I'd like it to live as long as possible
It supports SSDs fine, stop responding to trolls
But Linux distros are all OS designed to be left alone in a dark cramped, dusty, hot and noisy place until it dies and is replaced.
Swap file sucks, use a partition, and put it on a slow fucking part of the disk because you never want swap
A swapfile that's equivalent to your RAM size and setting swappiness to 1 is the best way to use swap on an SSD or even a HDD.
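As a sketch of that setup (the swapfile path and the RAM-sized rule are just this anon's convention; the root-only commands are printed rather than executed so you can review them first):

```shell
#!/bin/sh
# Size a swapfile to match installed RAM, then show the commands
# you'd run as root to create and enable it. Linux-only (/proc/meminfo).
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
mem_mib=$((mem_kb / 1024))
echo "swapfile size: ${mem_mib} MiB"
echo "fallocate -l ${mem_mib}M /swapfile"
echo "chmod 600 /swapfile"
echo "mkswap /swapfile && swapon /swapfile"
echo "echo '/swapfile none swap defaults 0 0' >> /etc/fstab"
echo "sysctl vm.swappiness=1   # persist it in /etc/sysctl.d/ if you like"
```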
Having no swap is retarded given how garbage modern browsers are. You can have firecuck freeze your entire system just because you accidentally opened a JS heavy site and it decides that you need 5 gigs of RAM just to watch gay porn
>You can have firecuck freeze your entire system just because you accidentally opened a JS heavy site and it decides that you need 5 gigs of RAM just to watch gay porn
You seem to be describing the situation with swap enabled
With swap disabled, firefox would just get killed when it exceeds its ram limit
What? No it would take at least 10 or 20 minutes of complete system freeze before that happens.
Try it yourself
>swapoff -a
and have your browser max your ram.
>enabling TRIM weekly
It doesn't stay on in loonix?
Continuous TRIM is bad for drives
>Anyone who says Linux is for cucks is a huge fucking loser, not even kidding, huge fucking jokers who go to community college and are wastes of oxygen
>I'm an unemployed basement dweller, the post
[citation needed]
Last I checked (and admittedly this was years ago), it was just a trash collection process and doesn't cost any write lifetime.
You worry too much. How many people have you read about (since you obviously don't know anyone outside your basement) that has actually had an SSD fail due to write wear?
All the failures you have ever read about are probably spontaneous failures (controller dying generally), much like a disk drive spontaneously dying.
I just want this thing to live as long as possible
I'm fully occupied with school and work; as such I really don't have time to buy new drives, wait for them to come in and reinstall my OS and data.
Hahaha fuck you family for real you're just a fucking no penis fat titted beta male whose only consolation afforded in life is the ability to play vidya and send analytics to bill gates
Disgusting piece of shit, good luck ever getting a job not knowing anything about Linux and thinking it's all about installing gnome instead of kde you retarded feeble prolapsed asshole
>I know about leaving 10 gigs of unpartitioned space
What.
>implying reliability is bad
I'm positive you're confusing things
...
Currently allocating 161773 MB
Currently allocating 161774 MB
Currently allocating 161775 MB
Currently allocating 161776 MB
Currently allocating 161777 MB
Currently allocating 161778 MB
Currently allocating 161779 MB
zsh: killed ./oom
./oom 1.79s user 2.52s system 98% cpu 4.359 total
$ cat oom.c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MEGABYTE (1024*1024)

int main(void)
{
    void *myblock = NULL;
    int count = 0;

    while (1)
    {
        myblock = malloc(MEGABYTE);
        if (!myblock) break;
        /* touch every page so the kernel actually commits the memory;
           with overcommit, an untouched malloc() almost never fails */
        memset(myblock, 0, MEGABYTE);
        printf("Currently allocating %d MB\n", ++count);
    }
    exit(0);
}
With swap on it would first stall my system for 10 minutes while churning through swap
That's the exact opposite of what happens to me.
The HDD on my dankpad is dying (hence the SSD) and as such if swap is being used my system slows down to a crawl.
So naturally I tried turning swap off, and whenever palemoon would fuck my RAM usage up my entire system would freeze until I force it to shut down.
I don't use furry shit browsers so I can't reproduce your results
Maybe it's not actually running out of memory but just thrashing its internal cache
Same happened with firecuck
OP here
So far I know I should:
>use a swapfile instead of a partition and reduce the swappiness to 1
>add noatime to the fstab
>disable caching in the browser
>use ext4 as a filesystem (I already do that anyway)
What else should I be doing?
Also how do I wipe this thing when it comes in?
Reset everything with hdparm?
This is an intel 530 240gb if it matters.
firefuck and furryshit are the same browser
>What do I have to do to ensure longevity when using linux?
You don't really need to take any active measures or specialised setup anymore.
>leaving 10 gigs of unpartitioned space
Unless you plan on keeping the filesystem 100% full a lot of the time, don't bother.
>enabling TRIM weekly
Haha, what?
>noatime in fstab
Access times aren't going to make a huge difference, but if you're of the view that every little bit helps, go for it.
>using hdparm to reset the cells.
Isn't that only when you want to securely erase the disk? For day-to-day use don't bother, securely erasing all the time will use up write cycles for no particular practical gain.
>Is using a swapfile better than having a dedicated partition?
In terms of SSD longevity, not really.
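On the noatime point above: it's one extra mount option per line in /etc/fstab (device and mount point below are placeholders). Worth knowing that relatime has been the kernel default since 2.6.30, which already eliminates most atime writes, so the gain is small:

```
# /etc/fstab
/dev/sda1  /  ext4  defaults,noatime  0  1
```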
>Access times aren't going to make a huge difference, but if you're of the view that every little bit helps, go for it.
Isn't the problem write amplification?
What I meant by using hdparm to reset the cells is wiping it when it comes in
This is a used disk
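For the record, the hdparm flags for an ATA secure erase look like the sketch below; /dev/sdX is a deliberate placeholder, the drive must not be reported as "frozen" (a suspend/resume cycle usually unfreezes it), and the erase destroys everything on the disk. The script bails out unless you point it at a real block device:

```shell
#!/bin/sh
# Hedged sketch of an ATA secure erase with hdparm.
DEV=/dev/sdX   # placeholder on purpose -- set this to the real device
if [ ! -b "$DEV" ]; then
    echo "refusing: $DEV is not a block device (set DEV first)" >&2
    exit 0     # exit cleanly so the sketch is safe to run as-is
fi
hdparm -I "$DEV" | grep -i frozen                      # must say "not frozen"
hdparm --user-master u --security-set-pass p "$DEV"    # set a throwaway password
hdparm --user-master u --security-erase p "$DEV"       # the actual wipe
```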
Every mkfs program worth its own shit in weight will issue a full device TRIM to clean up the partition upon filesystem creation
>meme
Stop with this fucking shit you cunt.
everything is a maymay if you spread your asscheeks wide enough
>literally sending your data to India
[Citation needed]
Data Microsoft collects doesn't leave the country it's collected in 99% of the time.
>while my app deployed on Linux affords me cars and houses and gets me laid
What kind of teenage shit did I just read?
>Isn't the problem write amplification?
Yes, and still not all that big a deal. But it's a trivially easy measure with no real downsides, so I'm not advising against it, just saying it's not an absolute necessity.
If you're going to be repartitioning and formatting it anyway (which you will be, unless you're buying it from another Linux user who's using the same filesystem you want to), not much point.
>If you're going to be repartitioning and formatting it anyway (which you will be, unless you're buying it from another Linux user who's using the same filesystem you want to), not much point.
I've read performance will decrease over time until you reset the cells.
Is that a meme or what?
Absolutely nothing.
They cannot be killed unless you dd it hard for months.
Performance will decrease if you run out of untrimmed cells
trimming the entire disc undoes it just as well as resetting would
Gotcha
Ok so here's my to do list when it comes in. Tell me if I'm being retarded pls
>use smartctl to check health.
>use hdparm to reset cells
>Format and make my /, /var, and /home partitions, all in ext4, and add noatime to fstab after installation
>Leave 10 gigs unpartitioned in case the drive fills up
>install distro
>make swapfile & set swappiness to 1
>enable weekly trim with systemd
>disable browser caching
Anything else? I want this drive to last at least 6 years. I just have no idea if this is an unreasonable expectation.
The drive I have in this x201 has been going since 2005 and it's just now starting to have performance issues.
>Format and make my /, /var, and /home paritions, all in ext4 and adding noatime to fstab after installation
Why separate partitions if they're all on the same drive? Why ext4 specifically, as opposed to XFS or BTRFS?
>Leave 10 gigs unpartitioned in case the drive fills up
That is legit retarded.
why ext4?
>Leave 10 gigs unpartitioned in case the drive fills up
the only thing this will do is make the drive fill up even faster, because now you have 10 GB of unusable wasted space
>make swapfile & set swappiness to 1
I honestly wouldn't bother with a swap file in 2016 unless you cheaped out and got less than 16GiB for some reason
>disable browser caching
Pointless. All this will do is make your browser slower
Why? Not planning on upgrading your computer for 100 years?
separate /var because some package managers can be retarded
and separate /home because having your home and / on the same partition is pretty dumb
Does drive performance plummet when the drive is full? leaving 10 gigs is recommended no?
ext4 is just the best all around file system and happens to be the best for SSDs
This machine has 4 gigs of RAM and I don't need/plan to upgrade.
Doesn't browser caching cause a ton of unnecessary writes?
Nope. This is a laptop by the way. X201
fsutil behavior query DisableDeleteNotify
post output
The drive didn't come in yet sorry
What is that supposed to show?
>separate /var because some package managers can be retarded
I've never dealt with any so retarded that having /var on the root filesystem fucks things up.
>and separate /home because having your home and / on the same partition is pretty dumb
Not really. Set space quotas if you're worried about user data filling the filesystem. For the most part, splitting a drive into different partitions wastes space for not much practical benefit. Dual booting is about the only good reason for it.
>Does drive performance plummet when the drive is full? leaving 10 gigs is recommended no?
Performance doesn't drop much, but if you're writing to disk a lot when the disk is near full, it doesn't give wear levelling much to work with, so it can use up write cycles on some areas of the disk more than it otherwise would. But this only really becomes a problem if you keep doing that for a fairly long time, so all you really need to do is avoid having less than 10GB of free space as much as practical. Setting aside an area of the disk to never be used is dumb.
oops you are on Linux, I thought you were a windows user. Anyway:
hdparm -I /dev/sda | grep TRIM
Try it and see what the output is.
But seriously, put it in and forget about it. Chances are it will outlive your build.
>not having separate encrypted /var, swap, and /home partitions
Are you even trying to Linux?
>ext4 is just the best all around file system and happens to be the best for SSDs
[citation needed]
How can you possibly manage to stuff so much misinformation into a single post?
>separate /var because some package managers can be retarded
What the fuck? “some” package managers? do you or do you not know what system you are going to be installing? And why should that mean making it separate?
>and separate /home because having your home and / on the same partition is pretty dumb
I think having them on separate partitions is pretty dumb, personally. What exactly are you worried about? Quotas? You can enforce them on higher levels than the partition
>This machine has 4 gigs of RAM and I don't need/plan to upgrade.
There's your problem right there. But have fun swapping I guess
>Doesn't browser caching cause a ton of unnecessary writes?
Yeah no shit, but who cares?
>Performance doesn't drop much, but if you're writing to disk a lot when the disk is near full
More importantly: The 10G of free space won't help him if he manages to fill up the (10G smaller) partition to the limit, because now his filesystem will constantly trigger exponential worst-case relocation algorithms, causing significantly more writes and much worse performance than if the idiot had just left the 10G as part of the filesystem
idiocy can't be cured
>not using zfs, btrfs or another filesystem that supports soft volumes, snapshotting and quotas
What the fuck are you even doing on Sup Forums
/home is on a RAID, swap is taken care of by swapspace. What's the point of /var being its own partition?
You're the one spreading misinformation around here.
Sometimes pacman, apt and dnf can fucking flood your var directory with old packages and other stupid shit. Having it on a separate partition will not only isolate the problem if it happens but will also protect your / partition from having stupid shit on it or filling up.
>OS fucks up somehow
>Lose all my data because I didn't take the two seconds to make a separate /home partition
4 gigs of RAM is plenty.
unnecessary writes will decrease the life span of your SSD.
>Sometimes pacman, apt and dnf can fucking flood your var directory with old packages and other stupid shit.
Dunno about pacman or dnf, but apt has a command for clearing old packages from its archive in /var, which is trivial to set a cron job for if you can't be bothered clearing it whenever you do an update.
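e.g. a crontab fragment (the timing is arbitrary; autoclean only removes package files that can no longer be downloaded, clean wipes the whole archive):

```
# m h dom mon dow  command
0 3 * * 0  apt-get autoclean
```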
They all do. It's not a good idea to constantly delete old packages on a rolling release system though, in case you need to roll something back.
Any space issues will arise during an installation though, so you can't really do anything about it. Keeping /var in a separate partition is just a better idea overall.
>Sometimes pacman, apt and dnf can fucking flood your var directory with old packages and other stupid shit.
[citation needed]
>>Lose all my data because I didn't take the two seconds to make a separate /home partition
Why would filling up your drive cause you to lose all your data? I'm not even sure I want to know at this point.. I'm sure you also don't make readonly snapshots or backups
I believe in redundancy and keep several backups
There's literally no reason not to have a separate /home partition. I'm not just talking about SSDs here.
apt has two versions of the command; one for wiping the archive entirely and the other for only deleting the ones that no longer appear on the repository. I don't use a rolling release distro; do they not keep a version or two back in the repositories as well as the most recent?
I'm still not seeing why it's a "better idea overall", though. Aside from not wiping your apt archive for years, what else could cause any real problems?
I just have a symlink from /var/cache/distfiles to a subvolume in my bulk storage pool. No need to overkill and make all of /var a separate partition..
Besides, you haven't explained how that's “flooding”. Packages just slowly collect over time. It would take maybe a few years for it to fill up - you make it sound like it happens in a split second, when you'd really have noticed it long before it ever became an issue. (You have disk usage monitoring and warning emails set up, right?)
>There's literally no reason not to have a separate /home partition.
How about “not wasting space” for a reason?
Literally the ONLY benefit I can think of for having /home on a separate partition is so you can share it between multiple distros easily, if you're a distrohopper.
It can and has happened user.
Your solution is also a good one. If you have a one machine/drive setup though I would still go with a /var partition. The peace of mind and cleanliness is worth the extra 3 seconds in fdisk.
You have no clue what you're talking about. Just shut up.
>The peace of mind and cleanliness is worth the extra 3 seconds in fdisk.
But it's not *just* a little more time and effort at install time. When you find yourself needing more space in, say, /var to install or do some particular thing than you've got the space for in that partition, what then?
When are you ever going to download more than 10 gigs of packages at once?
Never, that's when
You keep going on and on about how this only takes “3 seconds” but completely neglect the fact that using partitions to poorly simulate disk quotas requires perfect knowledge of exactly how much space you are going to need for each partition, which is impossible to predict
What if 2 years down the line, you decide to install a bitcoin client that dumps all of its state into /var/lib? What if you end up using a database that goes into /var/db?
Doing it at the whole /var level is just retarded, especially since you have important shit like logging stuck in the same easy-to-accidentally-fill filesystem as stuff like caches, packages, databases and shit.
Seriously, this is fucking 2016. Get with the times and stop using outdated filesystems that require you to solve impossible problems at fdisk time; just use something modern like zfs where you can apply soft quotas, fine-grained subdivision of the path, read-only snapshots, usage monitoring, expand your pool later on, etc.
You are just shooting yourself in the foot by thinking you are being smart for artificially limiting yourself with no benefit..
>When are you ever going to download more than 10 gigs of packages at once?
That's a better case for "/var doesn't need to be a separate partition" than it is for "it must be kept separate so it doesn't fill root".
is wording all of this better than I am.
>You have no clue what you're talking about. Just shut up.
Yeah, and you still haven't demonstrated any benefits other than a naive way of protecting yourself against one particular type of software failure with very limited repercussions anyway. (Disk's nearly full? Delete something. Don't want to constantly check? Install monitoring)
If you make a separate /home, / and /var then you have to accurately predict *in advance* how much space each is going to use 5 years down the line - underestimate and you're in for much worse problems than what you're trying to prevent, and since the uncertainty is high the only logical thing to do is to grossly overestimate, which wastes space for no benefit.
Again, this is 2016, use soft quotas or limit the cache size in your package manager or whatever else you're worrying about. Partitions and ext especially are just so inflexible that you're setting yourself up for way worse problems by going down the separate-partition line..
I personally have /home on a separate subvol of the same FS but *only* so I can make read-only snapshots with a different frequency compared to the base system (and also store them separately). But I'm only doing this because it's 2016 and I can use a modern filesystem that's capable of virtual subfilesystems.
If you have a modern SSD, here is what you need to do.
>Put whatever you want on it and don't worry about longevity at all
Done.
>>Windows
>>literally sending your data to India for storage and analysis
>>not being cucked
You do realize that Sup Forums does the same exact thing, right? It's one of the bigger things that kept people on 4+Sup Forums after gg died down.
>You do realize that Sup Forums does the same exact thing, right?
Except Sup Forums only has whatever shit I choose to post here, not fucking everything on my computer.
this
I have 79 P/E cycles recorded on my Samsung 850s after 17k powered on hours (~2 years of 24/7 operation), and 11972896888 LBAs written which translates to about 6.1 TB of data.
So to summarize:
>After 2 years = 6.1 TB written, I have consumed 79 P/E cycles
>The drive is rated for 6000 P/E cycles minimum by the manufacturer
>This means the drive is expected to last until around 2162, by which point I will have written 463 TB of data.
In 2 years of use, I have just barely consumed 1% of the drive's *rated minimum*.
To use up all of the P/E cycles within the drive's 10-year warranty period, you would have to be writing 100-200 GB per day. I want to see anybody on Sup Forums accomplish that.
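The arithmetic holds up, assuming the SMART LBA counter is in 512-byte units (it is on these drives); integer shell math rounds the 6.1 down to 6:

```shell
#!/bin/sh
# Re-derive the numbers quoted above from the raw SMART values.
lbas=11972896888
tb=$((lbas * 512 / 1000000000000))   # bytes written -> decimal TB, ~6
years=$((6000 * 2 / 79))             # 6000 rated cycles at 79 per 2 years
echo "${tb} TB written; roughly ${years} years to exhaust the rated P/E cycles"
```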
It's not as big of a deal as you think.
You're probably going to fry it faster by not using more of the space.
If you're super worried, put swap on a hard drive. I do that because of worry and just keeping space free because my ssd doesn't need to be a swapspace.
This. In all likelihood, the average user - even here - will replace the drive because they want more space way before the disk starts wearing out from too many writes.
I would but an X201 only has space for 1 storage medium.
If the X220 didn't have such an inferior build I would have totally gone with it. Oh well.
Care to explain to me what the difference is between a 30gig / partition and a 20gig / with a 10gig /var partition setup?
The latter is just cleaner user. I don't want pacman or apt touching any of my root directories.
ext4 is still the best filesystem until btrfs can be 100% stable, and even then ext4 still wouldn't be a bad choice.
I've been doing this magic feat of predicting partition sizes in advance for over a decade now with not a single problem.
I do however remember fedora fucking the entire root partition after an upgrade in 2010 when I was in college, luckily because I kept a separate /home I didn't lose any files that I had yet to backup.
I didn't say snapshots and all that aren't great. To each his own.
The "older" way of managing filesystems just happens to be cleaner for most use cases.
>I don't want pacman or apt touching any of my root directories.
I think you might not be too clear on what a package manager actually does, user.
I didn't word my post too well. I don't want it storing anything on my root
I understand my typo. pls don't roast me
>I don't want it storing anything on my root
But why? If it's capable of fucking up epically enough to fill your hard drive by putting too much shit in /var, you shouldn't trust it enough to have it on your system at all. It still has write access to the root filesystem, so if it really fucks up that badly, /var being its own partition won't necessarily save you.
>Care to explain to me what the difference is between a 30gig / partition and a 20gig / with a 10gig /var partition setup?
You can use 25 G for / and 5 G for /var in the former setup.
In the latter setup you couldn't; your / would have run out of space before then, even if the /var space was unused
>I do however remember fedora fucking the entire root partition after an upgrade in 2010 when I was in college, luckily because I kept a separate /home I didn't lose any files that I had yet to backup.
Relying on a partition to protect yourself from an accidental rm -rf or dd seems flaky at best to me.
If you have a real snapshotting/backup system in place, none of this would even matter. Why are you wasting so much energy on sticking to the past?
If you want some paths to be off-limits to your package manager, then you're using the wrong tool for the job. Use SELinux, jails, AppArmor or any other sort of MAC to prevent it from accessing files you don't want it to access if you're this anal about preventing your package manager from accidentally deleting your home dir
Or I can just have a separate /var
>using jails for anything ever
Again, I agree with you that snapshots are great. There's still no reason not to have a separate /home even if all your stuff is backed up.
Having to recopy configs and what not if you change a distro is annoying as fuck, especially if you have a lot of machines.
This machine has had an Arch install running on it for 2 years now, upgrading every week or so. I was using MATE, then Plasma 5 and now GNOME. 2130 packages installed.
Do you know how full my / partition is?
>/dev/sda1 25G 13G 12G 53% /
And that's with a shit ton of crap that needs to be cleaned and NOT having a separate /var
Even fucking slackware won't use up more than 20 gigs on a full install.
My point is you won't run out of space
Can we go back to talking about extending SSD life?
>Can we go back to talking about extending SSD life?
Sure, as long as the OP accepts that separate partitions for things like /var are pointless.
>>/dev/sda1 25G 13G 12G 53% /
Wanna know how full my / is?
/dev/sdh1 2.1Tb 823Gb 1.2Tb 41% /
That's right, this is why I don't need to give a shit about partitions and “oh what will happen if something downloads 3 packages at once?”. I don't have a potato for my /
now I know why you obsess so much about preventing your FS from filling up..
>I know about leaving 10 gigs of unpartitioned space
Is this some hot new meme I don't know about?
>This machine has had an Arch install running on it for 2 years now, upgrading every week or so. I was using MATE, then Plasma 5 and now GNOME. 2130 packages installed.
Okay, now drop some fat binary package into /opt
Like android studio or whatever
Hell, I've got a Raspberry Pi with a 16GB root fs that's been going for years, no separate /var or other shit like that, not even come close to filling root. "But /var might fill up root" is worrying over fucking nothing.
If you know you're going to use big binaries then just size your / accordingly.
A 50 gig root partition will cover 95% of use cases. If you're part of the other 5% just make a 100 gig root partition
I don't understand what the problem is
And when one of your other partitions fills up and there's tens of gigabytes of space doing nothing on root?
>50G root partition
and by your metric, you've just taken 40G of useful space away from your home folder
Ok lets assume this is a 1tb drive
50 for /
10 for /var
The rest goes to /home
In my case with a 240 gig SSD
30 gigs for /
10 for /var
and the rest for home
This gives me a comfy 200 gig home partition for files, my OS has plenty of room to breathe and the package manager has more space than it knows what to do with.
You only need the discard mount flag to enable trim automagically
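That's the discard option in /etc/fstab (placeholder device below), though note the point earlier in the thread: continuous TRIM on every delete can be worse for some drives than a periodic fstrim, so pick one or the other:

```
# /etc/fstab
/dev/sda1  /  ext4  defaults,discard  0  1
```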