Unix philosophy hate thread

"Everything is a file"


That's fine though

That's actually one of the good things about the Unix philosophy. There are other places where the philosophy falls short, not this one.

t. Winlet

Redox (a microkernel Unix-like OS written in Rust) is attempting "Everything is a URL."
doc.redox-os.org/book/design/url_scheme_resource/everything_is_a_url.html

Thoughts on this? Better? Worse?

>your bricks are files
>your partitions are files
>your partitions, after being mounted, are files
>randomness is a file
>secure randomness is a file
>null is a file

That looks like a more modern derivative of the Unix philosophy

>standard input is a file
>standard output is a file
>standard error is a file
>pipes are files
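and that's exactly why a dumb copy loop doesn't care what fd 0 and fd 1 actually are. A minimal C sketch (plain POSIX; this is more or less all of cat):

#include <unistd.h>

int main(void) {
    char buf[4096];
    ssize_t n;
    /* fd 0 and fd 1 may be a terminal, a regular file, or a pipe;
       the same read()/write() calls work on all of them */
    while ((n = read(0, buf, sizeof(buf))) > 0)
        write(1, buf, (size_t)n);
    return 0;
}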

based

Question: What should these things be if not files?

>your keyboard is a file
>your mouse is a file
>your memory is a file

You can get releases here and try em in a VM.
github.com/redox-os/redox/releases
It's obviously very rough, but looks promising.

>all devices naturally supported
>fully interchangeable
>just open and read or write
Terrible design. Using half-assed binary blobs to access your devices and data sources and having to support every single one of them individually is clearly superior technology.

>registers

I fucking hate how it's not the year of the linux desktop

next year maybe

Thanks

>the user has become a file

>/usr

You're gonna need at least a PC and some others, but having general-purpose registers instead of operating on the on-chip memory is fucking retarded. Especially on x86 with its whopping 4 of them.

Things

2 many layers of sarcasm 4 me?

Binary blobs hidden behind a dozen different APIs. Duh.

What kind of things?

Standardizing everything in the operating system to make it all /comfy/ and working together is pretty much the best thing in the Unix philosophy.

The worst is when tar and gzip have to be different programs because compression and archiving are different features and therefore need to be in completely different programs.

But even then it's not really that bad.

Seems better. The biggest issue isn't that, though; it's that both URLs and files have a big namespace problem.

/usr
/home

man gender
>values can be 0 or 1
>can be overloaded in /etc/gender.conf
>If value is below 0 or above 1, unintended side effects can occur.

read from/write to
what's the problem?

Unix System Resources
not user

...

Computer things

That doesn't seem bad. Isn't that what Plan 9 did first though or something similar?

In Plan9 processes are also files.

Can you read them in a reasonable way? Sounds difficult.

Obviously binary executables are files that describe how to build a process, and core dumps are files that describe process state.

But there's so much shit going on in a running process that it doesn't seem like making it a file would be useful. Unless you mean things like /proc/.../maps, /proc/.../mem, etc.

It's more of a way of communicating with the OS to change certain things about the process. If I remember correctly you can change the quantum of a process by writing to its file.
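The read direction is easy to poke at on Linux too. A minimal C sketch (assuming Linux procfs): process state comes out of /proc with nothing but plain file I/O, and Plan 9's ctl files are the same idea in the write direction.

#include <stdio.h>

int main(void) {
    /* on Linux, process state lives under /proc and reads like any file */
    FILE *f = fopen("/proc/self/status", "r");
    char line[256];
    if (!f)
        return 1;
    while (fgets(line, sizeof(line), f))
        fputs(line, stdout);
    fclose(f);
    return 0;
}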

>/
>/bin
>/boot
>/dev
>/etc
> /etc/opt
> /etc/sgml
> /etc/X11
> /etc/xml
>/home
>/lib
>/lib<qual>
>/media
>/mnt
>/opt
>/proc
>/root
>/run
>/sbin
>/srv
>/sys
>/tmp
>/usr
> /usr/bin
> /usr/include
> /usr/lib
> /usr/lib<qual>
> /usr/local
> /usr/sbin
> /usr/share
> /usr/src
> /usr/X11R6
>/var
> /var/cache
> /var/lib
> /var/lock
> /var/log
> /var/mail
> /var/opt
> /var/run
> /var/spool
> /var/spool/mail
> /var/tmp

>clearly the best way to organize files. let's also hardcode it into every program so it can never be changed

More like everything issa negro

what is wrong with this

>clearly the best way to organize files. let's also hardcode it into every program so it can never be changed
that's just what happens with any kind of interface at all

>The worst is when tar and gzip have to be different programs because compression and archiving are different features and therefore need to be in completely different programs.
this pissed me off immensely for years before I finally understood this

Literally nothing wrong with it; what I don't get is the hypocrisy of the same faggots who cry bloody murder when someone mentions OOP's "everything is an object"

some standardization is fine to have. and some software lets you specify folder prefixes if you build it yourself

That's Plan 9, nigga

y tho

"cli is the only acceptable interface"

mips babby, what does this mean?

According to the page, here's how it's different, and this might give some insight into why they did it:

>With "Everything is a file" all sorts of devices, processes, and kernel parameters can be accessed as files in a regular filesystem. This leads to absurd situations like the hard disk containing the root filesystem / contains a folder named dev with device files including sda which contains the root filesystem. Situations like this are missing any logic. Furthermore many file properties don't make sense on these 'special files': What is the size of /dev/null or a configuration option in sysfs?

>In contrast to "Everything is a file", Redox does not enforce a common tree node for all kinds of resources. Instead resources are distinguished by protocol. This way USB devices don't end up in a "filesystem", but a protocol-based scheme like EHCI. Real files are accessible through a scheme called file, which is widely used and specified in RFC 1630 and RFC 1738.

instead of clearing a register by storing 0 in it, clear it by XORing it with itself

AVR does this. the CLR rd instruction generates the same machine code as EOR rd, rd

>instead of clearing a register by storing 0 in it, clear it by XORing it with itself

i guess you don't have to do as many writes? but idk

The only thing is that while a lot of things that shouldn't be files are files, there are a number of things at the system level that are not files, and the whole "everything is a file" idea doesn't translate into application-level abstractions. If you're in a Linux shell you have a filesystem, but you also have variables, array variables, pipes, sockets, exit codes, a whole bunch of concepts which don't live on the filesystem, except maybe you can find sockets somewhere in the file descriptor list.

What's interesting is that, in some ways, PowerShell does this a bit better with its "everything is an object" system. Everything is a first-class data type and commands are a little more flexible in what they accept. For instance:

ls env: - lists all the environment variables, since ls lists any collection, not just files
cd env: - even though you'd never want to do this, your current location becomes the root of the environment-variable drive
ni foo -value bar - even though you'd never want to do this, the same command that creates files and directories creates an environment variable. After all, what's the point of having everything be a file if you can't treat it like one?
cd alias:
ni touch -value ni - same trick again, this time defining touch as an alias for ni

Now I don't really mind the linux approach, but if you're going to call it a "philosophy" you'd better not be half assed about it.

It's a good thing, moron.

but in a 5-stage pipeline it's the same number of stages, right? Do I just need to read the assembly equivalent of SICP?

That's a good thing you idiot.

>The worst is when tar and gzip have to be different programs because compression and archiving are different features and therefore need to be in completely different programs.
That's a good thing I think

I'll bite, I think everything-is-a-url is stupid

In everything-is-a-file there is a common root for everything.
In everything-is-a-url there isn't.
Everything-is-a-file simplifies the kernel interface (or the kernel server/service interface, if we are talking about microkernels), requiring only the most basic/common file-open, file-read, file-write, file-read-seek, file-write-seek, file-close, etc. interface, and as a result simplifies the design of user programs.
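To make that concrete, a minimal C sketch (plain POSIX; /dev/urandom and /proc/uptime are Linux-specific examples, not part of the argument): one helper, three very different resources, zero special cases.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* one open/read/close code path, whether the name points at a regular
   file, a kernel device, or a kernel parameter mounted into the tree */
static void dump16(const char *path) {
    unsigned char buf[16];
    ssize_t n, i;
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return;
    n = read(fd, buf, sizeof(buf));
    close(fd);
    for (i = 0; i < n; i++)
        printf("%02x", buf[i]);
    printf("  <- %s\n", path);
}

int main(void) {
    dump16("/etc/hostname");  /* a regular file */
    dump16("/dev/urandom");   /* the kernel RNG */
    dump16("/proc/uptime");   /* a kernel parameter */
    return 0;
}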

In everything-is-a-url, which is the common ancestor? object://? Please.
What's the commonality between file://, http://, ssh://, etc.? Looks to me like a recipe for a blow-out of servers and copying of boilerplate.
What if I link an http:// resource to a file:// node? Are all the file operations supported over http? What assures that?
Nothing, as far as I can see. It just complicates the interface and makes it easier for bugs to creep in (see Windows).
And what about file overlays? Can I overlay an http over file? An ssh over file? I don't see how. In everything-is-a-file this is trivial.
Everything-is-a-url makes everything unnecessarily complicated and concrete, which is in my opinion a very bad thing for an OS.

>but it's like plan9
I thought Plan 9 supported only the 9P protocol.
So in Plan 9 everything really was a file with a common interface, more Unix than Unix.
Adding http, ssh, pop3, imap, etc. was therefore an exercise in writing a whatever-to-9P translator, after which you use the common 9P interface.

In other words (and no offence to the developers of Redox), I don't get what it's about. It looks like a big oh-it's-in-rust-so-it's-soo-secure meme.
I'd rather use and experiment with Minix and *BSD, which have design goals, than Redox, whose only goal seems to be write-it-in-rust-at-all-costs.

I mean, desktops themselves are kinda going the way of the dinosaur now except in business establishments, so I think we'll just have to make do with Linux being the dominant OS for basically everything else.

Yup, OP be like "HURRR DURRR!!!! I know nothing about Unix, so I'm just going to parrot the only thing I remember from the second session of my A+ class where we talked about different OS families."

nice

i know it's not a unix thing but a posix one, but still:
$((3 + 5))

>Using half-assed Kernel blobs
KEKISIMO

"Just learn the command line"

>I mean, desktops themselves are kinda going the way of the dinosaur now
Meme more.

It's Linux evangelists' fault for insisting that the only people averse to learning Linux are good for nothing lazy brainlets

When in reality people have shit to do besides taking seven hours of their day to learn how to use an operating system and then another seven hours trying to get it to work.

The year will never come until GNU/Linux is more accessible to non-tech people

Exactly. Smartphone and mobile OSes in general have so little functionality that there's no way the desktop is being upended or abandoned anytime soon
There are people who do actual work with their """computers"""

thanks .doc

virgin unix philosophy: everything is a file
chad NT philosophy: everything is an object

Post more assembly memes

NT Philosophy: everything is a registry key

>"Everything is a file"
descriptor. Everything is a file descriptor.

Different concepts.

>everything is a file
>pic related blocks your path

>everything is a binary digit

sockets are files

Everything-is-a-url is literally the opposite of what Plan 9 did. Plan 9 solved the common-root issues with namespaces and their mountpoints, fileservers in userspace, and lexical filenames, while Redox just abandoned the concept of the file hierarchy being the root of everything and added way more interfaces than just file.

tee bee hetch I think computers are going to be in the same place in 20 years as C is now, but for efficiency or something rather than speed.

.tar.gz is comfy
it's easy to compose the two operations, so why should they not be different?

x86 is CISC and has variable instruction widths. mov is larger because it carries an immediate operand.

tar xvf considered harmful?

self-XORing/SUBing is (on CISC like x86) a shorter instruction, doesn't involve shunting an immediate from the instruction stream to the ALU, and is often given special meaning in register schedulers/renamers in OoO cores, where it boils down to a no-op and helps prune dependency chains for other instructions' scheduling.
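For reference, a hedged C sketch (GCC/Clang inline asm, x86/x86-64 only, won't build elsewhere) showing both idioms; the byte encodings in the comments are for eax specifically:

#include <stdio.h>

int main(void) {
    unsigned int a = 42, b = 42;
    /* "mov eax, 0" encodes as b8 00 00 00 00: 5 bytes, because the
       32-bit immediate travels inside the instruction itself */
    __asm__ volatile ("movl $0, %0" : "+r"(a));
    /* "xor eax, eax" encodes as 31 c0: 2 bytes, no immediate; modern
       cores also recognize it as a zeroing idiom and break the
       dependency on the register's old value */
    __asm__ volatile ("xorl %0, %0" : "+r"(b));
    printf("%u %u\n", a, b);  /* prints 0 0 */
    return 0;
}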

Get on my level faggots.

> everything is a file
> network sockets should be files (byte streams)
> wanting to send messages reliably over commonly allowed protocols requires doing in-band message segmentation. every time.

UNIX philosophy permanently infected and fucked the internet, and sane protocols like SCTP will never have a chance.
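What that looks like in practice, as a minimal C sketch: the length-prefix framing everyone reimplements on top of the byte stream. The demo runs over a pipe instead of a real socket, since the fd behaves the same way; write_all/read_all are the little helpers every such codebase grows.

#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

/* a TCP socket (or any pipe-like fd) is just bytes with no message
   boundaries, so framing has to be smuggled in-band, every time */
static int write_all(int fd, const void *buf, size_t len) {
    const char *p = buf;
    while (len > 0) {
        ssize_t n = write(fd, p, len);
        if (n <= 0) return -1;
        p += n; len -= (size_t)n;
    }
    return 0;
}

static int read_all(int fd, void *buf, size_t len) {
    char *p = buf;
    while (len > 0) {
        ssize_t n = read(fd, p, len);
        if (n <= 0) return -1;
        p += n; len -= (size_t)n;
    }
    return 0;
}

int main(void) {
    int fds[2];
    uint32_t hdr, len;
    char msg[64];
    if (pipe(fds) < 0) return 1;
    /* sender: 4-byte big-endian length prefix, then the payload */
    hdr = htonl(5);
    write_all(fds[1], &hdr, sizeof(hdr));
    write_all(fds[1], "hello", 5);
    /* receiver: read the prefix first to learn where the message ends */
    read_all(fds[0], &hdr, sizeof(hdr));
    len = ntohl(hdr);
    if (len >= sizeof(msg)) return 1;
    read_all(fds[0], msg, len);
    msg[len] = '\0';
    puts(msg);
    return 0;
}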

microsoft philosophy:
"casuals can use computers too"

File descriptors != files

Also, SCTP doesn't have a chance because of retarded firewalls defaulting to whitelisting everywhere.

True.
Shit thread btw.
Sage.

"everything is a file" does really mean fd, but the platform bias towards all data being a linear (preferably human readable) byte stream gets lumped in there too.

also, how has no one mentioned C strings yet? regardless of whether you want to consider C separable from UNIX, the universality of C strings in POSIX APIs is a blight.

>the universality of C strings in POSIX APIs is a blight.
Post examples.

C strings are portable. I prefer programming with Pascal strings but only C strings guarantee representation that isn't platform specific.

of course everything is a file. what do you want, everything stored in RAM?

everything is a file on windows too

everything is a URL is a performance nightmare for starters

it's a meaningless abstraction that will only serve in practice to create a software bottleneck

Unless NVIDIA and AMD start writing quality drivers for Linux, I'm just not going to set aside my 1080 Ti and library of games for it.

Most games as well simply don't run on it.

I have my laptop running Debian and that works for me, but as a desktop forget it.

>Most games as well simply don't run on it.
Literally nothing to do with the driver and everything to do with DirectX / OpenGL

t. CUDA developer on Loonix

>it's hardcoded
>people using this word when they don't know what it means

it's not; you can compile your kernel with different folder options, and that aside, you can just use symlinks that point to those folders.
even if that weren't the case, the folders aren't constants; their names are writable.

people need to fuck off with this term 'hardcoded'; it's wrong in 99% of the scenarios you idiots use it in

>No examples posted.

>x is not hardcoded
>goes on writing about stuff that is decided at compile time
what did he mean by this

Then why do DirectX and OpenGL seem to be perfectly fine with the Windows drivers, then?

The Windows drivers have no problem implementing all the calls/libs for DX and OpenGL, yet between the AMD open-source drivers and the proprietary ones they're worlds apart in performance and stability. I think circa 2014/15 my AMD 7850 simply didn't have drivers for the latest Ubuntu release. I would've said maybe DirectX is just the stronger platform, but the Windows drivers seem to have no problems with OpenGL stuff either.

libraries or applications.

>arbitrary decision at compile time != fixed
>what is mutable vs immutable
it's very simple user, if you can change it then it's not hard coded. it's just coded.

>muh data abstraction
you really nailed it user, why is there a boner for this shit recently? it's like a meta 'goto' for data

Hardcoded doesn't mean you can't change it; it's just the boolean opposite of data-driven.

Give me an example please of the boolean opposite.

Regardless, compiling a kernel or a Linux distro is data-driven. The config data is either compile-time parameters for custom folder names etc. or system calls to change folder names (e.g. sudo mv etc). Data supplied by the user or dev changes the behavior of the system.

how about sun_path in sockaddr_un? basically an irrecoverable API bug since the early 80s?
but if you can't realize that C strings are cancer in general, you're already beyond saving.
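Concretely, a minimal C sketch (the 108-byte figure is the typical Linux definition): the address structure bakes a fixed-size C string into the ABI, so a long socket path simply cannot be represented, and there's no length field to recover with.

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>

int main(void) {
    struct sockaddr_un addr;
    const char *path = "/run/user/1000/some/deeply/nested/dir/app.sock";

    /* sun_path is a fixed char array (typically 108 bytes); anything
       longer is simply unrepresentable in the API */
    if (strlen(path) >= sizeof(addr.sun_path)) {
        fprintf(stderr, "path too long for sun_path (max %zu)\n",
                sizeof(addr.sun_path) - 1);
        return 1;
    }
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strcpy(addr.sun_path, path);  /* safe only because of the check above */
    printf("would bind to %s\n", addr.sun_path);
    return 0;
}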

>including links

>sodium
???

Your mom is a file so big that writing it gave an error because it took up all the disk space.

>worse is better