This made me think, why are we still coding in plain text?

github.com/torvalds/linux/pull/437

1. your pic
welcome to the naivete of the average coder
2. your title
because you touch yourself at night

This is our future bro.
Next gen of devs working and maintaining the Linux kernel will be webdevs, you have to embrace the new style of coding.

>why are we still coding in plain text?
I thought someone made a language of nothing but emojis

>she doesn't code exclusively in Labview
Pathetic.

Because most code is just referencing different identifiers declared in many different places in the program.
In order to do this visually, you would need a lot of visual representations of wires connecting everywhere and this would become unreadable after 10 lines of code.

Flat text lets you do this in a readable way.

Hi, my name is Jason. I'm a political science major who has always had a "knack" for computers. I've been coding in html since I was a kid. I currently consult for the Node.js foundation as well as do freelance web development for major corporations you might recognize. My tools of the trade are a macbook air, bootstrap, and Atom.

>!= 0

i'm so fucking mad

dunno man, i think it's still possible to implement a binary alternative to plain text, just think for a moment about how inconvenient all those spaces and tabs and line breaks are.
If you aren't a vim or emacs guy, making those word jumps and editing code inside parentheses is fucking slow.

First, we want to
establish the idea that a computer language is not just a way of getting a computer to perform operations
but rather that it is a novel formal medium for expressing ideas about methodology. Thus, programs must
be written for people to read, and only incidentally for machines to execute.

>why are we still coding in plain text?
do you *really* want to use unicode, which can have several symbols that look almost the same but have different effects in the code? would you *really* want to debug such a program at 3 am on a friday?

That pajeet got BTFO

>implying that plain text is not a binary format

ok lets go!

unicode restriction because it causes pain in the belly
So then what type of encoding should we use? Or should we make our own?

Quintessentially webfaggot

>code in LabView
>never get fired because no one else will have any clue on how to make sense out of the code
nice

Because ASCII is 1 byte while UTF is 2 or more?

>19 days ago
How have the Github thought police not killed him yet?

because Linux devs actually work and have a code for how to code, not for how to conduct yourself, unlike the retards on systemd

UTF-8 is 1 byte (ASCII compatible part) to 4 bytes

Why? What's wrong with it?

>This made me think, why are we still coding in plain text?
There's plenty of non-plain-text coding, and has been for years. Just look at Dreamweaver. Lego Mindstorms robots also offer a graphical 'drag and drop' interface as an alternative to typing in the C code manually.
If you're still coding in plain text there are two possible reasons why. 1. You find plain text to be more efficient for what you're doing. 2. Nobody has made a symbolic interface that is broad enough and concise enough to fit your needs.

There are really two ways to go about implementing a symbolic coding interface.
You can tie graphical abstractions to 'plain text coding'. This adds another layer of abstraction, but it's comparatively easy to use. It would be like being forced to code entirely in function calls made by somebody else.
The other way is to tie the graphical abstraction to a lower-level abstraction. This can be done in many ways and offers you more control than the other method. Maybe you don't want to use a character to represent a pointer, and instead want bolded text to represent a pointer.

This makes me think, how much effort would it be to alter a C compiler such that it treats bold, italic, and standard text differently; allowing you to use variable names that read the same but are separate?

UTF-8 code points are variable length. They are not all 4 bytes, they're just up to 4 bytes

A purely ascii text in UTF-8 is exactly the same size.
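
For example, a quick check in Node.js (`Buffer.byteLength` is a standard Node API) shows the variable lengths:

```javascript
// Byte lengths of UTF-8 encoded strings in Node.js.
// ASCII characters encode to 1 byte each, so pure-ASCII
// text in UTF-8 is exactly the same size as in ASCII.
const ascii = "hello";
console.log(Buffer.byteLength(ascii, "utf8")); // 5 bytes, same as ASCII

// Non-ASCII code points take 2 to 4 bytes each.
console.log(Buffer.byteLength("é", "utf8"));  // 2 bytes (U+00E9)
console.log(Buffer.byteLength("€", "utf8"));  // 3 bytes (U+20AC)
console.log(Buffer.byteLength("💩", "utf8")); // 4 bytes (U+1F4A9)
```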

>not inventing your own comfy code style
plebs

It's completely superfluous, the same as just writing if (condition).

this is disgusting

...

>uglifyjs
wow, they named their obfuscation algorithm.

vile

>It's completely superfluous

It's not superfluous. Yes, it may mean the same to the compiler, but it's sloppy work that invites bugs.

Making 0 falsey in any language was a mistake.

>This makes me think, how much effort would it be to alter a C compiler such that it treats bold, italic, and standard text differently; allowing you to use variable names that read the same but are separate?

dunno, try that new compiler, maybe you'll find out how

what is diz

>I just reinvented Microsoft BASIC
Learn some history. And why the hell would any human want to code in it directly?

You must be a Lua programmer.

Is this every web developer ever?

>const recurse =
setImmediate(worker)
oh shit nigger what the fuck are you doing

no bully me

>You must be a Lua programmer.
Never used it

Oh shit, a bug. This should have been a=()=>function.

ES6 is my way. Const whenever possible. Everything should be a separate block. Indent visually. Do not overextend lines, try to make code as wide as your eye can read by simply moving down. Think about how your eyes will move when reading. There are only `` strings, '' and "" only when nesting. No semicolons allowed. Functional approach whenever possible. Do not think about code efficiency. Async through promises. Never miss an opportunity to add a )=> smiley or something so you will have something to giggle at when you go insane from working too much.

Thank you for confirming yet again that "mks me dink xDDDD" retards should be gassed.

you should give elm a try, you might like it ;)

this looks like F# if it was developed by Oracle

>binary alternative to plain text
>how inconvenient those spaces and tabs are
Those are actually there for the user's convenience and are entirely optional in most languages.
>vim or emacs
If you write programs, you should use a programming-oriented text editor, or even an IDE, that indents your code for you. Vim and emacs aren't the only ones in existence, there are others without the steep learning curve, like geany or notepad++, that do the indenting for you, but still use the ordinary shortcuts like ctrl+c, ctrl+s, etc. Still, I would recommend familiarizing yourself with either vim or emacs, since not having to use the mouse is /comfy/, so comfy that I downloaded the vim plugin for android studio to get that functionality.

you're an idiot, you probably don't understand zero-based indices either

Very cool, I knew I wasn't the only one brain damaged. But I'll stay with ES6, it works anywhere immediately. My crap will even work in a browser.

I'm planning to create a visualiser for this style of code one day. It's not hard to chop into parseable blocks. Imagine coding in VR, where blocks of code flow around you, all these =()=> in the text immediately turn into futuristic neon arrows or something, and all these function calls and map-reduces slowly turn into abstract glyphs the longer you stare at them, so they won't clutter your visual field (rendered by feature-hash so similar functions look similar)...

classic Sup Forums comment, using something completely mundane and trivial as evidence of their superiority. I bet you don't even know there's 8 bits in a byte, faggot. oh shit, you got burned, bitch!

A lot of Javascript files do this. It saves a few kB and probably a fraction in speed, but with modern technology it's pointless unless the savings are substantial.

Readability is slightly more important than speed. Obviously if it's significantly faster it should be done, but in this case it isn't.

But elm also works in a browser, that's what it was designed for. Or what did you mean by "(..) ES6 (..) works anywhere (..) even (..) in a browser."

UTF-16 is 2 bytes or more. UTF-8 is 1 byte or more.

There are good arguments for both zero- and one-based indexing.

There are no good arguments for things like zero or the empty string being falsey.

JavaScript looks up object property names at access time. It's always quicker to compare a one-byte string than a ten-byte string.

"A foolish consistency is the hobgoblin of brainlets."
Yes, there are. Programmer convenience is a perfectly acceptable argument.

>There are no good arguments for things like zero or the empty string being falsey.
how about a null pointer/reference?

if you make a vs code plugin for this style I'll use it

>Yes, there are. Programmer convenience is a perfectly acceptable argument.
It may be more convenient to write, but less convenient to read or debug. It's fine for playing code golf, but not so much for critical or long-lived software.

Maybe, but I prefer either dedicated null-checking and null-coalescing features, or as languages like Common Lisp do, make it the only falsey value, having no dedicated false. Or just not let any program with any potential for null pointers compile at all.
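
For what it's worth, JavaScript did eventually get exactly those dedicated features, nullish coalescing (??) and optional chaining (?.), in ES2020:

```javascript
// ?? only falls back on null/undefined, unlike || which
// also falls back on 0, "" and false.
const volume = 0
console.log(volume || 50)  // 50 — || treats the legitimate 0 as "missing"
console.log(volume ?? 50)  // 0  — ?? keeps it

// ?. short-circuits to undefined instead of throwing a
// TypeError on a null/undefined intermediate.
const user = { profile: null }
console.log(user.profile?.name) // undefined, no TypeError
```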

VS Code + vim plugin is my main editor. The only problem is that it sometimes stays on the same indent level when it's not needed. What should the plugin do? This style is very vim-oriented already: j-ump around by relative line number (:set rnu), make the most fucking common changes by cc-hanging whole lots of lines, or change blocks with f(ci(, without having to pinpoint W-ords in argument lists and other stuff.

>It may be more convenient to write, but less convenient to read or debug. It's fine for playing code golf, but not so much for critical or long-lived software.
It almost sounds as if you hate typed nils.

Plugin can automatically reformat code as you write it. You basically will never have to press space/tab/enter.

This made me so angry I wish I could revoke that faggot's text editing license

Everyone knows that, the joke is that it's C, so minimizing the code is pointless, when it all gets compiled anyway, unlike javascript.

Javascript is JIT-compiled unless you're in a really dumb environment or using a debugger statement. There is absolutely no sense in minifying code manually, in any language.

>plain text

No such thing. All text is encoded.

>I'm smarter than a kernel programmer

You're just a faggot who's got nothing substantial to add to any discussion. That's why you pick random little things that don't matter to bikeshed on.

I would assume that minifying could still improve JIT compilation times by a small fraction, and it definitely speeds up downloads when you use one of those fuckhuge libraries, even if we keep in mind that it is usually compressed before being transmitted to the client.

People do it to save bandwidth, though it doesn't matter much with gzip'ed HTTP.

Funny anecdote: one JS engine wasn't optimizing functions below a specific char count because of a simple heuristic... and some people noticed because their minified code was actually much slower than the normal version.

>being this fucktarded

Javascript code gets transmitted over the network over and over again, and forms a significant share of a page's size. Minifying Javascript is a huge help. And you don't minify just Javascript, you minify the HTML and CSS too.

It's got nothing to do with tokenizer and parser performance and everything to do with saving money in bandwidth costs and increasing page download speed and therefore user-perceived responsiveness. You must be stupid and uninformed if you think there's no reason to do this.

anonymous functions were a mistake

It's just a different code style, it's hardly worth getting mad over. Besides, when it comes to things like kernel development it's generally a good thing to be as explicit as possible.

Linux devs != Github staff

there are measures inplace to replace torvalds and stallman with someone competent and pure right?

>he thinks a byte is always 8 bits

What the fuck else would we write programs in?

I think it was Knuth who said computer programs are written primarily to be read by humans and only incidentally by computers.

It has been for a long time. I'm salty that the network people still say "octet" just because in the 80s there were still PDP-xx machines with 9-bit bytes around. Dump that ugly fucking word, it's a fucking byte.

This PR is pretty useless, why would anyone actually do this for a language that uses a compiler?
Interpreted languages will only benefit from this, since every line is read every time it has to be executed. A compiler doesn't care, this only hurts and hinders developing the platform.
You now need a separate process for reading and writing to the source code, like HAHAH, what the fuck mate.

This is what JavaScript does to the world

that link is a joke
nobody that stupid would actually do that

emoji

Write a patch cunt.

Labview is trash fami.

>Const whenever possible
why?

>This makes me think, how much effort would it be to alter a C compiler such that it treats bold, italic, and standard text differently; allowing you to use variable names that read the same but are separate?
Not vastly hard, since it's just rewriting the parser front end, but it's a bunch of work that current compiler writers just can't be fucked to do. If you want it, get coding.

If you do this, make bold be for arrays. You see that notation quite a bit in physics.

>but it's a bunch of work that current compiler writers just can't be fucked to do. If you want it, get coding.
I was part of a group project to make a compiler for an event-based language, E--. I don't think I could get myself to rewrite the parser front end even if somebody paid me. My first time making a Linux kernel module was less frustrating (only by a bit, though).

just a clueless end user who only cares about the end product
stripping comments and whitespace is a sensible thing to do on a final distribution of an interpreted script, much like stripping debug symbols from a compiled binary is
it just has no place being done to the development sources