/dpt/ - Daily Programming Thread

old thread: Unix beard edition!
What are you working on, Sup Forums?

Other urls found in this thread:

github.com/WhatsUpForum/prim
github.com/ltcmelo/psychec

fuck opengl

Please don't post my wife

Because traditional exception handling is bloated, slow, and mucks up your code-base.

What's your favorite editor theme?

I'm talking about algebraic effects, which are another step up.

>bloated, slow
Bit of extra executable size, zero runtime cost if the error doesn't occur. Sometimes you know that an error might occur fairly often, especially if it is based on user input, so you use ADTs instead.

>mucks up your code-base.
Not in a decent language that can abstract it (as well as ADTs) into an algebraic effect, which you would know I was talking about if you had read the conversation chain.

m-muh zero cost exceptions!

We can never have "better C" because anybody who tries does anything but C.
They do
>nonstandard ABI
>(((safety)))
>answers that make even Markus think that PHP isn't that bad compared to them

We can never have better C because Cniles will reject anything that isn't C.

I'm designing a language that adds generics, traits, and closures with a more consistent ABI, but not safety.

People are using gcc extensions.

Do Cniles reject HolyC tho? Serious question.

dumb sepples/rust/java/nim poster

They haven't even migrated to C99.

>generics, traits, and closures
You've already strayed from God's light.

Retard.

What is the easiest way to get a p2p connection going between two desktops behind NATs in C?

Add variant types and call it Rust.

>suggest D's better C
>But I might not be able to use this random library!

>talking to a C zealot about (missing feature)
>I can just use a macro!

Cniles are like women, they don't know what they want.

Bjarne says that exceptions are ok

Bjarne is a fuckup who doesn't know shit, and most of the good ideas in sepples were stumbled on entirely by mistake

>>talking to a C zealot about (missing feature)
>>I can just use a macro!
Are you this tard by any chance?

Bjarne stopped caring a decade ago.

i am not.

Variant types are purely a safe restriction of tagged unions. Pattern matching can be recovered with traits and anonymous trait impls.
struct Result(T, E: Type) {
    tag: enum { SUCCESS, ERROR },
    union {
        success: T,
        error: E
    }
}

impl[T, E: Type] Result(T, E) {
    fn success(x: T) -> Self {
        struct {
            tag = Self::SUCCESS,
            success = x
        }
    }

    fn error(e: E) -> Self {
        struct {
            tag = Self::ERROR,
            error = e
        }
    }

    fn match[U: Type](self, p: trait {
        fn success(self, x: T) -> U,
        fn error(self, e: E) -> U
    }) -> U {
        case self.tag {
            SUCCESS -> p.success(self.success),
            ERROR -> p.error(self.error)
        }
    }

    fn map[U: Type](self, f: Fn(T) -> U) -> Result(U, E) {
        self.match(impl {
            success(x) -> Result(U, E)::success(f(x)),
            error(e) -> Result(U, E)::error(e)
        })
    }
}

It's not inconceivable that this machinery could be automated with a macro.
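
Here's roughly what that desugars to, hand-written in plain C++ for a fixed T = int, E = const char*. The names ok/err and the callable-pair match are my own stand-ins, not part of the design above; it's just to show there's nothing exotic underneath:

#include <cstdio>

struct Result {
    enum Tag { SUCCESS, ERROR } tag;
    union {
        int success;          // T
        const char* error;    // E
    };

    static Result ok(int x)          { Result r; r.tag = SUCCESS; r.success = x; return r; }
    static Result err(const char* e) { Result r; r.tag = ERROR;   r.error = e;   return r; }

    // match as a pair of callables instead of an anonymous trait impl
    template <class OnOk, class OnErr>
    auto match(OnOk on_ok, OnErr on_err) const {
        if (tag == SUCCESS) return on_ok(success);
        return on_err(error);
    }
};

int main() {
    Result r = Result::ok(21);
    int doubled = r.match([](int x) { return x * 2; },
                          [](const char*) { return -1; });
    std::printf("%d\n", doubled); // 42
}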

Cool.

Tamamo sure is cute tonight.

I made a Java program that prints all the prime numbers from 0 to 100 :)

public class primeme {
    @SuppressWarnings("unused")
    public static void main(String args[]) {
        // "tal" = number (Swedish); loop body reconstructed, the original was cut off
        for (int tal = 0; tal <= 100; tal++) {
            boolean prim = tal >= 2;
            for (int i = 2; i * i <= tal; i++) {
                if (tal % i == 0) { prim = false; break; }
            }
            if (prim) System.out.println(tal);
        }
    }
}

Sorry for the Swedish btw.
Here's a GitHub link for anybody who wants it lmfao.
github.com/WhatsUpForum/prim

can anyone explain to me why modules > headers in c++

just google lol

They're not.
It's literally a ME TOO feature that's 20 years too late. Even Perl had this shit, for fuck's sake.

>two files or one
wow, what a conundrum.

Headers are pure text inclusion, so you need machinery like header guards and maybe a separate namespace feature to prevent collisions. You also need to separate the public interface and the private implementation into multiple files, whereas with modules you can use public/private modifiers. The module hierarchy may also determine the file hierarchy, in which case you don't need to list all the source files of your project to compile it.

tl;dr a module is all of these in one:
- header
- translation unit
- namespace
- maybe an entry in a makefile
Some languages even have first class modules that are used for generic programming, like MLs.
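
A minimal C++20 sketch of the difference, if it helps. The file names are made up and the exact build incantation varies by compiler, so treat it as illustrative only:

// math.cppm -- one module interface unit instead of math.h + math.cpp
export module math;                 // no header guard, no textual inclusion

export int add(int a, int b) { return a + b; }

int helper() { return 0; }          // not exported: invisible to importers,
                                    // so it can't collide with anything

// main.cpp
import math;                        // pulls in the exported declarations only

int main() { return add(40, 2); }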

makes sense, thanks

But what about readability? Like, I've always found it neat to give the .h file a quick look to get a "summary" of what I'm gonna use, without implementation stuff. And how does one define public APIs with modules?

>with modules you can use public/private modifiers

>adding unsafe unions

>What are you working on, Sup Forums?
Still porting parsers to WebAssembly. No regex allowed.

I'm not concerned with language enforced safety.

Do you guys have some good tips for teaching others programming?
I am in no way a teacher, but I was told to teach a few people C++, CMake, ROS, Qt, Eigen and OpenCV.
They claim to have had a course in C++ before, but I need to get them up to speed so they can use programming in their bachelor thesis.
I only have 3 hours a week with them, so want to stay practical.
I have an overarching goal in mind and every time, we move closer to that, but I worry they just write what I tell them to write rather than learning any of the stuff.
I initially started out using Qt Creator, but it crashed a lot and I just switched them to an editor + terminal.
And regarding ROS, I also have them write the cmake files manually instead of using the catkin tools as I believe it is the fastest way to force them to think about the project they make.
I also make them comment stuff and draw sketches of what we are making, but I feel it isn't enough.
Any advice?

>zero runtime cost if the error doesn't occur
You have to do some sort of runtime check at some point to know whether you want to throw or not.

Stop
spacing

your

code

like this.

It's

retarded
.

It's a different set. Modules for instance don't allow you to build against binary distributions of a library.

True, but how do you suggest getting rid of that check? I should say "zero overhead" because that check exists regardless of whether you use an ADT, throw an exception, or simply abort. If you ignore the check entirely, you could run into undefined behaviour.
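
Concretely (a throwaway sketch, not anyone's production code): whether you throw or return an ADT, the branch on the failure condition is the same, only the failing path differs.

#include <cstdio>
#include <optional>
#include <stdexcept>

int parse_or_throw(const char* s) {
    int v;
    if (std::sscanf(s, "%d", &v) != 1)            // the check exists either way
        throw std::invalid_argument("not a number");
    return v;
}

std::optional<int> parse_or_nothing(const char* s) {
    int v;
    if (std::sscanf(s, "%d", &v) != 1)            // same check, ADT-style result
        return std::nullopt;
    return v;
}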

Why? It makes it easier to read!

>>>/reddit/

>I should say "zero overhead"
Now you got it my man.

>Modules for instance don't allow you to build against binary distributions of a library.
What gives you this idea?

so you agree that header files are the way to go?

Reality. You'll have to provide declarations either way for the type checking pass of your compiler.

What would be a better way to space my code then?

>C/++ with only headers
no
And also no because modules are much more powerful, and easier to work with.

No, the compiler can resolve mutual dependencies by itself.

no.
2 files is better than 1 file.
Nobody wants to read through the source code to find the documentation of a function.

>the compiler can guess types from an ELF file
You either don't know what "binary" means or are being retarded.

>this is your brain on C/++
Why do you need to read the definition and implementation desperately?
Why would having two files for the same module be better?

public class primeme {
@SuppressWarnings("unused")
public static void main(String args[]) {
for (int tal = 0; tal <= 100; tal++) {

separately*

I guess modern IDEs have tools to gather info, like the object browser in VS
I mean, Java/C# guys work somehow

that's kinda what I thought! thanks

>binaries have to be ELF
>ELF can't store that kind of information
Your head revolves around C, stop it.

Thanks for helping man, I'm new to programming, I appreciate it

This, especially working with an existing code base, having a separate file with the function headers and short descriptions of what they do is incredibly helpful.

>binaries have to be ELF
I didn't say that.
>ELF can't store that kind of information
It shouldn't, otherwise it'd be called a source file.

Great. Another ABI to be broken.

>I guess modern IDEs have tools to gather info, like the object browser in VS
With that logic, there is no downside to having a header file, as your IDE takes care of synchronizing changes, and the upside is that it is more readable.
I rarely generate the Doxygen documents as it is just as readable in the editor.
With Java, it is a must.
I read the documentation to understand the intent and design of the system.
I read the code to understand the implementation.
Those are two separate things, why wouldn't I separate them?
It is also an optimization.

You can generate stripped down "headers" or, you know, documentation from modules easily. If you do it the other way around, you have to maintain both.

>contains machine code to link to but also types for type checking
>this makes it a source file

If the types change in an incompatible way, it will cease to type check.

>this makes it a source file
It does. Refer to .

>you have to maintain both
Sure but so what?
Just like you can get a tool to generate the documentation, you can get tools to apply changes to both if you make any.
But you shouldn't make changes to the API all the time.

>It is also an optimization.
wwwwwwwwwwwwww
Header files are an eternal headache for C/++ optimizations.

>types for type checking
High level types are transparent to the CPU, they have no business being in object files.

Serious question: what benefits does Haskell provide over NodeJS?

Typeclasses? But I can pass functions via options object, passing them implicitly via type signature does not improve anything.
ADTs? But disjoint unions are good enough for most situations.
Types as proof of correctness? Flow does it, without limiting me to BDSM-style correctness.
Performance? V8 is pretty fast and more predictable.
Binaries? Nexe.

So, what could be the practical reason to pick glorious Haskell instead of ubiquitous abomination of JS?

>modules are bad because they can't be distributed as binaries
>actually they can
>NO BINARIES AREN'T SUPPOSED TO HAVE TYPES BECAUSE I SAY SO

literally eliminates all run time errors.

How on earth do headers hurt C optimisation?
99.9% of the time, they just contain function declarations, a few type definitions, and maybe a few macros.

>the CPU cares about your AbstractFactoryBeansSumType

Even then, you can always have the public interface (as a kind of source file) and private implementation (as an object file) generated by the compiler in order to link to a compiled library. You don't have to have it all in one file.

You know that is not really true.
And if I cover all my code with Flow annotations I will achieve the same level of correctness (which is not needed anyway).

No, it doesn't, which is why types would be stripped when compiling an executable or dynamic library. Or you could do as I say here:

>you can always have the public interface (as a kind of source file) and private implementation (as an object file)
You mean a fucking header file?

Recompilation?
Have you never worked in a large C/++ code-base?

Not a header file that you maintain, but part of the compiler's output. There's a massive difference.

What the hell does that have to do with optimisation?

NoRedInk, the main sponsor of Elm, has had 200k LOC of Elm in production for years and annually reports zero RTEs from their Elm codebase.

When were you when C got type inference?
I was sat at home compiling gentoo when Ritchie ring
'C has type inference'
'no'
github.com/ltcmelo/psychec

One of those requires significantly less IDE support and leaves less room for error. Not hard to figure out which one I'm talking about.

I find this question very dumb, but okay.

One answer is "sockets bound on local network address".

Nothing. It's /dpt/ being (again) a waste of time.

Elm is a different beast. It is limited, simple and eager. There are some good reasons to pick it over both Haskell and JS/PS.
But I'm currently thinking about backend part, so Elm is irrelevant.

s/compiling/linking

It doesn't matter, Haskell is the same shit, and a fairly popular back-end.

>This is a research project. While we strive to keep the code clean and to make the tool easily accessible, this is an academic effort.
This translates to:
>This project isn't going to be documented or maintained at all. Don't use it

You motherfuckers, now I am even more confused about modules

So:
- header files are bad because reasons (c legacy, preprocessor sucks, compilers work differently, including makes you include all shit etc.)
- modules are all around better because reasons
- yet, header files are still useful for defining handwritten interfaces for your lib

am I close or what is happening here

>python Reconstruct.py path/to/file.c
The absolute state of C

literally just learn a language with modules and it will explain itself.
Also tell me how many new langs aren't using modules and instead sticking with header/src.

C doesn't require modules because all compilers shit out intercompatible shared libraries on account of the lack of name mangling and the rock solid unchanging ABI.

Correct, but C would still be better with modules.

You'll get used to reading code with time.
Today you think you need to space everything, tomorrow you might add retarded delimiters in your comments (e.g. /**********************/), and after a while you'll realize that this doesn't make your code more readable. Quite the opposite.

I have a 1D array of the RGBA values of every pixel in a 128x128 image, for a total of 16,384 pixels (65,536 values)

I SHOULD be able to just pull the value I want using some math and an offset, but it's fucking up like bonkers, so I was wondering if there's a simple way to take it and split it up into a 2D array of 128x(128*4) (row by column*RGBA)?
Would that just be a for loop like
for (int i = 0; i < 128; i++) {
    for (int j = 0; j < 128*4; j += 4) {
        pixelArr[i][j] = pixelVal[(i+1)*j];
        pixelArr[i][j+1] = pixelVal[(i+1)*j+1];
        pixelArr[i][j+2] = pixelVal[(i+1)*j+2];
        pixelArr[i][j+3] = pixelVal[(i+1)*j+3];
    }
}
Or is there some simpler way to do this?
It's in JavaScript (yes, I know, but I don't have a choice in the matter)

>I SHOULD be able to just pull the value I want using some math and an offset, but it's fucking up like bonkers
>(i+1)*j
No wonder.
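
The flat array is row-major, so row i starts at i * 128 * 4; the index you want is i * rowStride + j, not (i+1)*j. Same arithmetic applies in JS; sketched here in C++ just so it's something that actually compiles, with the array names borrowed from the post above:

#include <vector>

int main() {
    constexpr int WIDTH = 128, HEIGHT = 128, CHANNELS = 4;
    constexpr int ROW_STRIDE = WIDTH * CHANNELS;                 // 512 values per row

    std::vector<unsigned char> pixelVal(HEIGHT * ROW_STRIDE);    // flat RGBA data
    std::vector<std::vector<unsigned char>> pixelArr(
        HEIGHT, std::vector<unsigned char>(ROW_STRIDE));         // 128 x 512

    for (int i = 0; i < HEIGHT; i++)
        for (int j = 0; j < ROW_STRIDE; j++)                     // no need to step by 4
            pixelArr[i][j] = pixelVal[i * ROW_STRIDE + j];
}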