/dpt/ - Daily Programming Thread

What are you working on, Sup Forums?

Old thread:

anime

Haskell!

First for wishing I had just become a plumber or something.

Still the same thing: a stack-based VM for embeddable scripting languages. It can be programmed directly in a (somewhat) Forth-like language, but it provides a C JIT compilation API to be used by user ESLs. The core idea is to enable easy interop between multiple domain-specific scripting languages within the same host application through the shared argument stack. It provides native support for an object system that is lightweight but highly customizable and extensible (through the use of meta-objects), so objects can be directly shared between different ESLs with potentially different object systems.

Here's the definition of some basic constructs within the stack language itself:
defun # >> compile-call end
>>
# get setmac

defun peek >> dup >
defmacro get setmac

defmacro ' >> compile-literal end

defmacro !call call end

defmacro ! >> get # !call end

defmacro !>> >> end

defmacro ## >> compile-literal ' compile compile-call end

defmacro if
## not
# ?jump
end

defmacro endif
# resolve:
end

defmacro else
# jump
swap
# resolve:
end

defmacro while
# label:
end

defmacro do
## not # ?jump
end

defmacro repeat
swap
# jump # to-label
# resolve:
end

defmacro /*
while
>> ' */ = not
do repeat
end

/* now we can add C-style comments! */

defmacro {
## {}
## dup
>> compile-literal >> drop
## swap
end

defmacro ;
## swap
## setf
## dup
>> compile-literal >> drop
## swap
end

defmacro }
## swap
## setf
end

...

Have fun.
#define A(x, s) x s x
#define B(x, s) A(x, s) s A(x, s)
#define C(x, s) B(x, s) s B(x, s)
#define D(x, s) C(x, s) s C(x, s)
#define E(x, s) D(x, s) s D(x, s)

int product(int x, int y) {
int t = 1, p = (-!!(x & 1) & y);
E(p += (t

>Wrote a 4 line python script to save myself 15 minutes off my computer logic class assignment

>4 line python script

make it a one-liner

Meh, I'll never use it again.

#include <stdio.h>

int add(int a, int b);

int main()
{
    int a, b;

    scanf("%d%d", &a, &b);

    printf("%d\n", add(a, b));

    return 0;
}

int
add(int a, int b)
{
    int ans;

    if (b == 1)
        ans = a;
    else
        ans = a + add(a, b - 1);

    return ans;
}

does this work with negative numbers?

Mine () does.

Yes

Jesus christ just stop.

>work hard all day dealing with pipes containing people's poop
>come home to your displeased wife and bratty children because you're working a 9 to 5 and can never truly father your children
>wife leaves you for the handsome Chad software developer that has a flexible work schedule and is generally overpaid for the actual work he does
Or
>go to work at a nice clean desk every day and be given trivial tasks your programming automation system handles
>spend your free time talking bullshit with coworkers, doing hobby projects and consultant work
>as you grow older you can give yourself more free time and due to seniority you're permitted to work from home
>can take care of your family by effectively dividing your time between work and home.
>wife is happy and you're happy

But the grass is always greener. I can respect plumbers more. We software developers are just leeches in a dysfunctional corporate system that doesn't understand its core resources.
I doubt any of us who are employed actually think what we're doing is gonna be around in any respectable form in 10 years. Plumbers, field engineers and pixie wranglers actually support society. I respect them for that.

Do python programmers optimize line numbers because it impacts performance or are they just ignorant of what ideal solutions look like?

>t. no-mul brainlet

Dumb frogposter

what if b is negative or zero?

What if b is INT_MAX?

Vibeo gam

I'd just write it in inline asm. Sure, I'm likely giving up the compiler optimizations, but I'm not confident that compilers will manage something even this simple.

Shows the corrects results.
user, this is a textbook recursion exercise

>asked to implement multiplication
>implements addition instead
>stack overflow for negative b
>"Shows the corrects results"
I see you couldn't come up with anything original for today.

...

well memed

Challenged mentally or mentally challenged?

>execution time 6s
Not that guy but this is interesting. So we underflow the int (undefined behavior) and somehow INT_MIN+INT_MAX-b gives us 6x overflow on the ans and computes the right answer?
Is that what's happening?
Could you give us the disasm? I'm surprised it didn't choose a 64-bit int here. I mean, there's no way you did 2^64 iterations of this, right?

>if(b == 1) as a stopping condition
>implying anyone needs to run it to see that you're lying
Why are you so desperate for my attention, kid? Go find your biological dad.

>6x overflow
Underflow rather. And I mean 3x. I think.

God it's late. No I mean overflow. Ans+recursivecall.
And it should be 3x. As we need to get back to close to 0 for the last couple iterations.

how do I make my own programmable text editor like emacs?

>gui or cli

both. UI is just rendering and input that maps to text editing actions.
i'm thinking about the internal data structures. how the fuck is text represented? an array? a rope?

>implying 4 GB stack
That retard is just lying.

Whether this was actually doing function calls depends on the compilation flags he set. The compiler probably decided to optimize it; this doesn't require function calls. It can easily do TCO.

That's not the interesting bit though.

>It can easily do TCO.
None of them do. He could've done something like #define int short, but then it wouldn't take 6 seconds to run.

>an actual interesting post? in MY dpt?

That's really cool. How did you implement the JIT compiler? Do you plan to use it to support native calling conventions?

I'm also implementing my own language. Focusing on zero dependencies and operating system integration.

noobs

>it wouldn't take 6s
Not with a 64-bit int (which is what I would have expected it to use). But with a 32-bit int I did pic related as a back-of-the-envelope, and it's in the right order of magnitude.
>none of the optimizers do TCO
No, you're just being silly now.
>9.6s
That's unexpected. The b you chose is insignificant that far from -6 (453464 is less than 1-0.9999 of 2^32). You probably have a bunch of junk on your computer eating processor time. I'd get that checked if I were you.

>bunch of junk on my computer
Yeah, it's windows 10

I mean, that's not sufficient to explain this, user. You don't expect (at least) 50% variability when running software on Windows just because it's Windows.
Try Malwarebytes. It distinguishes malware, PUPs and legitimate software.

-- | The 'permutations' function returns the list of all permutations of the argument.
--
-- > permutations "abc" == ["abc","bac","cba","bca","cab","acb"]
permutations :: [a] -> [[a]]
permutations xs0 = xs0 : perms xs0 []
  where
    perms []     _  = []
    perms (t:ts) is = foldr interleave (perms ts (t:is)) (permutations is)
      where interleave    xs     r = let (_,zs) = interleave' id xs r in zs
            interleave' _ []     r = (ts, r)
            interleave' f (y:ys) r = let (us,zs) = interleave' (f . (y:)) ys r
                                     in  (y:us, f (t:y:us) : zs)


There has to be a better way to write this.

Fuck Haskell programmers!

I enjoy [Any Language that isn't C++]

>C compilers do TCO
Show me a mainstream one that does.

How does a computer translate between binary and decimal representation, and vice versa? Take for example Python. When I write in the interpreter

print(37 + 45)


how does Python know that the symbol 37 translates to the binary pattern 00000000000000000000000000100101? Likewise, how does it know that the result of the addition, 00000000000000000000000001010010, should be displayed as 82? Does it use some sort of lookup table, or does it do the binary to decimal computation on the fly? How does that work?

>thinking anyone gives a fuck about why your trash doesn't produce a stack overflow when you do a billion iterations to compute 6*(-4)

Why are you still replying then?
Doesn't matter, as if brainlet like yourself could ever understand how programming works.
btw how's that webdev project going, user?

>How did you implement the JIT compiler?
Calling it a JIT compiler probably makes it sound fancier than it really is. It doesn't dynamically optimize and recompile code (at least for now). It just does compilation on the fly when needed using indirect threading, so it's actually fairly simple.

>Do you plan to use it to support native calling conventions?
Not at this point, but I may consider it in the future. For now, it's very easy to just bind C functions.

>I'm also implementing my own language. Focusing on zero dependencies and operating system integration.
Zero dependencies and remaining lightweight are two of my major focuses. What do you mean by OS integration?

>programming works by producing shit-tier buggy code that takes billions of iterations to multiply two numbers when it doesn't cause a stack overflow

>buggy
What bug?

user, are you upset over something?
Could it be because you are confused?

>What bug?
The bug where multiplying by a negative number takes your program 6 seconds, assuming it doesn't just overflow the stack. That bug, retard.
>hurrr it's supposed to do that
Okay, kid. That's some top-tier programming.

gcc.gnu.org/onlinedocs/gccint/Tail-Calls.html
I'm not gonna show you how the earth is actually round as well.

I don't know what to say, user... I'm afraid this image will confuse you even more.

When working with real-real numbers, and not making any mistakes even in 0.0000001's which language is the best option?

I am thinking C. Right now I am working with Python and scipy, numpy and all the other science and number libs; tried Cython too. But the calculations are so many that it's really slow (yeah, I guess a better algorithm is where the money is, but it's just a proof of concept for now), so C is good, right? Anything to keep in mind? Also don't tell me C++ (if it's not faster than C); the thing has 7 functions that can be implemented as 7 functions in C as well, no need for classes or anything fancy.

dumb frogposter

Unironically Cobol.

Java, java is realy fast in 2017, also there is functions in java too, use java sir.

You're the only one confused, though. Unlike you, I actually understand how two's complement works and rely on it for my implementation (which, by the way, doesn't take 6 seconds). For you it's a happy accident.

>Arithmetic in BigDecimal with Java
I wouldn't inflict that on my worst enemy.

>accident

>t. literal retard who can't add two numbers

>it doesn't optimize

Well, it still compiles code, right? I once wrote a simple JIT myself: it allocates executable pages and emits instructions into them, structured using the native calling convention. Then I call the pointer to that location as if it were a native function.

>what do you mean

Well, I'm designing things in such a way that there's no standard library. Besides numbers, symbols and text, the one true primitive provided is a system-call function that takes registers as arguments. It can dynamically perform any Linux x86_64 system call. I only really plan to support Linux.

I designed the garbage collection system so that the implementation boots with a fixed set of pre-allocated objects. The OS makes room for them when it loads the binary into memory. The language can then bootstrap itself.
I made it so I can give arbitrary memory blocks to the garbage collector from within the language itself: when it needs more memory, it either collects garbage or evaluates a customizable in-language memory-allocation function, which uses a separate array of pre-allocated memory to guarantee that the function will never need memory for itself. It essentially calls (system-call 'mmap size etc...) and returns the pointer to the language's garbage collection implementation (written in C). So part of the GC is implemented in my own language, because I wanted every single system call to go through that one primitive.

What were you saying, user?
#include <stdio.h>

int add(int a, int b);

int main()
{
    int a, b;

    scanf("%d%d", &a, &b);

    printf("%d\n", add(a, b));

    return 0;
}

int
add(int a, int b)
{
    int ans;

    if (b == 1)
        ans = a + 1;
    else
        ans = add(a + 1, b - 1);

    return ans;
}

That you can't add two numbers together. It's cute how desperate you are for my attention, though. Have a (You).

>that awful style
>defining functions after main
Yikes.

dumb frogposter

>program adds two numbers
>can't add two numbers

So mentally challenged after all?

GCC, ICC, and Clang all do TCO with -O2 optimization or better.

please shill me D and or Genie

>taking 20 seconds to do 1+(-1) due to a faulty algorithm
>hurr i can totally do it
The sad part is that you don't even understand why it works at all.

>How does a computer
>how does Python
Two very different things.

The computer natively works in binary, and integer values are stored directly as binary numbers. Anything else is simply something we ascribe meaning to through translations like the ASCII table (see pic; what's actually stored is the binary for the decimals 0-127). When you write your 37 you've written two characters. At some point, during compilation or interpretation, the compiler or interpreter will take these two characters, decide that they're actually a pair representing a decimal number, and translate them to binary (in the case of digits in ASCII you just subtract 48; as you can see from the table, '0' is 48, so 48-48=0). You then weight each digit by its place value. So for 37 you take ASCII '3'-48=3 and ASCII '7'-48=7. Then you multiply 3 by 10 and add that to the result, then you multiply 7 by 1 and add that too. Then you have the binary representation of 37.

>how python does it
So, I kinda covered what Python does above. If you're just running python myprogram.py, the interpreter will eventually arrive at the line with the addition, read it character by character, figure out what it means in this context, do the translation from ASCII to binary, and store the values. It then generates the code to do the addition, runs it with the two numbers in the right places in memory, and when the math is done sends the result to print.
That's at least one way to do it. There are very many ways to do it. It could just read the ASCII, figure out that this is a very simple expression, do the math using code that already exists, and then send the result to print().
I'll continue this post. Please think of questions or things you need clarified.

I'm growing tired of your confusion. At this point you're definitely the one who doesn't understand anything.

>y-y-you don't understand
Is that why my mul implementation works instantaneously for any input and uses the same two's complement trick that makes your buggy code produce a correct result?

>y-y-you don't understand
Funny that you're quoting your own previous posts.
>buggy code
Again. What bug?

Trying to port a small C++ program that uses modern GL with gl3w to python.

Are there any modern opengl bindings for python that are *actually* low level?

>What bug?
The one that makes your shitty code take 20 seconds to add 1 to -1. We could do this all day. I know single mom household kids need some male attention. Have another (You).

So I'm trying to remove nodes from a BST in java. I start out with a pointer to the root, and using a while loop I set it to its left or right child until the corresponding node to be removed is found.
Here's a simplified version:
Node temp = root;

while (true) {
    if (left)
        temp = temp.left;
    else if (right)
        temp = temp.right;
    else { // found the node
        // do all the bullshit: update height, change parents, change children, etc.
        // for simplicity's sake, "child" means either left or right, depending on which it should be
        // I've tried both of the following two lines, and I'm still getting the same (incorrect) results
        temp = new Node(temp.child.data, temp.child.left, temp.child.right);
        temp = temp.child;
        return;
    }
}

I've been debugging for over an hour, and my logic seems to be correct; but once the function returns, nothing is actually updated. I don't understand why this is, since isn't temp indirectly referencing the actual physical nodes?
The only way I've actually been able to get it to sorta work is if I add root = temp; at the end of the function, and change some returns to breaks. However, this only works if the node to be removed is the root of the tree. Any help would be appreciated.

If this were C, all of this would happen at compile time except the actual printing. Meaning when you 'build' the program, it will figure out "yeah, 37+45, I have everything I need to do this now" and does it immediately.
The reason I want to show you C is because I don't know how to get the assembly code for Python code.
The picture has the C code on the left and the associated assembly on the right. The bolded text is the part we need to care about. Look at the line movl $82, %esi. That means 'put 82 in the register ESI'. This is part of the actual instructions the CPU reads. The instructions are represented as ones and zeroes in a more efficient format than text, but the ones and zeroes encode the 82, the instruction, and where to put the 82.
The next line calls printf. I'm gonna skip details about that. What it will do when running is change the instruction pointer (the thing which says what assembly line we're executing) to the place where the function printf (similar to Python's print) is stored. printf expects to find the value to print in ESI, so that's why we put it there.
It will then translate the 82 we had in binary to the characters '8' and '2', which it can then send to the terminal. The terminal will later be drawn (in pixels, depending on font) to the screen buffer, which is then sent to the screen so you can see it.

When you write print(45+37) you're asking the computer to do a lot. Be thankful for having such a relatively simple environment compared to working at a low level.

I'd really like to know how much of this you understood, because I want to learn to teach people. This really is too much for such a small question, I know. It's nowhere near the appropriate answer. But you have something I don't: a lack of understanding. And I can't get that. So I'd really like some feedback on what you got and what you didn't. The first part of this should be sufficient for your question.

>add
You mean multiply.
This is addition and it took 65 seconds wtf?
Really makes you think.

The bug that makes your code take 65 seconds to add two numbers.

this addition function isn't portable, you have to build a stack instead

have you considered using recursion?

Intermediate C programmer here. Is continuing to master C worth it, or should I just learn C++? I already know other languages like Java and C#, but I always really liked C and would be okay with doing systems programming in the future. Is there any point in mastering pure C?

I refuse to believe one person can be this confused.
What could I expect, this is /dpt/ after all.

Iteration is better

People who act like understanding what goes on under the hood matters at all in fucking 2017 never fail to make me laugh

Happy you're enjoying yourself. People like me make your ignorance possible. What you're saying makes as much sense as saying that nobody needs to know about the electric grid.

When I find my code in tons of trouble,
Friends and colleagues come to me,
Speaking words of wisdom:
"Write in C."

As the deadline fast approaches,
And bugs are all that I can see,
Somewhere, someone whispers:
"Write in C."

Write in C, Write in C,
Write in C, oh, Write in C.
LOGO's dead and buried,
Write in C.

I used to write a lot of FORTRAN,
For science it worked flawlessly.
Try using it for graphics!
Write in C.

If you've just spent nearly 30 hours
Debugging some assembly,
Soon you will be glad to
Write in C.

Write in C, Write in C,
Write in C, yeah, Write in C.
Only wimps use BASIC.
Write in C.

Write in C, Write in C
Write in C, oh, Write in C.
Pascal won't quite cut it.
Write in C.

Write in C, Write in C,
Write in C, yeah, Write in C.
Don't even mention COBOL.
Write in C.

have fun getting cucked to death for the rest of your life while I write software that changes the world

fukken saved

I'm quite well paid user. Certainly way more than some loser who thinks his code can 'change the world'. Can you even consider yourself an engineer?

>What are you working on, Sup Forums?
A sadpanda client in Emacs, but I'm too drunk to think in Lisp right now.

user..
youtube.com/watch?v=1S1fISh-pag

Lol just shitposting user

Just changed it to use a recursive helper function and it's still doing the exact same thing.
Someone please help I literally have no idea what I'm doing wrong.

>Well it still compiles code right?
Yes. [pic related] is roughly (not 100% accurate) what happens. Ordinarily the VM executes indirectly threaded functions (the horizontally neighboring nodes are actually neighbors in an array), but it can selectively flatten function bodies to avoid the indirection overhead.

>Well I'm designing things in such way that there's no standard library.
It sounds like an interesting project. What use cases are you aiming for?

>t. literal retard who can't add two numbers

>added two numbers
>can't add two numbers
user, we've been over this.

>t. literal retard who can't add two numbers

>t. literal retard who can't add two numbers
Come back when you fix the retarded bug in your function. You know, the one that makes it do a billion iterations to finish the job even for small inputs.