/dpt/ - Daily Programming Thread

Old thread: What are you working on, Sup Forums?

Other urls found in this thread:

ghostbin.com/paste/sokqw
cs.brown.edu/~sk/Publications/Papers/Published/fffk-htdp-vs-sicp-journal/paper.pdf
en.wikipedia.org/wiki/Delay_slot
gcc.gnu.org/onlinedocs/gcc-4.4.4/gcc/Structure_002dPacking-Pragmas.html

>I am too stupid to use a better language, therefore I must complain about it!

t. The entire C & Python community

Exactly. Next time you see non-brainlet langs arguing, don't bring your garbage ``collected`` language

>lmao GC fags
>*uses OpenGL*
>manual memory
>*creates file*
>masterrace, who
>*requests port*
>can even compete?

Who the hell are you quoting? I didn't see anybody say that.

whoops i had numlock on

Since when is opengl garbage collected?

The only thing it does is automatically free/overwrite a buffer's data store when you call glBufferData again. That's not really garbage collection.
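
For reference, something like this is all that's meant (vbo, size, data, and newData are illustrative names here): calling glBufferData a second time on the same bound target replaces the buffer's old data store, so you never free it yourself.

GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, size, data, GL_DYNAMIC_DRAW);    /* creates the data store */
glBufferData(GL_ARRAY_BUFFER, size, newData, GL_DYNAMIC_DRAW); /* old store is released by the driver */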

>something automatically done for you
whoa, can't be having that brainlet shit now. If there's something menial, I need to be in control of it.

the point is that there are resources that are handled automatically and not manually

Assumptions which most processors have adapted to serve, curiously enough. 64-bit ARM's limitation of conditional execution to just a small subset of instructions was due to the fact that C compilers couldn't make very good use of the facility in general.

>C
>multithreading

C
Multithreading

Someone asked for a linked list in Rust?
ghostbin.com/paste/sokqw

...

How do I use that to make a circular list?

If only Rust had HKTs, then you could have used a fix encoding

or Free

saved

> 64-bit ARM's limitation of conditional execution to just a small subset of instructions
For, well, the history of computers, conditional execution has been limited to a small number of instructions. Conditional execution is a bitch to engineer.

Unless you mean something that I'm not thinking of. I'm thinking of shit like jne. But even then, conditional "add/sub/mov" instructions seem completely useless and too bloated even for CISC.

How do you manage memory in a circular linked list without a GC?

Yes. Also, I was thinking it'd be nice to abstract over the Box and be able to swap in [A]Rc instead, but that would require HKTs. I think the reason they won't add HKTs is that they wouldn't work very well with iterators

I thought it was a problem with the lifetime system

>60 lines for a linked list
why do you people use this?

data List a = Null | Cons a (List a)

>How do you manage memory in a circular linked list without a GC?
By just managing your shit properly. You know where the head is.
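
In C terms, something like this (a minimal sketch; the node type is invented for illustration): walk from the node after the head until you come back around to it, then free the head last.

#include <stdlib.h>

struct node { int val; struct node *next; };

void free_circular(struct node *head) {
    if (!head) return;
    struct node *cur = head->next;
    while (cur != head) {             /* stop once we loop back to head */
        struct node *next = cur->next;
        free(cur);
        cur = next;
    }
    free(head);                       /* freed last so the loop test stays valid */
}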

...

I'm asking, in general, how you would manage memory in a circular linked list. Once we answer that, we can think about how to implement it in Rust.
Rust doesn't magically solve all the problems a GC solves; the lifetime system does enforce constraints on how you structure your data.

LOL what a shit answer.

60 lines for a linked list that manages its memory without a GC, plus a pretty printer and conversion from arrays into linked lists. I don't think that's horrible, especially for ugly Algol-family languages

>LOL
Fuck off.

I've tried doing SICP but it really screws with me at certain parts. Is it actually a good learning tool? I feel like I keep hitting roadblocks (I tried doing it when I was 17 the first time, and now again a few years later).

Oh, the original 32-bit ARM ISA had conditional execution on most instructions, which is really handy when writing constant-time algorithms such as hardware control and crypto, or when skipping relatively short strings of instructions such as in range-checking or non-constant-time counted comparisons. Imagine addne, cmpne, ldne, stne, and so on. Great for assembler programming.
It's not really that bloated, especially on a scalar 3-operand machine, and doesn't add that much complexity to the pipeline. As I understand it, the condition comparison just rides along the pipeline with everything else and the write-back stage just doesn't bother to write back if the condition doesn't match. It doesn't play nice with out-of-order or multiple execution, so it had to go.
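
As a rough illustration in C: a short conditional like the clamp below is the classic case, since instead of branching, a 32-bit ARM compiler can emit a cmp followed by conditionally-executed instructions for each arm, keeping the whole thing branch-free and constant-time.

int clamp(int x, int lo, int hi)
{
    if (x < lo) x = lo; /* e.g. cmp + movlt on 32-bit ARM */
    if (x > hi) x = hi; /* e.g. cmp + movgt */
    return x;
}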

"Manual memory management is only free if your time has no value"

SICP is literally just for learning a Lisp.

cs.brown.edu/~sk/Publications/Papers/Published/fffk-htdp-vs-sicp-journal/paper.pdf

you're not the only one
what sucks is I wanna code Scheme and Common Lisp and basically all resources assume you've read SICP
the Guile handbook literally discusses extremely basic Scheme, has a page of recommended reading with SICP, a Racket book, and the fucking language docs in it, and then discusses the language's syntax and use no further

I'm planning to learn C++ (for now I'm working as a C# web monkey and know C).

Am I fucked? I know people say C++ sucks and stuff, but I feel uncomfortable not knowing it at all. I love unmanaged/systems programming, but C is a bit too hard sometimes without ready-made solutions in the standard library.

>discusses no further the languages syntax and use
Maybe because there is no further syntax. Scheme is very small.

C++ isn't really that bad

>my other car is a cdr

>Oh, the original 32-bit ARM ISA had conditional execution on most instructions
God that sounds awful to design with condition prediction desu.

Every instruction potentially causing a misprediction and dumping the pipeline.

Thinking about the sheer magnitude of such a decoder makes me scared.

Or the number of caches necessary for all the different ways execution could be conditional
>branch prediction cache
>immediate prediction cache
>load prediction cache
Wait a minute...

>addne, cmpne, ldne, stne
How is this different from
>jne skp
>add eax, ebx
>skp:
That seems like a waste of transistors desu.

Just RISC things I guess

Seems like the opposite of what I believed risc was. Just bloat.

Isn't it a strong yet convoluted basis for programming, though? It's just that everything is explained so poorly; even for normal- vs applicative-order evaluation I had to use other resources (the SICP wiki, the MIT page, and Stack Exchange).

Interesting. Is HtDP a common method (the "best" one, I suppose)?

>that's bloat
Then use nothing but binary, you picky faggot.

No, it's quite literally the actual definition of a "meme book". It's not a general learning tool; it's a tool for Lisp, because close to all the lessons are focused around Lisp implementations and explanations of the language.

Do not fall for it unless you plan to learn a flavor.

HtDP is used successfully in a few universities. The biggest problem with it IMO is that it's set up to be the idea first semester university CS course, which means that it assumes you're going to enter into more courses afterwards and learn better things. It won't teach you to be an effective programmer on its own but it will set you in the right direction better than most other resources.

God damn, you're fucking stupid.

This works, but is it undefined behavior since x is 8 bits and I'm shifting by 16, or does it get promoted or cast since it's being OR'd with a 16-bit value? If it's undefined behavior, should I just make x a uint16_t as well? I'm trying to marshal data before sending it over a socket.

uint8_t x = 3;
uint16_t y = 15;
void *z = malloc(24);
*(uint32_t*)z = (x << 16) | y;
uint8_t a = *(uint32_t*)z >> 16; // a == x
uint16_t b = *(uint16_t*)z; // b == y

The point of predicated execution is that you can have multiple shallow branches in the pipeline at once.

So if I have a simple if/else clause, the branch predictor need not bother, both get sent on, and which takes effect is decided in the retirement step.

>condition prediction
kek
Thankfully, these original ARMs had short pipelines, maybe seven stages at most. I can imagine forwarding logic being a little convoluted though.
>How is this different
I presume you mean jz skp. Anyway, in the ARM case you don't have to dump the pipeline in case the instruction doesn't execute, you just lose the write-back after the execution stage. Constant-time whether the add actually stores anything or not.

*ideal

It's fine, because of integer promotion.
Those casts of z are dodgy as fuck, though.
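
To spell the promotion out (a small sketch): in x << 16, x is promoted to int before the shift, so shifting an 8-bit value by 16 is well-defined as long as int is wider than 16 bits.

#include <stdint.h>

int main(void) {
    uint8_t x = 3;
    /* x is promoted to int (32 bits on any common platform) before the
       shift, so this is well-defined; on a platform with 16-bit int it
       would be UB, which is why the paranoid write (uint32_t)x << 16 */
    uint32_t v = (x << 16) | 15; /* v == 0x3000F */
    (void)v;
    return 0;
}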

Predication is a separate mechanism from branch prediction. Short branches can be made dependent on a predication register instead of branching; the branch predictor is not invoked, and the front end sends the code of both predicated paths on. The decision about which takes effect is made on the back end.

Look at Itanium for a more complete example of the idea. (iirc ARM only allows a few instructions to be predicated)

Any idea how I would be able to unpack the values in a better way on the client side? I'm not very familiar with this kind of thing yet. The idea of the casts on z was just so that I could easily select a subrange of the bits: I want to retrieve the leftmost 8 bits for a and chop off the bits that don't fit (first 16 only) for b. In a higher-level language I think it would look something like this
a := byteArray[:8]
b := byteArray[8:]
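
One portable way (just a sketch, with big-endian byte order picked arbitrarily for the wire format): pack and unpack byte by byte with shifts, so you never depend on pointer casts or the host's endianness.

#include <stdint.h>

/* pack: the 8-bit value in the first byte, the 16-bit value big-endian in the next two */
void pack(uint8_t buf[3], uint8_t x, uint16_t y) {
    buf[0] = x;
    buf[1] = y >> 8;
    buf[2] = y & 0xFF;
}

/* unpack on the other side */
void unpack(const uint8_t buf[3], uint8_t *x, uint16_t *y) {
    *x = buf[0];
    *y = ((uint16_t)buf[1] << 8) | buf[2];
}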

You wanted less bloat. So, if that simple shit is "bloat" use pure machine code. Can't get less bloated than that. What a mouthbreathing retard.

jnz and jne are the same thing though.

Why not put NOPs after JMPs, or any other shit that you would want to run at the same time, instead of prediction? Didn't SPARCs do that?

I'm not sure what you mean. Out of order execution? CPUs do that but it can only go so far.

Nowadays the latency of instruction fetches is crazy high, so you wanna know where to fetch instructions from before the jump is even executed. Not trying to predict where a jump goes and whether the program will take it is just pain. You don't wanna stall waiting for new instructions.

If I recall correctly, there are some weird architectures where a jump instruction won't happen until AFTER the next instruction.
I think MIPS was one.

Checked.

I know. I said jz, not jnz, since we were using ne for an example.

Oh, I thought you were talking about something like a branch prediction cache where the last branch taken state is cached and assumed the most likely result of the next time that instruction is executed, thus steering fetch and decode stages. ARM cores are far too smol and simple for such extravagance.
If I understand correctly, ARM appears to have used predication, with the condition code register and instruction predicate fixed as the predication register, and only one thread of execution instead of two or three, so it's much simpler.

a) it turned out the branch delay slot didn't get meaningfully filled all that often
b) the complexity of out-of-order execution interacting with delayed branches stunted performance improvements through architecture

Called delay slots. I think MIPS had it, indeed.

en.wikipedia.org/wiki/Delay_slot

>traps are not programming related
WRONG

Hmm Itanium seems neat. What exactly was wrong with it that caused it to be such a commercial failure though?

It wasn't x86.
Jew tax.
Perceived difficulty to write compilers/programs for.

>>manual memory
>>*creates file*
Wait, you don't manually allocate your files using fallocate/ftruncate?

learn to use unions and bitfields

Hol up so you be sayin

>What are you working on, Sup Forums?

Exploring the deepest, darkest corners of Racket.

>traps are not programming related

pic.

The idea was that the compiler would do a lot more optimisation of instruction scheduling, so that the parallel execution logic could be simplified.

It turns out nobody could write a compiler that was all that good at optimising for the architecture's features, so it was never that fast in real world situations.

I kind of want one to play with manually optimising stuff for it, though.

Utterly retarded fucking idiot.
Put a bullet through your head.

What are you actually doing in Racket?

learning the basics

Some kinds of traps are programming related. Others are not. Where the word "trap" is used to refer to crossdressers and trans women, it is not programming related. Similarly, trap cards in Yugioh and MtG are not programming related, unless of course the MtG trap cards are somehow used to create a Turing machine. Where the word trap is used to refer to a form of exception that causes a switch to kernel mode, it is programming related.

If I use a union I'd only be able to store 1 value, not 2, but I'd be able to store 1 of 2 types of values, is that right? I'm trying to store 2 separate values in 24 bits.
As for bitfields, wouldn't the padding mess me up?

I'm not familiar with either so excuse my ignorance, if you could give me an example it might help me understand.

>If I use a union I'd only be able to store 1 value, not 2

union foo {
    int32_t i32;
    struct {          // anonymous struct: C11, also a common compiler extension in C++
        int16_t i16t;
        int16_t i16l;
    };
};

foo f = {.i32 = 0xAAAAFFFF};
cout << hex << f.i16t << " " << f.i16l << endl; // inspect the two 16-bit halves

Itanic wasn't helped much by AMD bringing x86_64 and the option of why bother to the table.

I need to get a good FPGA board and brush up on my Verilog skills. I made a shitty little 4-stage CPU years ago but it never kissed any silicon, and it might be fun to rectify that.

Nice. I made a shitty CPU in logisim, also. Might be nice to see it physically working.

cisc: raises the complexity at the hardware level (the CPU) but lowers it at the software level (compilers)
risc: the inverse of cisc
Itanium: raises the complexity at both levels for lower performance.

no wonder itanium was doomed, it's pure crap and just another attempt from intel to lock the cpu market for themselves.

>"Manual memory management is only free if your time has no value"
*only if you're retarded.

>Be engineer
>Have to take a statistics unit
>Making us learn R

is R a good language worth learning with diligence?

Thanks, I'm gonna experiment a bit with this.

Well, Itanium required the compiler to form instructions into groups, and could do away with some of the traditional instruction scheduling logic on the chip in tradeoff.

There just weren't compilers that could optimise well enough for it to make it pay off. And the onboard x86 emulation was awfully slow.

If you ever need to do statistics and draw fancy graphs, it's good to know R. You could also just use Python.

Why did people stop bolting on features to shells?
How is it possible that bash has /dev/tcp and arrays but no floating point numbers?

you can also use arrays
union foo {
    int32_t i32;
    struct {
        int16_t i16t;
        int16_t i16l;
    };
    int8_t bytes[4];
};
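
Usage would be something like this (a sketch; which byte comes first depends on the machine's endianness):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    union foo f;
    f.i32 = 0xAAAAFFFF;
    /* on a little-endian machine bytes[0] is the least significant byte */
    printf("%02x\n", (uint8_t)f.bytes[0]); /* prints ff on little-endian */
    return 0;
}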

Because at some point, people realised that there is some point of complexity where you should stop using a shell scripting language and move to a "real" programming language.
Just look at how dead Perl currently is.

Funny that you should bring this up today. I just grabbed pocorgtfo issue 0x15 released mid-June and it promises to be another goodie for those of us who love weird machines.
>Ryan Speers and Travis Goodspeed have been toying around with MIPS anti-emulation techniques, which this journal last covered in PoC||GTFO 6:6 by Craig Heffner. This new technique, found on page 76, involves abusing the real behavior of a branch-delay slot, which is a bit more complicated than what you might remember from your Hennessy and Patterson textbook.

The grand experiment of that CPU and ISA (16-bit instruction word, 16/32-bit registers) was a 12-bit constant extension prefix instruction that would prepend 12 bits to any instruction's immediate or displacement operand, which was sometimes very small if not implicitly zero. I got it to timing closure at almost 66MHz in a Spartan-3, iirc.

Why carry around all that shell baggage when you could do most of what they and their ancillary programs did in code, and system() out when necessary?
>also, try sleep 0.5

Because we no longer need fucking shell shit.

Is it a good programming language? No, not really. Is it a good domain specific language for statistics? Absolutely. If you plan on doing a lot of work in statistics, you should get to know R quite well.

I'm making Half-Life: Odyssey's End

Says the dude who finds 3 fucking variables "omg so bloated." There are real bloated languages out there and you call a lightweight one bloated. What next, HTML is too complicated? You're either extremely lazy or such a retard that it truly shocks me that you can even form words.

It's 26 lines, the rest is just a test case and some extra traits. Stop being a dumb CIA nigger.

Because shell languages are not designed for these sorts of tasks. They are for automating processes, and maybe a little bit of text manipulation. If you need to do complex arithmetic, it might be better to use a real scripting language, like Perl, Python, or Ruby.

If you absolutely need this sort of functionality in a shell language... you may be interested in PowerShell, which is now cross-platform and free as in freedom.

Stop pushing your shitty POOshell.

Someone could repackage all of those "programs" as functions for a Haskell or OCaml-like language

Cool, thanks. It just sounds like a chore because I'm already busy advancing with Android and Python.

I can't figure out how to do this in a way that results in a 24-bit union; the struct inside must be getting padded to 4 bytes, no matter which member order I use.

union {
    struct {
        uint8_t header;
        uint16_t payload;
    };
    unsigned int bytes : 3;
} test;
// sizeof test == 4


Is there any standard way for me to make sure it's exactly 24 bits?

What compiler are you using? With GCC you can pack the inner struct. Note I've also made the byte view an actual 3-byte array: unsigned int bytes : 3 is a 3-bit bitfield, and its unsigned int storage unit alone forces the union up to 4 bytes.

union {
    struct __attribute__((packed)) {
        uint8_t header;
        uint16_t payload;
    };
    uint8_t bytes[3];
} test;
// sizeof test == 3

I'm learning Rust and got a segfault. ;/
I thought Rust was supposed to stop these kinds of things.

There is no standard way; padding is compiler-dependent.

gcc.gnu.org/onlinedocs/gcc-4.4.4/gcc/Structure_002dPacking-Pragmas.html
Try this (push, 1 and pop); it should work on GCC and MSVC.
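
Roughly like this (a sketch following that page; "wire" is a made-up name and the members are taken from the posts above):

#pragma pack(push, 1) /* no padding inside this region */
union wire {
    struct {
        uint8_t header;
        uint16_t payload;
    };
    uint8_t bytes[3];
};
#pragma pack(pop)
/* sizeof(union wire) == 3 on GCC and MSVC */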

no pl will be enough to overcome programmer idiocy

Dang, I was hoping there would be a standard/portable way to pack it. If it's compiler-dependent, I may just stick with my current method of mangling it all into a fixed-size variable with bitwise operators before sending it over the network. I can still use the union like that everywhere else, though, just not during send.