/dpt/ - Daily Programming Thread

old thread: What are you working on, Sup Forums?

Other urls found in this thread:

en.wikipedia.org/wiki/Memory_timings
en.wikipedia.org/wiki/Memory_refresh
en.wikipedia.org/wiki/Static_random-access_memory
fna-xna.github.io/
amazon.com/Bebop-Boolean-Boogie-Third-Unconventional/dp/1856175073
lwn.net/Articles/250967/
ilikebigbits.com/blog/2014/4/21/the-myth-of-ram-part-i
en.wikipedia.org/wiki/Go_(programming_language)#Projects_using_Go
boards.Sup
en.wikipedia.org/wiki/Write_combining
thepiratebay.org/torrent/9394908/C_Primer_Plus_6th_Edition_Dec_2013
twitter.com/AnonBabble

So basically I want to become a GNOME developer (I have a very precise target to accomplish), so I'm thinking of starting my programming with C.

I do know a bit of Python, but that's irrelevant.

Any guidelines?

learning XNA/monogame

No, they aren't.
Not at all.

why would you attempt to do stateful programming in a stateless language?

It isn't necessarily stateful; these work for any monad or applicative

is that game dev anime worth watching?

Would the easiest way to reverse a queue be to use a stack?

array

>anime
>being worth watching

This. All anime is worth watching.

...

Is that the legendary 3D chess?

>What are you working on, Sup Forums?
9D chess

Is that like 1-D chess?

How many time dimensions does your chess game have?

-2.
It has 2 time dimensions, but the time in both of them flows backwards.

I'm working on preserving an old Google App Engine project which uses a hilarious amount of deprecated APIs.

if (op.image.IsAnime() == true)
    post("Fuck you for watching anime you weeaboo fuck.\n");

How is the IsAnime method implemented?

Trained neural network specially designed to weed out degeneracy

So it's literally a digital Hitler.

> == true

Google image search. If anything referencing Asia comes up, return true.

Is random access REALLY constant time?

When the electrons hit the metal, wouldn't "finding" address 0xFFFFFFFFFFFFFFFF take longer than finding address 0x0000000000000000?

You calculate an address, but it still has to GET there. How does it GET there?

I'm imagining RAM being a train station with 2^64 different platforms. You still have to walk all the way over to the last platform even if you know it instantly.

Is anyone ever figuratively Hitler?

Suck my dick brah.

Or is it like a tree structure where it starts searching at the least/most significant bit... which makes the access time of 32-bit memory < 64-bit < 128-bit etc. because of the extra steps it has to go through.

Hi Sup Forums, I come to you in my moment of need. I am currently trying to recreate a common API standard with Flask and Python, but I can't seem to get hold of my request data.

The API client I am trying to serve sends POST requests in JSON under the application/x-www-form-urlencoded content type, and I can't access the data from request.form in any meaningful manner. I would ideally like to access the JSON sent as a form field name as a Python dict so I can easily work with it.

e.g.
if request.headers['Content-Type'] == 'application/json':
    apiRequest = request.json
    current_app.logger.warning(apiRequest['sid'])  # works
elif request.headers['Content-Type'] == 'application/x-www-form-urlencoded':
    apiRequest = request.form.to_dict()
    current_app.logger.warning(apiRequest.iterkeys().next())
    current_app.logger.warning(apiRequest.iterkeys().next()['sid'])  # FAIL
    # converting ^ into a dict also fails with this undecipherable error:
    # ValueError: dictionary update sequence element #0 has length 1; 2 is required

To clarify: I can easily extract and parse the JSON sent under the correct content type, but when the client sends it under the wrong content type (which I have no control over) I can't access it like a dict.

Just use a double-ended queue and you will never have to worry about this question again.

So you start from a checkmate and have to work your way backwards and sideways until all of the pieces have been put back on the board?

/dpt/-chan, daisuki~

>Is random access REALLY constant time?
No. Mainstream DRAM needs to refresh each array line every so often; on-chip cache memory doesn't have that problem, since every bit cell has its own powered circuit (known as SRAM).

en.wikipedia.org/wiki/Memory_timings
en.wikipedia.org/wiki/Memory_refresh
en.wikipedia.org/wiki/Static_random-access_memory

>You calculate an address, but it still has to GET there. How does it GET there?
A memory module arranges memory like this:

Banks -> Arrays -> Words -> Cells (where one cell = one bit)

The address decoder selects the correct bank, array, and word (usually, a word is the smallest addressable unit).
The time needed to reach a word is the same for every word. (pic)

Yes

fna-xna.github.io/

Thank you for using an anime image!!1!

learning lisp was a mistake

which textbook is that

kek

if request.headers['Content-Type'] == 'application/json':
    apiRequest = request.json
    current_app.logger.warning(apiRequest['sid'])
elif request.headers['Content-Type'] == 'application/x-www-form-urlencoded':
    # get_json(force=True) parses the body as JSON regardless of the content type
    apiRequest = request.get_json(force=True)
    current_app.logger.warning(str(apiRequest['sid']))

btw it's coming along swimmingly. i got login working, i got getcategories working, i got getfeeds working. just tested it with the client. works brilliantly. few more functions and i'm nearly home

amazon.com/Bebop-Boolean-Boogie-Third-Unconventional/dp/1856175073

Not him, but if anyone wants to know more about memory in general:
>lwn.net/Articles/250967/

It's O(sqrt(N))
ilikebigbits.com/blog/2014/4/21/the-myth-of-ram-part-i

Making a new game, a Dwarf Fortress clone. Since I am just in the starting-out stages, which language should I choose? I am leaning towards C++, Rust or Go. Avoiding the FPS death that Dwarf Fortress has is one of my big priorities. The bigger priority is what will help me out further down the line. If Rust for some reason goes belly up like Ruby did, there isn't much reason to learn it.

>what will help me out further down the line
C++

>Actually telling someone to learn C++

For game development you want something that is reasonably fast and object-oriented.

I tried to do it once in C, but truth be told, C++ is just the better language for those things.

So I have to ask why not Rust then?

It's obvious that user is concerned about languages that are going to be used most often in the industry (otherwise, what would it matter if hipsters stop using it). Between C++, Rust, and Go, C++ both has the greatest numerical dominance at the moment, and will likely maintain the greatest numerical dominance in the near to distant future (at least between these three languages. There is an overall shift away from systems languages for many tasks). I don't foresee Rust or Go getting much use for a while except from Mozilla and Google respectively.

>Avoiding FPS death that dwarf fortress has is one of my big priorities
What makes you think choosing a correct language will make you avoid performance issues in this case?

>will likely maintain the greatest numerical dominance in the near to distant future
We'll see in a year or two.

>Avoiding FPS death that dwarf fortress
this has much more to do with the model than the language, i think

>belly up like Ruby did
explain

Well given that I haven't seen too many cases of Go or Rust being used in industry outside of their parent companies (although I've seen plenty of hipster side projects), I am doubtful. Still, it is an industry known for wild changes and random fads. Anything could happen.

Because it's a retarded language to begin with. There has been enough ire flung at Rust to make it seem not quite ready for such things.

Not him, but a lot of languages just love to allocate their memory from the heap, as if it's the only true kind of memory. Even for really small allocations they don't use alloca or VLAs; they just internally call malloc/free, and that's that.

That does not sound bad, but:
- once you deal with multiple threads, the necessary locking in a global userspace list can cause unnecessary slowdowns.
- it quickly starts to scale badly.

I remember debugging an application of mine where a kernel function allocated memory from the heap just to convert a small multibyte string into a wide string. Completely unnecessary. But a lot of programmers just don't give a fuck and would rather wonder whether ++i or i++ is faster.

If dat guy wanted to time ram access, he should have mapped the memory as uncached.
Does a cache miss on today's Intel/AMD CPUs cost more than an uncached access?

What's your precise target?

Yes. It depends on the cache level (first level, second level, third level), but the minimum cost of such accesses is 2 to 3 cycles (not sure about the maximum though). Uncached, we are talking about 200 cycles. A cache miss additionally causes the current memory line to be loaded into the cache, which evicts the oldest cache line.

And this is not really going to get better in the future. In the past, consoles like the SNES took 6 to 12 cycles per memory access (which was also the reason why, despite having an input clock of ~21 MHz, the actual max speed was more like ~3.5 MHz). CPUs have simply become faster at a much higher rate than memory has.

i think go has done okay outside of google. docker is a good advertisement
en.wikipedia.org/wiki/Go_(programming_language)#Projects_using_Go

part of mozilla's problem is that they don't have anything to show off. servo is a long way out

>they don't use alloca or VLAs
Like walking on an icy bridge across a lake of lava. Dynamic memory allocation and the stack don't quite mix. That said, it is nonetheless wise to use the stack wherever appropriate, and when using the heap, it should always be an option to store an entire instance of an object inside of another, rather than only being able to store a pointer. This allows object members to be accessed in linear address order, and decreases the number of times memory needs to be allocated or freed.

>Like walking on an icy bridge across a lake of lava. Dynamic memory allocation and the stack don't quite mix.

Well, the most important compilers (VS and GCC) have support for that. And it's not that hard writing your own small function that reverts the stack in case you need to allocate memory in a loop. I gotta know; I did it before.

(Doesn't work on Windows x64 targets though; VS does not support inline assembly, and I don't know how to tell MASM to create a naked function).

And even in the olden days you could increase the stack size of your application if it wasn't enough. They say that you shouldn't do too much on the stack, and while that's still true and valid, I feel like this has led to an approach where everyone is afraid of using the stack at all, when a stack allocation and reverting it cost maybe 50 cycles, while malloc/free can easily cost more than 1000.

Or shorter: everything in reasonable amounts.

>Well, the most important compilers (VS and GCC) have support for that.
The problem is not one of support. The problem is that dynamic stack allocation has the potential for stack overflow. If you know ahead of time an upper bound on how much memory you need, you can simply allocate the maximum amount that you will need. If you don't, you place your code at risk.

>And it's not that hard writing your own small function that reverts the stack in case you need to allocate memory in a loop. I gotta know; I did it before.
And then you are just overwriting the stack memory you just allocated. In this case, you clearly do not need to have a dynamic allocation function at all.

>(Doesn't work on Windows x64 targets though; VS does not support inline assembly, and I don't know how to tell MASM to create a naked function).
As someone who exclusively uses GCC to program on Windows x64 with very few exceptions*, I am laughing at you.

>And even in the olden days you could increase the stack size of your application if it wasn't enough. They say that you shouldn't do too much on the stack, and while that's still true and valid, I feel like this has led to an approach where everyone is afraid of using the stack at all, when a stack allocation and reverting it cost maybe 50 cycles, while malloc/free can easily cost more than 1000.
Here, we get into the realm of being more platform specific, and also, running into a particularly odd problem: the stack tends to grow down on commonly used platforms, not up. At some point you run into the problem where virtual address 0 is supposed to be invalid for safety's sake, and thus, you cannot grow the stack forever as you can with the heap.

>Or shorter: everything in reasonable amounts.
If the amount is reasonable, can you give it an upper bound? If you can, you do not need dynamic allocation. If you can't, then how is it reasonable to use the stack?

Forgot that asterisk I put in there.

*My one exception is my current use of Visual Studio for use with the Windows Driver Framework for a research project. But when writing Windows Drivers, you really ought not to do inline assembly anyways.

rate my Sup Forums image scraper :))

curl -s 'boards.Sup Forums.org/g/thread/57291269' | tr -d '\n' | grep -o 'href="//i.4cdn.org/[a-zA-Z0-9]*/[0-9]*.[^\"]*" target=' | sed 's/^.*\/\/\([^\"]*\).*$/\http\:\/\/\1/g' | sed -n 'g;n;p' | xargs wget -q

but a cache miss is followed by a memory fetch from the RAM; it should require more cycles than a straight uncached access, no?

I don't know why you've always been against VLAs. I guess you're just a filthy sepplesfag who tries to come up with excuses for why features that C has but C++ doesn't are bad.
Are you against recursion because it MIGHT cause a stack overflow? Even if it's infeasible? (e.g. stack overflow on a binary tree).
VLAs, like any other feature, can be used dangerously, but under situations where you can reason that it isn't going to be fucking massive, they're safe to use.

>If you know ahead of time an upper bound on how much memory you need, you can simply allocate the maximum amount that you will need.

>what is recursion?

Your approach would more effectively overflow the stack once we are in a recursive function that needs to store a certain amount of data on the stack that cannot exceed a certain size (we made sure of that). Let's say that certain size is 16 KiB. With your approach each call requires 16 KiB + overhead.

Now we use alloca/VLAs, and now we only use the amount of memory that we actually need + overhead.

Also, there is another problem with "just allocating an array" - a speed problem. Assuming you only need small amounts of memory, but in two separate arrays, a fixed array size might cause at least one cache eviction if both fixed arrays do not fit in one cache line. With alloca we at least have the chance to avoid that - even if one cache line is not that much, it can still matter. I once wrote a partition scanner that was supposed to read lost data from an NTFS partition. When deciding on a buffer size, I chose one that wouldn't constantly evict my cache, rather than one that saved kernel calls and saturated the pipeline. And then I benchmarked it against the same problem, but this time with a huge buffer and very few syscalls.

My first approach - the one that honored the cache size - was measurably faster.

>As someone who exclusively uses GCC to program on Windows x64 with very few exceptions*, I am laughing at you.

The last time I checked GCC for Windows the binary images were bloated as hell. Isn't this the case anymore?

>Here, we get into the realm of being more platform specific

That's why I said: everything in reasonable amounts. I am not saying we need to store 1 MiB on the stack; that's ludicrous. But 60 KiB I'd rather have on the stack than on the heap.

>If you can't, then how is it reasonable to use the stack?

I already addressed that one.

rm -rf /

Watch out.

>I guess you're just a filthy sepplesfag who tries to come up with excuses for why features that C has but C++ doesn't are bad.
To be honest, I enjoy both C and C++. But even when using C, I don't use VLAs, because they're a vulnerability waiting to happen. It has nothing to do with language features.

>Are you against recursion because it MIGHT cause a stack overflow?
>Even if it's infeasible? (e.g. stack overflow on a binary tree).
In most cases, recursion can either be reduced to a jump statement by the compiler (tail recursion) or has a clear upper bound on the amount of stack space used (walking a balanced binary tree can never use more stack frames than the number of bits in an address space). In cases where there is no defined upper bound on the amount of stack memory used for recursion, one should not use recursion, and instead use iteration with a heap-based stack.

>VLAs, like any other feature, can be used dangerously, but under situations where you can reason that it isn't going to be fucking massive, they're safe to use.
If you can reason that it isn't going to be fucking massive, you can reason out an upper bound. Whether it's a 4K block or a 2M block of stack space, you can say, "this is the absolute maximum amount of RAM I could possibly use for this use case, and I can prove that it will never go higher." In this situation, VLAs are useless, and you would be better off sparing yourself a register for some other local variable. You just need to allocate a big enough, constant-sized block of memory on the stack.

If you CAN'T give it an upper bound, and dynamic allocation is necessary, then there is no possible way by which you can claim VLAs are safe, because there will always be a valid input to the program which will cause it to either crash or corrupt memory.

It depends. The problem is that this question is hardware-specific, so I'll answer it from an x64 perspective.

Because there are two things that can determine how a CPU knows that a memory access is going to be uncached:

- MTRRs (Memory Type Range Registers)
- the PAT (Page Attribute Table), applied via page table entries

The first mechanism is somewhat deprecated because the PAT is more flexible. MTRRs are registers inside the CPU that determine in what manner accesses to given physical ranges are cached. For example, if you write to a PCIe device, and the range is not one of control registers, then you want to write-combine everything. For normal everyday programming you want things to be written back.

en.wikipedia.org/wiki/Write_combining

To come back to your question: the MMU, the Memory Management Unit, needs to know not only the mappings, but also the way the memory needs to be accessed. That has to be determined even before the memory access is done - because, well, CPU caches. If a range is uncached, it needs to be fetched directly.

The MMU can determine: Yup, it's uncached, so fetch it directly. Or it can say: Nope, it's cached, use the value in the register/CPU cache. And then the cache is searched for the value, and (since you were asking what is faster, uncached or cache miss) it will not find it and issue a memory read.

To answer your question - based on this model, and from my own, probably incomplete, knowledge of these things, I'd say that a cache miss is slightly slower than a direct uncached access, because it might waste a few cycles. And we are just talking about a few cycles here.

>a few cycles

lookup in l1
lookup in l2
lookup in l3
miss
instruction paused
fetch from ram
fill cache
look up
resume instruction

vs

fetch from ram

i'll help you out:

the biggest fps hog in DF is the pathfinding (it's 3d, too). here's what you need to do to avoid this:

- run each pathfinding computation on a different thread
- cache the previous pathfinds and only compute the different parts

for the first step, rust will be massively helpful in telling you what you can or can't share between threads. it simply won't compile until it works

>Your approach would more effectively overflow the stack once we are in a recursive function that needs to store a certain amount of data on the stack that cannot exceed a certain size (we made sure of that). Let's say that certain size is 16 KiB. With your approach each call requires 16 KiB + overhead

If function B is non-tail recursive and needs N bytes of stack memory after M calls deep, allocate all of this memory in a single buffer in A, and have A call B with a pointer to that giant stack array.

>The last time I checked GCC for Windows the binary images were bloated as hell. Isn't this the case anymore?

This is somewhat the case still, particularly for C++, since MinGW has to package its own libstdc++. C programs are reasonably fine in size.

how to learn code

become trap

Did you honestly believe you'd get a different response than
>install gentoo
?

Here you go kid, knock yourself out:
thepiratebay.org/torrent/9394908/C_Primer_Plus_6th_Edition_Dec_2013

start with code.org

That's a dumb idea. Start with an actual programming language. Get a book or a tutorial, an editor, and a compiler, and start doing the example programs and figuring out how they work.

I've got a tiny website up and running where I want to write experimental web apps. I was thinking of making a simple pixel-art animation. What's the best language to achieve this in?

JavaScript. How the fuck else are you going to animate shit in a browser? Flash is dead.

first you advise a book intended to teach C, not programming, and now you dismiss scratch as a programming language? k tard

>In most cases, recursion can either be reduced to a jump statement by the compiler (tail recursion)
TCO is not guaranteed. What if you're using a compiler without that optimisation, or with optimisations turned off?
>If you can reason that it isn't going to be fucking massive
It's more of a case of "I don't expect people to use my program for 4 billion elements". If you want to actually write a program that is designed to work with large amounts of data, you would design it a lot differently than normal, and goes a lot deeper than "VLA vs malloc".
>you would be better off sparing yourself a register for some other local variable
Registers are plentiful. Nobody is going to write code using 20 VLAs simultaneously.
Also, register allocation is hardly something the programmer should be worrying about, as there are all sorts of unpredictable shit (to a mere mortal) that a compiler will optimise out, and you won't have a clue about how the compiler chose to use the registers.

Even then, you don't seem to be aware of the other things that VLAs add to C. It actually does quite a bit to the type system.
void my_fn(size_t n, int array[static n])
{
    /* array points to an array of at least n elements and is non-null;
       otherwise it's a constraint violation */
}
Although that's a somewhat obscure feature, and can apparently lead to more efficient optimisations.
I'm unaware if compilers currently make use of this, though.

Alright mate, no need to get feisty. I'm a relatively new programmer and have only dipped my toes into web dev

>no need to get feisty
This is Sup Forums, idiot. It would be weirder if his reply wasn't aggressive.

var elements = document.getElementsByClassName('postContainer');
var com = document.getElementsByName('com');
var str = com[0].value;

for (var i = 0; i < elements.length; i++) {
    var id = elements[i].getAttribute('id');
    // Quote every post whose last two id digits match (dubs).
    if (id[id.length - 1] == id[id.length - 2])
        str += '>>' + id.substring(2) + '\n';
}
str += 'checked :^)';
console.log(str);
com[0].value = str;


checked :^)

install gentoo

C is a programming language. If you learn C, you will be learning to program.

Scratch, while Turing complete, can hardly be considered a language, and fails to model what programmers will actually be working with in their future careers. You will be typing lines of code in an editor or an IDE, not dragging blocks around. So one might as well start with a language just like that. One could always start with JavaScript or Python, but I think it useful for students to learn the language which all of these tools are built upon, so that as they learn other languages, they can explore their interpreters and get a better understanding of how they tick.

This. I've been on this site since 2006, and you're all starting to become pansies.

Now if you excuse me, it's 5 in the fucking morning and I need sleep.

VLAs are 100% useless.
You cannot recover from errors, so you have to cap the size (or just know it will be fairly small), in which case you can just use a fixed-size array with that cap:
int a[4096];

why can't you allocate an array on the stack and return it from a function?

Because the stack is reverted to the caller's stack frame the moment the function returns.

And now stop asking dumb questions.

Because in C, arrays are not first class objects, and decay into pointers in most situations. When you return an array, you just return a pointer to the start, and as soon as you leave the function, that array becomes invalid.
It's just a design decision in C, that made it easier to implement and not have a bunch of copying.

thanks

You can in good languages.

Write a Boolean expression that checks whether a given integer is
divisible by both 5 and 7, without a remainder.

using static System.Console;  // needed for the bare Write/WriteLine/ReadLine calls

static void defineNum()
{
    byte userNum = getInput();
    testNum(userNum);
}

static byte getInput()
{
    Write("Enter a number between 0 and 255: ");
    byte val = Byte.Parse(ReadLine());
    return val;
}

static void testNum(byte x)
{
    WriteLine((x % 5 == 0 && x % 7 == 0) ?
        $"Your number {x} is evenly divisible by 5 and 7." :
        $"Your number {x} is not evenly divisible by 5 and 7.");
    goAgain();
}

static void goAgain()
{
    Write("Try another number? (Y/n) ");
    char ans = Char.Parse(ReadLine());

    if (ans == 'y' || ans == 'Y')
    {
        defineNum();
    }
    else
    {
        System.Environment.Exit(0);  // exit code 0: normal termination
    }
}

public static void Main(string[] args)
{
    defineNum();
}

What would be faster in C++:
1) Use std::map with std::string as key type and search the map for an object
2) Use std::vector and have std::string (which will be unique) as an object member then traverse the vector to find the object

1 is O(log n)
2 is O(n)

Though unless you have a particular reason not to, you should use std::unordered_map instead as its lookups are usually constant.

Measure it.
But if you don't need it sorted you should use unordered_map instead.

install emacs

I use vim, and I weigh a mere 62 kg (1.78 m tall).

cin >> true;

What are some features you'd like to see in programming languages that you don't usually see?

I'd like more powerful base types with consistent, well-defined operations (a Number type, a Graph type, a State Machine type, a Set type, for example), and a separation between semantic relationships (every circle is an ellipse) and syntactic relationships (the circle/ellipse problem in OO languages).

We use American units around here, skelly.

/*
 * Write an expression that checks, for a given integer, whether its third
 * digit (right to left) is 7.
 */
static decimal getInput()
{
    Write("Enter a number between 0 and 65535: ");
    decimal val = Decimal.Parse(ReadLine());
    return val;
}

public static void Main(string[] args)
{
    decimal userNum = getInput();
    WriteLine($"Your number is {userNum}");

    string userStr = userNum.ToString();
    // Guard against inputs shorter than three digits before indexing.
    string strNum = (userStr.Length >= 3) ?
        userStr[userStr.Length - 3].ToString() : "";

    WriteLine((strNum == "7") ?
        $"3rd digit from the right is {strNum}" :
        "3rd digit from the right is not 7");
}

So literally Haskell?