Old thread: What are you working on, Sup Forums?
/dpt/ - Daily Programming Thread
...
Learning category theory/type theory with purescript
Is there a Scheme-tan?
I will also accept a Lisp, but it's not ideal.
You should learn how it relates to logic as well
>You should learn how it relates to logic as well
That's redundant. Every type theory textbook has to cover that relation, and you can't even have type inference without proof theory.
Found a trivial challenge in a codeshare posted a little while earlier.
>Write a program that asks the user for a number n and prints the sum of the numbers 1 to n
>Modify the previous program such that only multiples of three or five are considered in the sum, e.g. 3, 5, 6, 9, 10, 12, 15 for n=17
A little bit of background knowledge in math should suggest that the answer to the first part of the problem is to simply calculate n * (n+1) / 2, and the problem is solved in O(1) time. For the second part, however, I found an interesting approach. The sum of all multiples of k in the range 1..n inclusive is calculated as follows:
1. Let p = n / k.
2. Let q = p * (p + 1) / 2.
3. Return q * k.
Essentially, we are dividing the summation of multiples by k to get a sum of 1..n/k, re-applying the sum formula, and then multiplying by k again. From this we create a function
static inline unsigned
sum_multiples(unsigned n, unsigned k)
{
    unsigned p = n / k;            /* how many multiples of k are <= n */
    unsigned q = p * (p + 1) / 2;  /* 1 + 2 + ... + p */
    return q * k;                  /* scale back up: k + 2k + ... + pk */
}
And then we just sum the multiples of 3 and of 5 and subtract the ones counted twice, i.e. their intersection (multiples of 15).
unsigned total = sum_multiples(n, 3);
total += sum_multiples(n, 5);
total -= sum_multiples(n, 15);
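Quick sanity check against the n=17 example from the prompt, worked by hand: sum_multiples(17, 3) = 3+6+9+12+15 = 45, sum_multiples(17, 5) = 5+10+15 = 30, sum_multiples(17, 15) = 15, and 45 + 30 - 15 = 60, which matches summing 3+5+6+9+10+12+15 directly.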
I am bored. The codeshare was here:
Someone make it alive again.
How about a multi-valued path config, much like LD_LIBRARY_PATH? You'd need to factor in a resolver, but a simple one shouldn't be too hard even in C. How many files are we talking about, and are there subdirectories that might need to be enumerated and unioned?
MY_CONFIG_PATH=/home/pajeet/project/faggot/client_configs/:/home/pajeet/project/faggot/configs/:/etc/configs faggotd --port=9000 ...
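A rough sketch of the resolver idea, in Haskell rather than C purely for illustration (resolveConfig is a made-up name, and MY_CONFIG_PATH is just the variable from the example above):
import System.Environment (lookupEnv)
import System.Directory (doesFileExist)
import System.FilePath (splitSearchPath, (</>))

-- look the file up in each directory listed in MY_CONFIG_PATH; first hit wins
resolveConfig :: FilePath -> IO (Maybe FilePath)
resolveConfig name = do
  dirs <- maybe [] splitSearchPath <$> lookupEnv "MY_CONFIG_PATH"
  go (map (</> name) dirs)
  where
    go []     = pure Nothing
    go (c:cs) = do
      exists <- doesFileExist c
      if exists then pure (Just c) else go cs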
Hm, I don't quite get the point. I hate it when I use a program and want to manually change some config files, but the files are not in the directory of the program. I just want to have something that does not need an installer - a simple directory you pass around. So everything, like the .so files and the config files, will be in the directory that the executable lies in.
How's your Haskell learning going, user?
Bit user, I'm learning to be a wizzard, not a mathutist
Otherwise pretty good
*but
Ofc I forgot the pic, I'm tired
Bretty gud. I got kinda used to reading Core and maybe am ready to delve into GHC's fuckhuge codebase.
On the other hand I'm still unemployed and these Haskell timesinking sessions obviously don't help but I just can't stop obsessing over it.
what? it's just interesting
You can't learn type theory without knowing about the Curry-Howard correspondence anyway, because every type theory book covers the topic at least at a high enough level to bullshit your friends that "hurr computer programs are constructive proofs of mathematical theorems", which is why the other post said it's redundant. And yes, you also can't learn type theory without logic and proof theory.
tl;dr just open a fucking textbook and stop bullshitting about things you don't actually understand.
what the fuck is your problem
i mean actually investigate these fucking things man, did you know that reversed composition is transitivity for (->) ? No? Pick up a textbook nigger
>transitivity
You mean transitive. And compositions in general are transitive relations. What's even your point?
You know, if you said something like ``continuation passing style is double negation'' or something I'd actually buy your bullshit man.
C# is the most versatile language.
Java >>>>>> C#
shit >>>>>>> Java
ACADEMIC QUALITY CLOSED TRANSITIVE RELATION
Transitivity is the state of being transitive; composition is a proof of transitivity. It isn't "transitive" because transitive is an adjective and not a noun.
Because your "transitivity for (->)" doesn't even make sense. So you mean, yeah, function composition is the proof that (->) is a transitive relation? What's the big deal when this is such a direct and basic result that it's usually given as a textbook (no pun intended) example of the programs-as-proofs concept?
What do you mean? It's interesting to work with this stuff and mess around with types.
id is reflexivity for (->)
(this and transitivity apply to all categories)
Reflexivity for Const would be (forall a. Const a a), which is isomorphic to (forall a. a), which is Void, so you can write these things in typecheckers; it's a proof that Const isn't reflexive.
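Spelled out as code, reading (a -> b) as "a implies b" (refl and trans are just names picked for the sketch):
-- reflexivity: every proposition implies itself
refl :: a -> a
refl = id

-- transitivity: if a implies b and b implies c, then a implies c
-- (this is just composition in the other order, i.e. flip (.))
trans :: (a -> b) -> (b -> c) -> (a -> c)
trans f g = g . f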
Eh, on second thought, I think I actually get your point now. I guess you read about this example in some blog, found it cool, and started wanting to know more about type theory. That's great I guess, already much better than the common uncurious, theory-dismissing """programmers""" in fact.
This is laughably trivial in Ruby
puts 1.upto(Integer(gets)).inject(:+)
How is Haskell better than Lisp or Scheme?
I did this sort of stuff when messing around writing type checkers, I was doing it for Iso because that seemed obvious but then I realised it could be generalised.
Strongly typed, pure.
If you don't like that stuff then you won't think it's better.
What does it mean to even be pure?
Also, does SICP teach you about making your own compiler?
I haven't read SICP, and purity means no side effects, referential transparency, (if very pure) no mutability, etc.
So your effects are encoded in your return type
>What does it mean to even be pure?
To be precise, "referentially transparent". That means you can substitute, for example, (f a) with its definition without changing the program's semantics. That means (f a) doesn't have any side effects: mutating shared memory, doing IO, nuking New York, etc.
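A tiny made-up illustration of that substitution point (double and six are just example names):
-- pure: any occurrence of (double 3) can be replaced by its definition
double :: Int -> Int
double x = x + x

six :: Int
six = double 3   -- interchangeable with (3 + 3), or just 6

-- an effectful version couldn't be substituted like that: calling it twice
-- would also print twice, mutate twice, etc., so the two programs would differ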
>Also, does SICP teach you about making your own compiler?
The bulk of the book is about writing different evaluators (read: interpreters) for different models of computation, and it does touch on code generation in the last (I think) part of the book, after implementing the register machine.
>What does it mean to even be pure?
Fun and interesting to work with, but very unproductive in most areas.
> Also, does SICP teach you about making your own compiler?
Kinda, but not really. It teaches you about a Scheme interpreter. It's still worth a read tho
A pure language seems very resource intensive since you have to reproduce a new object every time you perform a mutation on it.
Now, personally I would have used gets.to_i, but here's the thing -- I'm not talking about an O(n) solution. I'm talking about an O(1) solution to both parts of the problem.
>A pure language seems very resource intensive since you have to reproduce a new object every time you perform a mutation on it.
There's this thing called structure sharing, meaning you don't make an entirely new structure if you don't have to. For example if you insert a new element into a binary tree, any subtree that doesn't change is reused as the new tree's subtree, and that means most of it. You might also have heard of lazy evaluation, which means caching values and not fully evaluating them if they aren't needed. That's why GC is important in FP languages too; manually managing this would be a huge mess. Also, as you deconstruct a lazy structure, the GC goes behind you to collect whatever values you used and left behind, so if you know what you're doing you can even process infinite data structures in constant space.
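A minimal sketch of the binary tree point, with a plain BST and nothing library-specific assumed:
data Tree a = Leaf | Node (Tree a) a (Tree a)

-- insert allocates new nodes only along the path from the root down to the
-- insertion point; every subtree it doesn't touch is shared with the old tree
insert :: Ord a => a -> Tree a -> Tree a
insert x Leaf = Node Leaf x Leaf
insert x t@(Node l y r)
  | x < y     = Node (insert x l) y r   -- r reused as-is
  | x > y     = Node l y (insert x r)   -- l reused as-is
  | otherwise = t                       -- whole tree reused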
>still 1 year until GHC gets LinearTypes
>even then it will just be typechecking and won't be used by the GC (though you can still use it for, e.g., performant safe mutation)
Referential transparency (and the type system's properties, depending on the particular type system of course) can provide a lot of guarantees that enable all kinds of crazy code transformations and optimizations (even ones that actually do mutation under the carpet) to make up for it. A classic and really basic example of this is
map plusOne (map plusTwo (map plusThree [1..2^64]))
Now all those 2^64-element lists don't look particularly resource friendly, but if you take into account map's property that
forall f, g: map f . map g = map (f . g)
-- This won't hold if either f or g has side effects or is affected by side effects.
You can transform the above code into this
map plusOne (map plusTwo (map plusThree [1..2^64])) => map (plusOne . plusTwo . plusThree) [1..2^64]
No more needless resource allocation and garbage collection, and it's real fucking easy to achieve to boot. The 2^64-element argument list gets deconstructed and garbage collected element by element as the map goes, so you don't have to worry about two 2^64-element lists in your heap after the call.
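For what it's worth, GHC lets a library state exactly that law as a rewrite rule; the GHC User's Guide uses more or less this as its RULES example (the actual list fusion shipped in base is done differently, via build/foldr, so treat this as a sketch):
{-# RULES
"map/map"  forall f g xs.  map f (map g xs) = map (f . g) xs
  #-}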
>just unsigned, not even unsigned int
For some reason hardware guys at my work all write their C code like this.
Why do we have so many normies?
Talking about trivial stuff like writing a compiler as if it were hardcore hacking.
Do you also get surprised when they use short and long instead of short int and long int?
You guys just don't know when to move on.
How's the dependent type proposal going again? My body is not ready for this and by my body I actually mean my peanut brain.
Understanding dependent types is one thing, but I can't seem to naturally apply them or think of cool stuff to do with them other than >le n-sized vector.
Well, that is to say, graphs in pure FP are still a huge pain in the butt.
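For reference, the meme example itself, sketched with GADTs in GHC rather than a real dependently typed language (Vec and vhead are the usual textbook names; Idris makes this much nicer):
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

data Nat = Z | S Nat

-- the length is part of the type
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- a total head: an empty vector simply can't be passed in
vhead :: Vec ('S n) a -> a
vhead (VCons x _) = x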
Still being worked on, I think it's like 3 or 4 years away?
You should mess around with Idris.
>Le sad reddit man
Fuck off.
There is literally no reason to write 'int' there, so why not just make your code shorter?
I'm dabbling in python to make an app with kivy.
However, I'm doing it the OOP way, since that's all I've ever programmed in (with the exception of small python/bash scripts).
Is this a good way to program a small/medium sized project in Python?
Because it feels... wrong
Isn't class the only real way to define a new type of data structure in FIOC? If you're particularly anal about it, treat classes like records and don't do inheritance at all. If you're REALLY anal about it, encode your data structures as a combination of tuples and dictionaries instead, which might be viable because Javashit does alright with everything being dictionaries.
>OOP
>feels wrong
because it is wrong
Got a task to make some kind of server monitoring software (make http requests, parse responses, probably do some simple assert tests) and can use it as a chance to learn a new language, would python be alright for something like this?
>Got a task to make some kind of server monitoring software (make http requests, parse responses, probably do some simple assert tests)
Sounds like a match for Erlang
>can use it as a chance to learn a new language
Sounds like a match for Erlang
>would python be alright for something like this?
Sounds like a match for Erlang
>2017
>Erlang, the concurrent functional programming language originally implemented on top of a logic programming language, is still unironically one of the few actual OOP languages according to Alan Kay's definition
You sound angry.
Didn't realise Hillary's campaign team also worked for Bjarne
Named tuples are pretty much structs anyway.
Alternatively Node.js makes the tasks you require (http request/response) easy, but you would have to deal with >javashit which might or might not be a pleasant experience. Love it or hate it you have to admit javashit is shit.
Python oop is especially bad oop because it's not statically checked.
>so how should you program in python then
No clue. I imagine a procedural style would be relatively successful.
It's not quite that bad, but even a language like C++ struggles to make itself fast and avoid unnecessary copying/construction of objects, so yes, certainly these pure functional languages are only fast in theory.
Well, it's like comparing apples and oranges, because while FP can safely reuse objects (they don't ever change anyway), in imperative languages you can do destructive updates and do away with the whole persistence thing.
That is to say, in some important cases the orange is better than the apple; for instance, implementing an efficient graph representation in pure FP, while possible, is still a gigantic nigger dick in the anus, and it's infinitely easier in the presence of destructive updates.
when GHC gets LinearTypes you'll be able to write pure, typesafe destructive updates (by ensuring that what you destroy isn't used elsewhere)
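Roughly the flavor, using the linear-arrow syntax from the proposal; this is a sketch, not the final API:
{-# LANGUAGE LinearTypes #-}

-- fine: each component of the pair is used exactly once
swap :: (a, b) %1 -> (b, a)
swap (x, y) = (y, x)

-- something like  dup :: a %1 -> (a, a); dup x = (x, x)  gets rejected, and
-- that's the point: the typechecker knows the old version of a value isn't
-- still referenced anywhere, so an implementation is free to update in place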
I think I'll remain negatively dispositioned until I see evidence that I shouldn't be.
Don't worry, no one cares about your opinion anway. The world is just gonna move forward regardless.
>nobody cares about your opinion
Likewise. Why state the obvious? Anonymous posts hold no rep and nobody pretends they do.
Sounds like you just wanted to insult but that's even more silly.
>Sounds like you just wanted to insult
No, I just want to passive-aggressively stir up that weird feeling in your chest without resorting to actual insults or other rudeness. That's the sole reason why I'm still sticking to this progressively-shittier-by-the-day image board.
>>Modify the previous program such that only multiples of three or five are considered in the sum
that's more basic than fizzbuzz
how about
> Find a pair of elements from an array whose sum equals a given number
>passive aggressively
user I don't think you know what that means. A direct attack on the person like the one you would have made is clearly not passive-aggressive.
>I'm here to make people mad
I can't say I've noticed you dude.
It's not going to make everything hyper performant.
The first phase won't even change the GC or anything, it'll just add the linear type checking.
(Which will allow people to write more performant code, but still not work with the GC)
The point is purely that you can write code that you know can safely be implemented mutably.
Wow that wasn't even an attack at all. You seem triggered.
>I can't say I've noticed you dude
Sure, reply to me more sempai.
i wonder if you actually believe you triggered him.
and i know you'll try to say im samefagging to lick your wounds too.
user before you go wasting your time spamming Sup Forums I'll have to inform you that number of replies is a terrible metric for annoyance effectiveness.
Yeah I wasn't expecting that they'd get there quickly. I'm just kinda sick of all the marketing wank in programming languages. People overstate the usefulness of their new features all the time.
I wonder that too. He probably just likes posting.
...
It's certainly a useful feature. But it won't make Haskell as fast as C. That's my point.
It can be used not only for making this kind of destructive update pure, but also for writing server APIs and stuff that have to make guarantees that old handles aren't re-used and stuff.
Why is working with text so hard in C
do not make GUIs with python, it's a huge pain in the ass
Because you're not using the standard library maybe?
It covers a lot of stuff.
It's certainly not its strong point, but you just need to git gud.
Protip: it's hard in C++ too (unless you use a shitty third party lib that changes its syntax from time to time)
they are old languages that are basically science projects that said "fuck you" to real-world developers
codersnotes.com
Funny how this guy's description of HolyC makes it sound so very similar to JAI. Even down to compile times (though JAI probably does more work optimizing).
But what did he mean by this (pic related)? I don't get what the author is saying here.
I don't know of circumstances where you couldn't call a function without grabbing the return value. I'm really curious about how I could do that now because it'd be a nice way to ensure contracts.
I'm working with a SQLite database in Python (sqlite3 module).
Are commits supposed to take 1.1 seconds on average? If so, how do I avoid committing often while preserving the ability to roll back without data loss if an exception is raised?
>inb4 use redis
don't tempt me
#include <stdio.h>
#include <pthread.h>

void* run_thread(void*);

/* should output "hello" 5 times */
int main(void) {
    int nthreads = 5;
    pthread_t tids[nthreads];
    for (int i = 0; i < nthreads; i++)
        pthread_create(&tids[i], NULL, &run_thread, NULL);
    for (int i = 0; i < nthreads; i++)
        pthread_join(tids[i], NULL);
    return 0;
}

void* run_thread(void* unused) {
    printf("hello\n");
    return NULL;
}
$ ./main
hello
hello
hello
What am I doing wrong?
why are you returning null?
It prints 5 times for me.
Maybe you forgot to recompile the program or you're running into a race condition (stdout is a shared variable).
1. write test
2. write just enough code to make test pass
3. refactor
4. commit
5. goto 1
pthread functions have to return something.
If you don't care about it, you would typically just return NULL.
Because I'm not using the value returned by the thread function anywhere.
In the real program I write to a pipe using write() and still have the same problem.
>in Python
lol you'd think disk i/o would be the bottleneck
>I use write()
You need to do more than that.
Reeeeeeeeee
Literally the only Lua RSA library I've been able to find can only do 256 bits
I want to have a choice between 256 bits and 1024+ bits
SQLite is an in-memory database.
Patch it. Should be trivial.
>Reeeeeeeeee
>>>/reeeeedit/
this
I don't know what to make of this.
But the commits go to the disk.
Isn't write() atomic?
>The adjustment of the file offset and the write operation are performed as an atomic step.
What else do I need?
>about to dump Code::Blocks because I'm sick of waiting 2 minutes each time it crashes because of autocomplete plugin
>be so addicted to autocomplete plugin can't turn it off
>decide to give it one last shot for old times' sake
>checkout svn
>build it, no big problems besides installing a couple of libs (gtk2, wxwidgets)
>starts up blazingly fast
>now I only have to wait 1 minute when it crashes
I'm sorry I doubted you.
>Isn't write() atomic?
Up to a certain size, yes. I want to say 4kB, but I'm not certain.
Just to clarify, what operating system are you on?
It's some of the most bloated code I've ever seen; there's no way I could identify everything I'd have to change to make it work with larger key sizes.
GNU/Linux.
I write just one byte to synchronize another process with all the threads.
debug it like a big boy and contribute a patch
shit wrong pic
It's C++. I only know C.