Functional Programming General

haskell.org/
schemers.org/Documents/#intro-texts
scala-lang.org/
elm-lang.org/
miranda.org.uk/
functionalgeekery.com/
clojure.org/
en.wikipedia.org/wiki/Category_theory
jobs.functionalworks.com/#/job-board

Other urls found in this thread:

cokmett.github.io/cokmett/
0x0.st/N7s.txt
blog.sigfpe.com/2009/10/what-category-do-haskell-types-and.html
arxiv.org/abs/math/0212377

Ugly cunt

shameful neglect of R

Name 5 reasons to use R over SAS

1:5 never used SAS
final answer

...

OP I wonder
why do you keep making these threads in Sup Forums even though you know they'll get shat all over?
Is that because it's a fast board?
I'm more for quality than quantity honestly

Somebody in the last thread said these were the best glasses for functional programming - why? I already bought a pair. I've never coded before, but I just want to make sure I'm as well prepared for REAL coding as I can be before I start.

1:5 install gentoo

wearing these glasses will grant you the knowledge of functional programming wizardry. you'll become a Hercules among mere mortals. you'll be spewing out monads from every orifice in your body.

You keep posting this girl. Does she have something to do with category theory?

>you'll be spewing out monads from every orifice in your body.
cokmett.github.io/cokmett/

well meme'd! :^)

Scala feels like dark magic to me.

So powerful, and so, so many ways to do things completely wrong.

I was grepping through #haskell logs the other day for something unrelated and read this cut-out gem which I just had to share with somebody else. (I didn't trace all the way to the beginning, just started reading here)

0x0.st/N7s.txt

I felt so enlightened, so intrigued. I still haven't fully comprehended it. But I want to explore more.

I did my best to filter out some of the noise, although I preserved the cringy questions.

if you have an abstract algebra background, how well will it apply to understanding the math side of things in haskell?

>using the smiley with a caret nose

scala has a pretty clear paradigm set out for how to do things if you take 5 minutes to read some intro documentation

just avoid mutability, functions are first-class objects, no primitives, options replace null; learn how to write concurrent / asynchronous code and what a callback is

Hard to quantify.

Most Haskell packages aren't heavy on the category theory as long as they aren't authored by Edward Kmett. But having a mastery of abstract algebra helps your understanding of abstract concepts in general.

Like all things in mathematics, knowledge translates, and every field is intimately connected. Advanced mathematics is just realizing all of the connections. (A view that is especially popular among category theorists.)

Advanced Haskellers tend to fit that mental profile pretty well, too - understanding all of the connections between even seemingly bizarrely different concepts and wanting to abstract them out to reusable frameworks (whether those be implementation frameworks or mathematical frameworks).

But you shouldn't go learn abstract algebra before learning Haskell thinking it will somehow help you learn Haskell (or vice versa). If you want the enlightenment, gain intuition about both independently and then understand the connection. Once you understand the connection, you have achieved enlightenment. That's the way I see mathematics, at any rate.

There are the basics and obvious don'ts, but there are still many, many, many degrees of freedom in Scala code and code style.

Thanks, this is a beautiful response.

>Advanced mathematics is just realizing all of the connections.
My favorite example of this is the famous proof of Fermat's Last Theorem.

That proof's essential core, the Taniyama–Shimura–Weil conjecture, established an intimate link between elliptic curves and modular forms, two seemingly completely unrelated fields of mathematics.

It's this link that allowed us to translate Fermat's Last Theorem into a statement in a different field which was (comparatively) easy to prove.

Several follow ups,

What connections are there between abstract algebra and haskell (that you've come across)?

What areas of mathematics are good to learn that apply/connect naturally to haskell?

What is your math background?

I'm mostly interested in mathematics and see haskell as a potential way to learn interesting math that I can apply to a programming language. I'm just starting out but am interested in logic and algebra independently of any applications.

>What connections are there between abstract algebra and haskell (that you've come across)?
There are a number of connections, both weak and strong, both implicit and explicit.

It's hard to list them all. But a good summary of the most mathematically rigorous connection between Haskell and CT can be found here: blog.sigfpe.com/2009/10/what-category-do-haskell-types-and.html

To summarize, Haskell behaves very much like a category (intuitively, although the details are somewhat messy, as the post discusses). But if we're allowed to “be naughty” and ignore the details, we can gain useful insight from it.

As a result, many statements from category theory translate to Haskell, although this is only really useful to those who have the necessary intuition to understand the category theory. For the rest, it's little more than the historical reason why monads are named monads and functors are named functors. (Rather than “mappable” or “selectable” or whatever)
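
To make the “Haskell as a category” intuition concrete: the category structure is just id and (.), and a Functor is a map that preserves that structure. A minimal sketch (Pair is a made-up example type, not from any library):

-- objects are types, arrows are functions; the category structure is:
--   id  :: a -> a
--   (.) :: (b -> c) -> (a -> b) -> (a -> c)
data Pair a = Pair a a

instance Functor Pair where
  fmap f (Pair x y) = Pair (f x) (f y)

-- the functor laws (checked by us, not by the compiler):
--   fmap id      == id
--   fmap (g . f) == fmap g . fmap f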

>What areas of mathematics are good to learn that apply/connect naturally to haskell?
Type theory and logic

>What is your math background?
Self-taught. I went further down the rabbit hole as I became involved with Haskell over a few years, and as a result I've ended up knowing a lot about CT, type theory, logic and other areas of math typically connected to Haskell.

I only have some slight formal training in other fields, although they interest me as well. (I'm still studying)

Excellent response, thanks again.

Haskell isn't any better or worse than any other high-level garbage-collected language when it comes to mathematics (assuming you want to do math, not learn memory management). Programming is a great tool to explore the problem space of math, but learning Haskell won't make you a better mathematician. Doing math will.

For what it's worth, I've benefited a lot from Racket. Lisp is a simple enough and straightforward enough language that I can just explore problems and it gets out of my way. There's much less overhead than in Haskell. But really, if you want to learn math, do math. No particular language will help you learn math more than picking up a good book. Just my 2 cents.

Thanks. I am working through a book on abstract algebra, while learning Haskell on the side over the summer.

>I'm mostly interested in mathematics and see haskell as a potential way to learn interesting math that I can apply to a programming language.
There are definitely some other connections between Haskell and abstract algebra that can lead you down some very fascinating rabbit holes.

For example, the idea of Haskell types forming, fundamentally and again ignoring the quirky edge cases (i.e. “being naughty”), a semiring corresponding to that of the natural numbers (as in set theory).

i.e. you have:
the “empty type” Void (with no members) as the equivalent of 0
the “unit type” () (with only one member) as the equivalent of 1
the “product type” (tuple / pair) as the equivalent of multiplication
the “sum type” (disjoint / tagged union, Either) as the equivalent of addition
the “function type” (pure function) as the equivalent of exponentiation

For finite sets this can be seen very easily; the number of inhabitants of e.g. (Bool, Bool, Bool) is clearly 2 * 2 * 2 = 8, and the number of inhabitants of Either Bool () is 2 + 1 = 3.

What's interesting is that this still holds true for infinite and polymorphic types - i.e. if you have a polynomial formula like 2 * x, you can rewrite it into x + x. (In Haskell terms, the isomorphism (Bool, x) ≅ Either x x exists.)
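
To make that isomorphism concrete, here's a sketch of the two witnesses (to/from are made-up names):

to :: (Bool, x) -> Either x x
to (False, x) = Left x
to (True,  x) = Right x

from :: Either x x -> (Bool, x)
from (Left  x) = (False, x)
from (Right x) = (True,  x)

-- to . from == id and from . to == id, so (Bool, x) ≅ Either x x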

As you go further down the rabbit hole, you start realizing you can work with types as algebraic formulas, even for infinite types. For example, when confronted with the type of a list:

(cont)

No, she's a random model from a website that sells glasses.

she is the goddess of functional programming

this is very interesting... keep posting

data List a = Empty | Cons a (List a)

This is clearly some sort of infinite type, and it's defined recursively. So as a formula, we would write:

L(a) = 1 + a * L(a)

But now that we have a formula, we can start expanding its infinite series:

L(a) = 1 + a * (1 + a * L(a))
     = 1 + a + a^2 * L(a)
     = 1 + a + a^2 * (1 + a * L(a))
     = 1 + a + a^2 + a^3 * L(a)
     = ...
     = 1 + a + a^2 + a^3 + a^4 + ...


So in other words, we've shown that this recursive list type is basically like a big sum (union) of the following concrete values:

1 (empty list)
a (list with just one a)
a^2 (list with two as)
a^3 (list with three as)
and so on, ad infinitum

In other words: a list of arbitrary length :p
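
One step of that unrolling can be written as an explicit isomorphism for the built-in list type (a sketch; unroll/roll are made-up names):

-- [a] ≅ Either () (a, [a]), i.e. L(a) = 1 + a * L(a)
unroll :: [a] -> Either () (a, [a])
unroll []     = Left ()
unroll (x:xs) = Right (x, xs)

roll :: Either () (a, [a]) -> [a]
roll (Left ())       = []
roll (Right (x, xs)) = x : xs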

It gets even crazier though. Some of these formulas aren't so nice to solve. For example, consider tree structures:

data Tree = Empty | Node Tree Tree -- left and right subtrees


As a formula, this would be T = 1 + T^2. But we can't make much sense of this formula using natural numbers alone.

Solving T^2 - T + 1 = 0 which has no real solutions. But what happens if we're naughty and allow for complex solutions?

Well, now we get e.g. T = (-1)^(1/3) (as per WolframAlpha). This result doesn't help us much in Haskell, but what if we manipulate it a bit? Taking both sides to the power of 6 gives T^6 = 1, which is clearly nonsensical (there is more than one tree!).

But, going one step further to T^7 = T, we arrive at a result that is not only provably true in Haskell but also true in a more profound sense: there is an explicit bijection between seven-tuples of these trees and single trees (the famous “seven trees in one” result).

So now the question is, why was T^6 = 1 nonsensical but T^7 = T true? And you enter the wonderful world of arxiv.org/abs/math/0212377, which works out exactly when these complex-number manipulations of polynomial identities translate back into genuine isomorphisms of types.

And then you can start applying this to other places in which semirings occur

>And then you can start applying this to other places in which semirings occur
Which, I might add, you will start seeing everywhere since you have learned abstract algebra :p

You know that's pure category theory you're doing there?
Objects, and functors between those objects, satisfying some essential properties that let you interpret them in any way and thus convert from one category into another by establishing a homomorphism between their behaviours.
Beautiful, isn't it?

>L(a) = 1 + a + a^2 + a^3 + a^4 + ...

Incidentally, I glossed over something here. Technically, I used exponentiation in this notation.

So what I called a “list with 3 elements”, a^3, or a*a*a (as our product notation would intuitively suggest), is really the same as (Index3 -> a), where Index3 has three elements.

And this is another important connection to a deep area of CT known as representability. If you can rewrite your entire structure T a into some equivalent of (Index T -> a) then it's a representable functor, which carries with it many other nice properties (e.g. allowing you to auto-generate memoization code, or perhaps auto-generate test vectors for your test framework).
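
For illustration, here's what representability looks like for a tiny fixed-shape functor - a sketch with made-up names, not the actual Data.Functor.Rep API:

data Two a = Two a a          -- our structure T
data Ix = First | Second      -- our “Index T”: exactly two positions

tabulate :: (Ix -> a) -> Two a
tabulate f = Two (f First) (f Second)

index :: Two a -> (Ix -> a)
index (Two x _) First  = x
index (Two _ y) Second = y

-- tabulate and index are mutually inverse, witnessing Two a ≅ (Ix -> a).
-- Memoizing a function Ix -> a is then just tabulating it into a Two a.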

tl;dr this stuff is really nice desu

>no Erlang
The fuck is this?

No love for JS either.

It's like they actually enjoy being unemployed.

>Objects, and functors between those objects, satisfying some essential properties that let you interpret them in any way and thus convert from one category into another by establishing a homomorphism between their behaviours.
Really, one of the important things to realize when learning CT is that the objects are red herrings. People like to look at objects - they're nice and concrete, “they're just sets”, etc.

But the objects aren't the interesting bits; the arrows are. The objects are just like a “type system” for the arrows: they exist to define which arrows you can and can't combine.

Functors don't “modify objects” so much as they preserve the structure of the arrows - that is what the core of CT really is.

It's all about the diagrams. A functor is just the equivalent of taking a diagram in one category and mapping (“projecting”) it onto a diagram in another category. Really, the best understanding of CT is generally gained by looking at the diagrams. (That's why CTers love them so much!)

A natural transformation, for example, just says that if you take the same diagram in the source category and map it, via two different functors, to two diagrams in another category, the two diagrams are “linked” there (in a way that commutes - i.e. one is a ‘shadow’ of the other, if you will).
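
In Haskell terms, a natural transformation is (roughly) a polymorphic function between two functors, and the commuting condition comes for free from parametricity. The classic example:

safeHead :: [a] -> Maybe a
safeHead []    = Nothing
safeHead (x:_) = Just x

-- naturality: fmap f . safeHead == safeHead . fmap f
-- (mapping then taking the head == taking the head then mapping)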

Great posts.

Who's the cock rocket?

Incidentally, since you asked about ways in which CT knowledge can actually improve your Haskell:

The obvious fact that monads were inspired by CT aside, monads themselves can be reinterpreted with some higher-level CT knowledge: adjunctions.

Namely, monads themselves are just glorified adjunctions. (Take any two adjoint functors, compose them like G∘F, and you get a monad, guaranteed.) Working out the details of why this is the case is interesting in its own right, but beyond the scope of this post.
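
Here's a sketch of the construction anyway, using a class loosely modelled on (but much simpler than) the adjunctions package:

{-# LANGUAGE MultiParamTypeClasses #-}
class (Functor f, Functor g) => Adjunction f g where
  unit   :: a -> g (f a)     -- the unit   η : Id -> G∘F
  counit :: f (g a) -> a     -- the counit ε : F∘G -> Id

-- the monad T = G∘F then falls out:
--   return = unit
--   join   = fmap counit   :: g (f (g (f a))) -> g (f a)
-- and the monad laws follow from the triangle identities of the adjunction.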

But what's fascinating here is that while G∘F gives you a monad in Dom(F), F∘G gives you a comonad in Dom(G)! (Naturally, everything dualizes.) So take any old stock haskell monad, strip it apart, and you get a comonad in the “other category” of the adjunction that gave rise to the monad. Normally these aren't terribly useful, but they *do* show up from time to time. (e.g. in Hask^op)

One notable example is the pair of rich and deep functors (,) s and (->) s, the product functor and the function (or “reader”) functor. (If you'll allow an abuse of notation: (s,) and (s ->).) These have many wonderful relationships, mostly arising from the fact that they happen to be adjoint. But since both are also endofunctors on Haskell, we can “see” the monad/comonad construction directly:

(s ->) ∘ (s,)
If we plug some type ‘a’ into this monad, we end up with:
s -> (s, a), which any seasoned Haskeller will immediately recognize as the good old State monad (basically the monad that allows you to embed mutable/imperative code in Haskell)!

So as we know, if we compose it the other way around, we expect a comonad in Haskell, and indeed we get (s, s -> a) which any even more seasoned Haskeller will immediately recognize as the good old Store comonad.

So at some level, any monadic code that uses states can be dualized to comonadic code that uses store comonads
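
Spelled out as standalone types (a sketch; the real State and Store live in the transformers and comonad packages, these are simplified stand-ins):

-- (s ->) ∘ (s,) applied to a:
newtype State s a = State (s -> (s, a))

-- (s,) ∘ (s ->) applied to a:
data Store s a = Store s (s -> a)

-- the adjunction itself is just (un)currying:
leftAdjunct  :: ((s, a) -> b) -> (a -> (s -> b))
leftAdjunct f = \a s -> f (s, a)

rightAdjunct :: (a -> (s -> b)) -> ((s, a) -> b)
rightAdjunct g = \(s, a) -> g a s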

>So at some level, any monadic code that uses states can be dualized to comonadic code that uses store comonads
The details of what this means in practice are left as an exercise to the reader

>Namely, monads themselves are just glorified adjunctions.
This is actually a trend in CT. Every time you go up the abstraction ladder, you realize everything is just a glorified version of some higher level abstraction.

CTers always start out with basic concepts like products, and then later they find out all their products were actually just limits, etc.

Then later they find out all their limits, monads etc. were just adjunctions

Then later they find out their adjunctions are just a special case of a more general notion in a 2-category

and so on it goes, up the abstraction ladder, down the rabbit hole

the ride never stops

is node.js functional programming or not lads

>tl;dr this stuff is really nice desu
Some more pointers for further rabbit hole inquiries:

What is the analog of a “difference” type? (-)

What is the analog of a “quotient” type? (/)

Both have good interpretations that you can apply with varying levels of success to Haskell and get real, useful abstractions out of them

And finally:

What is the analog of e.g. differentiation or integration on types? (if we're naughty and treat them as complex functions)

This also has a very good answer, and one that is both beautiful and simple and very, very useful in the real world: differentiating a type gives you a zipper for that type. (And zippers are *really* useful.)

And since we know how to do symbolic differentiation over the reals, this means we also know how to mechanically generate a zipper for any haskell type
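
The canonical example: summing the series from earlier, L(a) = 1 + a + a^2 + ... = 1/(1 - a), and differentiating gives 1/(1 - a)^2 = L(a)^2. So a one-hole context of a list is a pair of lists (the elements before the hole and the elements after it); pair that context with a focused element and you get the familiar list zipper (a sketch; the names are made up):

data ListZipper a = ListZipper [a] a [a]   -- before (reversed), focus, after

left, right :: ListZipper a -> Maybe (ListZipper a)
left  (ListZipper (l:ls) x rs) = Just (ListZipper ls l (x:rs))
left  _                        = Nothing
right (ListZipper ls x (r:rs)) = Just (ListZipper (x:ls) r rs)
right _                        = Nothing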

Are your functions as pure as your waifus?

node.js isn't a programming language, it's a runtime

woah this is some pretty awesome stuff. i haven't ever forayed into the more advanced parts of type theory, especially from an academic perspective, but it seems like there is so much information waiting there for me to pick up. any recommendation for a nice book to delve into this kind of stuff?

Scheme is not functional and Clojure is just Java with, well, clojures.