Let's hear your Monad explanation

The other billion tutorials weren't enough for me.

burrito

Came here to post this.

It's a burrito that wraps your state and returns operators that modify said state, so your own code doesn't have to.

A monad is just a monoid in the category of endofunctors.

Not that thread again.

MODS MODS MODS

wtf is a monad

The programming language equivalent of Gentoo

So it's a burrito that comes with utensils?

philipnilsson.github.io/Badness10k/posts/2017-05-07-escaping-hell-with-monads.html

The sword of the gods

WHAT THE FUCK IS THIS SHIT

What are they useful in programming?

Good for encapsulating side effects, and they let you model many programming concepts in a purely functional manner.

Programs have lots of edge cases that keep them from being easily expressed as a function: they can exit early, give errors, etc. So if I want to have errors in a pure function, I need the output type to contain both the possible error message and the possibly correct value. That makes it awkward to feed the output into another function that just expects the plain value without the possible error message. Monadic error handling uses monads to chain these functions together in a simple way, so you keep the possibility of an error message and get early exits without rewriting all of your functions to take different input.

Monads also provide an order of execution in a purely functional language. Normally something with side effects would be problematic, since in a declarative language there is no strict order of execution; monads, however, let you compose functions that contain side effects. Because function composition requires the input to be evaluated before the next function can use it, you've basically set an order of execution for functions that may have side effects.

Generally, you can think of a monad as a special interface for composing functions. Normally, functions compose like this:

F : a -> b
G : b -> c
F * G : a -> c


Monads come in if you want to wrap the output in some kind of extra context, such as an error value or side effect.

A : a -> M b
B : b -> M c
A bind B : a -> M c


Now the functions can still be composed using the "bind" operator which you can think of as taking the "b" out of the "M b" value and shoving it into the other function.
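To make that concrete, here's a tiny Haskell sketch with Maybe playing the role of M (safeDiv and calc are just names made up for this post):

safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- "A bind B : a -> M c": chain two divisions; if either divides by zero, the whole thing is Nothing
calc :: Int -> Int -> Int -> Maybe Int
calc a b c = safeDiv a b >>= \r -> safeDiv r c

-- calc 100 5 2 == Just 10
-- calc 100 0 2 == Nothing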

Okay but how is that different from writing a function like this in python:
def f(a):
    b = g(a)
    return h(b)

There's an allocation here. That's state.
Also this is Python, so you probably can't elide the allocation into a single return, so your function won't be pure.

okay then how about

def f(a):
    return h(g(a))

Monads are pretty abstract and there's a reason they're infamous for being difficult to explain. Basically, monads sit on top of some other concepts, like functors, monoids, and applicative functors, which all have scary names but aren't necessarily too complex.

The concept is that data types can be contained in some kind of structure. So, for example, I can have a list of integers. The data is the integers inside the list, and the list itself is the structure. A functor is just a structure you can map over: fmap applies a function to every element inside the structure. So, if I had a list of all the numbers 1 through 10 and wanted to add 1 to each, I can use fmap.

> fmap (+ 1) [1..10]
[2,3,4,5,6,7,8,9,10,11]


An applicative functor is very similar except that the function itself is inside of a structure. This might seem a little weird if you're not used to treating functions as data like in functional languages. Note that this could be any structure, but I feel like lists are probably the most understandable.

> [(+1), (+2)] <*> [1..10]
[2,3,4,5,6,7,8,9,10,11,3,4,5,6,7,8,9,10,11,12]


Monoids are a very simple structure with an associative operation and an identity: the way you group the evaluation doesn't matter, and there exists a value that doesn't alter the result. Addition is the standard example.

x + (y + z) = (x + y) + z -- associativity
x + 0 = x -- identity
0 + x = x


Monads also follow the monoid rules and behave similarly to applicative functors: you're still mapping a function over a value where both the function and the value involve a structure. The difference is that the first argument (to >>=) is the value inside the structure, and the second argument is a function that takes the plain value as input and returns its output embedded in that structure. I can explain further but I'm out of characters.
> f a = [a]
> [1..10] >>= f
[1,2,3,4,5,6,7,8,9,10]
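To see the flattening more clearly, here's the same thing with a function that returns two elements per input (plain Prelude, nothing exotic):

> [1,2,3] >>= (\x -> [x, x * 10])
[1,10,2,20,3,30]

That's just concat (map (\x -> [x, x * 10]) [1,2,3]), which is why people call it flatmap.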

Basically, any type implementing flatmap.

best answer in this thread tbqh

morons that think the associative property in math explains polymorphism

I unironically watched all of this and still have no fucking clue what a monad is:

youtube.com/watch?v=ZhuHCtR3xq8

Monads tutorial for my fellow brainlets.

>A monad is just a type (class) that implements two functions, a wrapper function and a bind function, AND complies with the monadic laws.

The wrapper is just a function that builds a monadic type out of its contained type. The declaration of such a function in Haskell is:
return :: a -> m a

Where 'a' is the value being wrapped and 'm a' is the monad now containing that value (and augmenting it with monadic powers).
In pajeet terms, if the Integer class were a monadic class, it would have a function in this form:
public Integer muhReturn(int value) {
    return new Integer(value);
}


The binding function is just a function that takes a monad instance and a function (one that turns a contained value into a new monad instance), and returns a monad instance.
Hasklel:
(>>=) :: m a -> (a -> m b) -> m b

Again in pajeet terms:
public Integer muhBind(Integer a, IntFunction<Integer> muhFunc) {
    return muhFunc.apply(a.intValue());
}

However, having two functions with those signatures is not enough to be a monadic type. It is only a monad if it complies with the monadic laws:
Left identity: return a >>= f ≡ f a
Right identity: m >>= return ≡ m
Associativity: (m >>= f) >>= g ≡ m >>= (\x -> f x >>= g)

Let's see if the type I just pulled out of my ass is monadic:
muhBind(muhReturn(1), (int a) -> new Integer(a + 1)) ≡ new Integer(1 + 1)
muhBind(new Integer(1), muhReturn) ≡ new Integer(1)
muhBind(muhBind(new Integer(1), plusOne), plusTwo) ≡ muhBind(plusOne(1), plusTwo)

Looks monadic enough, but I haven't proved shit since I just showed some examples where the laws hold. For this to be a monadic type I should mathematically prove the laws hold true for every possible argument to muhReturn and muhBind.
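(For comparison, here are the same three spot checks in GHCi against Haskell's Maybe monad, with throwaway f and g, and the same caveat that examples prove nothing:)

> let f x = Just (x + 1)
> let g x = Just (x * 2)
> (return 1 >>= f) == f 1
True
> (Just 1 >>= return) == Just 1
True
> ((Just 1 >>= f) >>= g) == (Just 1 >>= (\x -> f x >>= g))
True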
>So what is the fuss about monads?
Nothing, they are just a type of Type. You might as well make a fuss out of monoids or any other type.

go read mac lane fag

Think of a monad as basically an interface in Java OOP terms, one that gets implemented through a couple of operators, namely the >>=, >>, and return operators. Academics use them as a crutch to convince people that their work is useful and evangelists try to convince people that they are so fucking great because monadic error handling will solve all your problems.

The reality is monads are less readable and the added work of monad transformers makes them a complete fucking joke. Just stick to the usual programming constructs that have proven themselves in software that is actually used to make money and not just as lube to jerk yourself off to your supposed intellect.

That video just goes over the type of the bind operator but doesn't really go over what a Monad is or what it's used for. He basically says "There's a thing called a monad and you can do this to it":
m a -> (a -> m b) -> m b

Which I found pretty useful for thinking about using them practically: basically, take a value out of the monad structure, feed it into a function that takes a normal value and returns a value in a monad structure, and then return that value. But he really should've built it up from functors and monoids after explaining what a data structure is, followed by some practical examples.

Agree, but please leave '>>' for a dedicated monoids thread.

type coin =
  | Head
  | Tail
;;

type 'a proba = Proba of ('a * float) list;;

let bind_list x f =
  let rec loop accu = function
    | [] -> accu
    | x :: xs -> loop (List.rev_append (f x) accu) xs in
  loop [] x
;;

let map f l = List.rev (List.rev_map f l);;

let return x = Proba [ x, 1.0; ];;

let ( >>= ) (Proba x) f =
  Proba
    (bind_list
       x
       (fun (x, px) ->
          let Proba y = f x in
          map (fun (y, py) -> y, px *. py) y))
;;

let fmap f x = x >>= fun x -> return (f x);;

let merge (Proba x) =
  let h = Hashtbl.create 10 in
  List.iter
    (fun (x, px) ->
       let opx =
         if Hashtbl.mem h x then
           Hashtbl.find h x
         else
           0.0 in
       let px = opx +. px in
       if px = 0.0 then
         Hashtbl.remove h x
       else
         Hashtbl.replace h x px)
    x;
  let proba =
    Hashtbl.fold (fun x px proba -> (x, px) :: proba) h [] in
  Proba proba
;;

let print_proba pp_x ppf (Proba l) =
  let l = List.sort (fun (x1, _) (x2, _) -> compare x1 x2) l in
  let pp_item ppf (x, p) = Format.fprintf ppf "%a\t%f" pp_x x p in
  let internal ppf = function
    | [] -> ()
    | x :: xs ->
      Format.fprintf
        ppf "%a%t"
        pp_item x
        (fun ppf -> List.iter (Format.fprintf ppf "@ %a" pp_item) xs) in
  Format.fprintf
    ppf "@[%a@]" internal l
;;

let coin = Proba [ Head, 0.5; Tail, 0.5; ];;

let rec coins = function
  | 0 -> return []
  | n ->
    coin >>= fun c ->
    fmap (fun cs -> c :: cs) (coins (pred n))
;;

let is_head = function
  | Tail -> false
  | Head -> true
;;

let () =
  let at_least_two_heads_in_ten_coins =
    let cs = coins 10 in
    fmap (fun cs -> List.length (List.filter is_head cs) >= 2) cs in
  let at_least_two_heads_in_ten_coins =
    merge at_least_two_heads_in_ten_coins in
  Format.printf
    "@[%a@]@."
    (print_proba (fun ppf -> Format.fprintf ppf "%B"))
    at_least_two_heads_in_ten_coins
;;

I've always found monadic error handling to be much more readable when done correctly and the type system makes it much easier to know when you did something wrong. But I do agree people over-exaggerate how useful monads are and using monad transformers can be an ugly mess.

Think of M as a Machine which holds some internal state
You are on machine H, which can hold a reference to machine M at a given state
Both M and H are supersets of a reference machine, P
If machine M has a reference to a state of M, it can dereference this and refer to the state directly
There exists a message to transfer a new state from H to M
M may or may not provide some opaque references to functions available on it
You can provide a reference to one of these functions, or any function available in P. When you do so, M will perform the function on its current state
There exists no standard message for M to tell H its current state (Some machines might provide non-standard means to inform of their state)

That's it.

I should add: If the state of M changes, any existing reference state that H holds to M may be invalidated.

So it's just a wrapper that does error handling?

They're really just a type that satisfies certain algebraic properties that turn out to be useful.

They can safely encapsulate state, IO, etc.
They also have a convenient way of imposing certain sequencing of actions that are easier to reason about. For example, the "do" notation in Haskell desugars to a bunch of >>= and return operations.
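For example, a do block like this (just a sketch using IO):

main :: IO ()
main = do
  putStrLn "What's your name?"
  name <- getLine
  putStrLn ("Hello, " ++ name)

desugars to roughly:

main = putStrLn "What's your name?" >> (getLine >>= \name -> putStrLn ("Hello, " ++ name))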

That's one implementation of it, yes. Monads themselves are far more general and abstract.

Wait, I thought a monad was just a function accepting one argument, similar to a nilad, dyad, etc. Why did Haslel use an existing and useful word to name their specific type?

It comes from category theory. Hasklel didn't name it.

How is it any better at error handling?

Gets rid of nested if statements racing to the end of the screen. Lets you write complete functions, so all you need to do is look at the function type to see that it can return errors. It also keeps things purely functional, because the function has a defined output for every input: the output range is just the set of possible outputs in union with the possible errors. This is much better than throwing impure exceptions inside your functions, or reserving a portion of your possible output values as error codes and constantly checking for them.
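A sketch of what that looks like with Either carrying the error (all the names here are invented for illustration):

data ConfigError = MissingKey String | BadPort String deriving (Show, Eq)

lookupPort :: [(String, String)] -> Either ConfigError String
lookupPort cfg = maybe (Left (MissingKey "port")) Right (lookup "port" cfg)

parsePort :: String -> Either ConfigError Int
parsePort s
  | not (null s) && all (`elem` ['0'..'9']) s = Right (read s)
  | otherwise = Left (BadPort s)

-- the possibility of failure is right there in the type, and the first Left short-circuits
getPort :: [(String, String)] -> Either ConfigError Int
getPort cfg = lookupPort cfg >>= parsePort

-- getPort [("port", "8080")] == Right 8080
-- getPort [("host", "x")]    == Left (MissingKey "port")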

A monad is just a group without an inverse element you dummy.

something something prevent side effects something something functional languages.
Don't bother with trash like that.

>A common way to interpret phenomena we see around us is to say that agents are acting on objects. For example, in a computer drawing program, the user acts on the canvas in certain prescribed ways. Choices of actions from an available list can be performed in sequence to transform one image into another. As another example, one might investigate the notion that time acts on the position of hands on a clock in a prescribed way. A first rule for actions is this: the performance of a sequence of several actions is itself the performance of an action—a more complex action, but an action nonetheless.

>Mathematical objects called monoids and groups are tasked with encoding the agent’s perspective in all this, i.e. what the agent can do, and what happens when different actions are done in succession. A monoid can be construed as a set of actions, together with a formula that encodes how a sequence of actions is itself considered an action. A group is the same as a monoid, except that every action is required to be reversible. In this section we concentrate on monoids; we will get to groups in Section 3.2.

Page 69
math.mit.edu/~dspivak/CT4S.pdf

A monad is a math object that follows some rules; the main point of a monad is the thing itself, and the important part is what you do with the thing.

I think this is a good explanation.

It isn't in outcome.

If you want to consider Python in terms of monads, everything in Python lives in the same giant monad. But this means anything in Python can give rise to side-effects and exceptional behavior.

The point of using monads is to confine effectful, imperative behaviour so that most functions are pure and modelling sequential behaviour and effects is explicit.

Remember that map function that you use? What if that worked on optionals?
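In Haskell it does; that's just fmap on Maybe:

> fmap (+1) (Just 41)
Just 42
> fmap (+1) Nothing
Nothing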

...

obviously haskell just isn't for you
but that's ok, it isn't for anyone else either

I've waited so long to post this

koweycode.blogspot.com/2007/01/think-of-monad.html

But where does the burrito come in?

>encapsulating side-effects
aka burrito

It's a car wash where each time you drive a car through the car can change.

You can put multiple car washes at each end.

...

I'd start here. But now I just realized why I can't into Haskell. I have a fundamentally different philosophical foundation from whence I approach phenomena.

en.wikipedia.org/wiki/Monad_(philosophy)

Your examples have a serious flaw. Monads *have* to be polymorphic. This is a necessary condition, and providing examples that violate this provides a very wrong intuition.

go get some fundamental algebraic math fegget

Monads - some retarded data structure that looks like "magic" to retards who learn retarded "Haskell"

I have Nunad, which is bigger than a Monad.

>Polymorphic
The fuck? No. They have to adhere to the monad specification and comply with the monad laws.
Go and have a look at how the monad type is defined in Haskell.

Category theory. Go read a book.

Prelude> :info Monad
class Applicative m => Monad (m :: * -> *) where
  (>>=) :: m a -> (a -> m b) -> m b
  (>>) :: m a -> m b -> m b
  return :: a -> m a
  fail :: String -> m a

Oh look, return (and all the other functions) have to be polymorphic (there are no restrictions on a). Go learn about the Dunning-Kruger effect and don't try to sound smart if you don't truly have a deep understanding of the subject matter.

Good explanation. What are some practical uses outside of addition that a monad makes sense for? I imagine there is a broad category of situations you could use them, just wondering if you had any examples.

those are monadic functions, same term, different meaning.

you're fucked if g(a) raises an exception or writes something to a file

it's what happens when your type system is nearing the expressive completeness of your code.

It's a structure describing a sequence of actions. I think the background behind their introduction to Haskell might give some insight. Haskell had terrible IO in the past, because describing IO in a lazy language is just difficult; it's impossible with regular functions because the order of evaluation isn't known. The idea is that your program instead produces a structure describing a sequence of steps that Haskell's runtime will execute. These steps may return values (like 'getLine'), which is why the type is 'IO a' and not just 'IO'.

It turns out (the IO monad was actually noticed after other ideas) that a lot of patterns are better expressed as a sequence of steps, like error handling (because you want to do things in sequence until an error happens) or parsing. Monads are also general enough that various other things that don't look like sequences turn out to be monads too.

Btw algebraic effects are the next cool thing because monad transformers are dicks.

they let you automatically insert computation between steps describing an algorithm

they are equivalent to objects in javascript

More like the equivalent of factories.

Explain how factories are equivalent to monads.

A monoid or a monad? Monoids are just types that have operations that follow the monoid laws. Monoids are pretty much everywhere once you know to look. They're any binary, associative operation that contains an identity. They're useful to notice because it's easy to combine any two monoids. Typically people will use a monoid to model incrementally processing some data. This makes it easy to combine this monoidal operation with other monoidal operations. Furthermore, because monoids are associative, the order of operation does not matter. This means you can safely run each part in parallel. Another simple example of a monoid is appending lists. Folding is also closely related to monoids.

([a] ++ [b]) ++ [c] = [a] ++ ([b] ++ [c]) -- associativity
[a] ++ [] = [a] -- appending the empty list is the identity
[] ++ [a] = [a]
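Concretely, in GHCi (plain Prelude):

> mconcat [[1], [2,3], [4]]
[1,2,3,4]
> foldr (<>) mempty ["foo", "bar", "baz"]
"foobarbaz"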


Monads, on the other hand, are a way of operating on data contained inside some structure, following laws that mirror the monoid laws. The most common monad in Haskell that everyone works with is the IO monad. It's worth noting that monads don't magically make impure code pure; it's just that the IO monad lets you interface your pure code with impure IO actions. It's a bit harder to explain, but if you think about IO, the monoid-like laws make a bit more sense. Here's some simple Haskell IO written with >>= directly, and the equivalent do syntax. It just reads two lines from the user, concatenates them, and prints them back out.

main :: IO ()
main =
  getLine >>= \x ->
  getLine >>= \y ->
  putStrLn (x ++ y)
-- the same thing in do notation:
-- main = do
--   x <- getLine
--   y <- getLine
--   putStrLn (x ++ y)

bartoszmilewski.com/2014/12/23/kleisli-categories/
I actually found this link useful and it wasn't even specifically a monad tutorial

They return objects (code/operations in Haskell's case) that encapsulate state (side-effect).

I think I came out of this thread only SLIGHTLY less confused. I also realized that I have a shitton more to learn. Thanks everybody.

Let's have more of these than consumerist whore generals.

Gonads :DDD

What's still confusing you?

Nothing that hasn't been explained. I lack the building blocks to truly grasp why the monoid rules are significant to a monad and don't completely understand all the underlying terminology.

Going to open up a functional programming book and reread this thread a few times.

an encapsulator for side-effects
/thread

Don't think you need to go through category theory to program in a functional language. You can understand how to use them without knowing any category theory. If you really want to know you can look at "Category Theory for Programmers" or any of that guy's resources on category theory. If you want to get into functional programming you could pick up Haskell and then switch to other impure functional languages like F# or Scala. "Haskell Programming from First Principles" is pretty good.

youtube.com/watch?v=E8I19uA-wGY
This is a pretty good talk for people more familiar with OOP looking into FP.

youtube.com/watch?v=XrNdvWqxBvA
This is a good overview on some functional concepts.

youtube.com/watch?v=IOiZatlZtGU
I found this pretty interesting for an introduction to computability theory and the Curry–Howard correspondence.

taken from Haskell's Monad instance for Maybe:

instance Monad Maybe where
  return x = Just x

  (Just x) >>= k = k x
  Nothing >>= _ = Nothing


can be used like
addOne :: Maybe Integer -> Maybe Integer
addOne m = m >>= (\x -> return x + 1)


they're not really that crazy. as you can see, you get short circuiting if you were to pass addOne Nothing. Return puts the value back into context. There's more complicated monads of course but desu if you read the source code they make a lot of sense.

not to be pedantic or anything, but you should have put
return (x+1)

but why do people say monads do state stuff
objects in javascript dont do any state stuff, they're just containers for data

I only got the intuition after I read "Haskell Programming from First Principles". I just didn't expect it to be so simple. Basically, monads are structures for which join (or flatmap, as another anon wrote) is defined.

What's the point? That (great) book says it's not about side effects at all. If monads didn't exist, Haskell could still do side effects some other way.
Monads are just a way to compose functions that generate additional structure.
Basic example: you have two functions and both take a list and generate a list of lists. The more you apply these functions, the more structure is generated and the more complex your program becomes. But if you don't need your original list to accumulate inner lists as you go, you can take advantage of join (or flatmap) to merge the inner with the outer structure (in this case lists), provided they are structures of the same type (a list of lists, an IO of IOs)...
That's it. No magic, disappointingly.
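In GHCi that looks like this (join lives in Control.Monad):

> import Control.Monad (join)
> join [[1,2],[3,4]]
[1,2,3,4]
> join (Just (Just 5))
Just 5

And bind is just fmap followed by join: xs >>= f is the same as join (fmap f xs).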

After you get what join does (easy) you can say that for it to be defined some rules must be satisfied, as other anons have already stated.

btw, Haskell Programming From First Principles is the best introductory haskell book hands down.

Monads can do state in a purely functional interface by enforcing an order of operations between functions that contain a hidden state variable. From your perspective the state is changing, but it's actually just calling the same function with an updated state variable as input.
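Here's a minimal sketch of that mechanism in Haskell (a hand-rolled State type, not the real Control.Monad.State, just to show how bind threads the hidden state variable):

newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap f (State g) = State $ \s -> let (a, s') = g s in (f a, s')

instance Applicative (State s) where
  pure a = State $ \s -> (a, s)
  State f <*> State g = State $ \s ->
    let (h, s')  = f s
        (a, s'') = g s'
    in (h a, s'')

instance Monad (State s) where
  State g >>= k = State $ \s -> let (a, s') = g s in runState (k a) s'

-- "increment a counter" with no mutation anywhere: bind passes the updated state along
tick :: State Int Int
tick = State $ \n -> (n, n + 1)

-- runState (tick >>= \a -> tick >>= \b -> pure (a + b)) 0  ==  (1, 2)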

>polymorphic
That is not polymorphism, you retard. What you are referring to is called generics. The monad type is generic. There is no polymorphism involved unless we have a class hierarchy and an instance that can be seen as multiple types.

The Integer example was perfectly valid; it could have been done with String, could have been done with arrays, whatever.

ITT: genetically defective retards inventing their own "secret language"
Lisp gen 2 - a list processing language for retards, but it also makes you think you're not retarded

Good explanation

It's a fucking ugly hack in Haskell to do imperative operations and make Haskell useful