What are the limits of neural networks and deep learning, regarding making waifu crap?

like, can I code and train a neural network to make animated waifu porn?

Better translations would be nice.

before you try that, could you code an autotagger for the existing waifu porn?
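
an autotagger is basically just multi-label classification. here's a minimal sketch of the idea (the tag list, layer sizes, and everything else below are invented for illustration; a real booru tagger would have thousands of tags and a much bigger net):

```python
import torch
import torch.nn as nn

# Hypothetical tag vocabulary; a real autotagger would have thousands.
TAGS = ["1girl", "blue_hair", "school_uniform", "smile"]

# Toy convnet: image in, one independent probability per tag out.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, len(TAGS)),
)

# Multi-label, so sigmoid-per-tag + binary cross-entropy, not softmax.
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(images, tag_targets):
    """images: (B, 3, H, W) floats; tag_targets: (B, len(TAGS)) of 0/1."""
    logits = model(images)
    loss = loss_fn(logits, tag_targets)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def tag(image, threshold=0.5):
    """Return every tag whose predicted probability clears the threshold."""
    probs = torch.sigmoid(model(image.unsqueeze(0)))[0]
    return [t for t, p in zip(TAGS, probs) if p > threshold]
```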

There is not a sufficient sample size. First you would have to produce a whole lot of waifu porn, including transitional stages. Then there will be cost issues, because 2D girls still have multidimensional features. One could come up with a model that changes waifu hair/eye/dress colors, but not additional angles and positions.
Do you mean VN waifus or animated ones btw?

yeah, probably, if you had a supercomputer advanced enough to simulate enough neurons. neural networks are used for function approximation. as in, you have a function that takes arbitrary input(s) and outputs arbitrary value(s). a function is like y = 3x^2 + 5, except you don't know exactly what the function is or does.

neural networks are functions whose outputs depend on how each single neuron affects its inputs. send values to input neurons, those neurons will output to more neurons in the network through a chain until you get to the end. those output neurons give you the output of your function.

each neuron's role is very simple: to change the value of its input based on its given weight. as i understand it, all the weight does is scale its input by some value, e.g. with a weight of 0.5, an input of 500 becomes 500*0.5 = 250. (in practice a neuron also sums its weighted inputs and passes the sum through a nonlinearity, but scaling is the core of it.) like this, you can string many neurons together, each with different weights, and get complex reactions.

there's an essentially unlimited number of possible results if you change the weights. say you had 1 million inputs, each being a pixel, and 2 outputs: one for whether a dog is in the picture, and one for whether a cat is.

at first, the neuron weights are all just random, so if you give it a picture of a cat, its many inner connections will react and just output something meaningless and wrong. but you can use something called 'backpropagation' to change the weights of each neuron, in turn changing the output values. the idea is that if you change the chains of weights inside the network in just the right way, the network can 'learn' to respond to images or whatever and output the values that you want. it's effectively just a very complex function that takes some inputs and gives outputs. tweak the weights in just the right way, and you can approximate a function that takes a picture as its input and outputs true or false depending on whether a cat is in it.
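
here's that whole loop as a minimal numpy sketch (everything below is illustrative: a tiny 2-layer net learning XOR, with the chain-rule gradients written out by hand):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR, the classic problem that needs a hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights start out random, exactly as described above.
W1 = rng.normal(0, 1, (2, 4))
W2 = rng.normal(0, 1, (4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass: values flow through the chain of weighted neurons.
    h = sigmoid(X @ W1)      # hidden layer activations
    out = sigmoid(h @ W2)    # network outputs

    # Backpropagation: the chain rule applied from the output back to the input.
    err = out - y                          # error at the output
    d_out = err * out * (1 - out)          # error signal at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)     # error signal pushed back to the hidden layer

    # Gradient descent: nudge every weight against its error derivative.
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h

print(out.round(2))  # should end up close to [[0], [1], [1], [0]]
```

modern frameworks compute all those derivatives automatically; nobody writes the backward pass by hand outside of exercises like this.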

som1 pls explain backpropagation i dont understand

hydrusnetwork.github.io/hydrus/

I don't see why you couldn't make a computer draw anime or hentai. It's a subset of teaching a computer to draw in general, something people are already working on.

The problem is making it draw a waifu - sure, it could give you a moeblob or something, but can it draw a series of images conveying a complex enough character that fat neckbeards' hearts will go aflutter? What will it take to get a man who's given up on real women to tell your entirely computer-generated girl he wants to protect that smile?

You could try just feeding a shit tonne of hentai to a generative adversarial network: github.com/Newmu/dcgan_code

...and see what it makes. but DCGANs are fairly new (2014, I think) and very tricky to train.
Also, no one I know of has successfully gotten one to produce good images at a resolution higher than 256x256, or videos. Wait a few years...
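
the core training loop is dead simple even if getting it to converge isn't. a rough sketch of the two-player game (this is a generic GAN skeleton with made-up layer sizes, not the actual dcgan_code architecture, which uses convolutions):

```python
import torch
import torch.nn as nn

Z_DIM, IMG_DIM = 100, 64 * 64 * 3  # illustrative sizes, not dcgan_code defaults

# Generator: random noise in, flattened fake image out.
G = nn.Sequential(nn.Linear(Z_DIM, 512), nn.ReLU(),
                  nn.Linear(512, IMG_DIM), nn.Tanh())

# Discriminator: image in, "probability it's real" out.
D = nn.Sequential(nn.Linear(IMG_DIM, 512), nn.LeakyReLU(0.2),
                  nn.Linear(512, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch):
    b = real_batch.size(0)
    ones, zeros = torch.ones(b, 1), torch.zeros(b, 1)

    # 1. Train D to tell real images from G's forgeries.
    fake = G(torch.randn(b, Z_DIM)).detach()
    loss_d = bce(D(real_batch), ones) + bce(D(fake), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2. Train G to fool D into saying "real".
    fake = G(torch.randn(b, Z_DIM))
    loss_g = bce(D(fake), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```

the 'tricky to train' part is that D and G have to stay roughly balanced; if either side wins outright, learning stalls.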

web scraping.
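
in practice that means hitting a booru's JSON API, since the tags come bundled with the images. a rough sketch (Danbooru-style endpoint and field names from memory; check the site's API docs and rate limits before trusting any of this):

```python
import requests

# Endpoint and field names are assumptions based on Danbooru's public API.
BASE = "https://danbooru.donmai.us/posts.json"

def scrape(tag, pages=5):
    for page in range(1, pages + 1):
        posts = requests.get(BASE, params={"tags": tag, "page": page}).json()
        for post in posts:
            url = post.get("file_url")
            if not url:
                continue
            with open(f"{post['id']}.jpg", "wb") as f:
                f.write(requests.get(url).content)
            # The tags ride along for free, which is the whole point of boorus.
            print(post["id"], post.get("tag_string", "")[:60])

scrape("1girl", pages=1)
```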

Backprop is just using the chain rule from calculus to compute the derivatives of the error wrt network weights.

It's literally just the chain rule applied to functions with matrix multiplies and nonlinearities. If you've done calc 3 and linear algebra you should be able to derive the equations yourself.

i took calc 1 in high school. forgot it all though. it's rough how each course builds on everything from previous courses. if i'd learned up to calc 3, by that time trig and calc 1 would have gone out the other end

>What are the limits of neural networks and deep learning, regarding making waifu crap?
Artifact removal is pretty decent, like the removal of JPEG artifacts as seen here.

However, “sharp” upscalers will generally end up looking unnatural no matter how good you make them, because the information that our eye expects to be there is simply missing.

What are your thoughts about wetware neural networks? Could we recreate the structure of human brains for artificial purposes and train/use them in any meaningful way?

Seems like the problem will be threefold:

1. Figuring out how to interface with them
2. Figuring out how to train them well
3. Figuring out how to convince the hippies to let you breed and harvest human brains for industrial purposes

Basically the chain rule lets you get the derivatives when you have a function of a function, e.g. find h'(x) when h(x)=f(g(x)) or h(x)=sin(x^2+5)

Neural nets with m layers are just a sequence of m 'functions-of-functions', so you can use the chain rule to get the derivatives of the error wrt everything in the network.
The error derivative of the weights in any layer depends on the derivatives in the layer above, so the error signals must 'backpropagate' through the network.
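
written out explicitly (notation is mine, not from anyone in this thread): if layer l computes a_l = f(z_l) with z_l = W_l a_{l-1}, then

```latex
% error signal at the output layer L:
\delta_L = \frac{\partial E}{\partial a_L} \odot f'(z_L)
% propagated back one layer at a time:
\delta_l = \left(W_{l+1}^{\top}\,\delta_{l+1}\right) \odot f'(z_l)
% gradient for the weights of layer l:
\frac{\partial E}{\partial W_l} = \delta_l \, a_{l-1}^{\top}
```

each delta_l depends on delta_{l+1}, which is exactly the 'depends on the layer above' recursion described.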

I don't know much about neuroscience but I don't think we understand wetware well enough to be trying that yet.

>That image
Sorry my autism.
Here's an actual photoshopped upscale.
Sure it's not as good as a custom program made to upscale and only upscale, but stop spreading disinfo.

Also, this is the best quality I've gotten from that website with the same image.

>Wow look at that, both are blurry

Both Photoshop and waifu2x look bad.
Not even into this stuff, just don't like blob images.

it's no secret that waifu2x is garbage

>There is not a sufficient sample size
I have three million tagged images of waifus stored locally.
I could expand that if I wanted.

Maybe try using an image that isn't tiny artifacted garbage first

garbage in, garbage out. no algorithm we can develop at this point is advanced enough to fill in details that were never there in the original image

waifu2x is always garbage out.

Even when I'm not using any plugins I can get decent upscaling in PS.

Our brains are basically just neural networks, so in principle artificial ones can do as much as we can and more. It's just a matter of how much training data and processing power you can throw at them.

it'll be a decade until waifu2x can begin to look acceptable.

Well the source is blurry

No way, what settings did you use for that?

Never mind I see now it's fake

he downscaled the frame from the actual source of the 120x120 jpeg.

I upscaled it with smoother bicubic, then used an edge tool and smoothed out the artifacts.
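
roughly the same recipe in Pillow, for anyone without Photoshop (the filter choices and filenames are my guess at what 'smoother bicubic plus an edge tool' amounts to; tune to taste):

```python
from PIL import Image, ImageFilter

img = Image.open("waifu_120.png")  # hypothetical filename

# "Smoother bicubic": plain bicubic resampling up to 4x.
big = img.resize((img.width * 4, img.height * 4), Image.BICUBIC)

# Stand-in for the edge tool + smoothing: blur away the blockiness,
# then sharpen the edges back up.
smooth = big.filter(ImageFilter.GaussianBlur(radius=1.5))
result = smooth.filter(ImageFilter.UnsharpMask(radius=2, percent=120, threshold=3))

result.save("waifu_480.png")
```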

Wrong.

this waifu2x thing is pretty amazing

confirmed for blind

compared to the original, it's much better
also oops with that name

The eyelids make it obvious. Those separate lines above the eyes in the original 120x120 don't exist, yet in your upscaled version they have appeared out of nowhere.

lol @ the major warping and artifacting

>then used an edge tool and smoothed out the artifacts.

do it with straight resampling and global filters

I don't see anything wrong

see

The image OP posted has been put through JPEG compression, which the original image wasn't.
I found the 240x240 image without the artifacts and upscaled that, just like the person who did the initial upscale did.
The 120x120 is edited; it's not even the same font as the rest of the text.
Do some research before assuming things.

>Do some research before assuming things.

Yeah, sure. Next time I'll do some research on a random image posted to Sup Forums. I'm sure there are many published papers I can look up dealing with replacing the number 240 with 120. I know one guy won a Nobel Prize for proving that sometimes people post lies on the internet

So instead of cheating in that particular way, you cheated in another. Kay.

No, I posted the result of what it'd look like if the image hadn't been tampered with, while keeping it at 120 pixels, since that's what the topic was about, even if it's false.

One should never lie, internet or not, even if it's tempting.

>train a NN with lots of screenshots
>generate new ones
>post in screenshot threads

Doesn't make sense. If that were the case, the waifu2x image in OP would be much better. Here's what I got trying to process your supposed original. Much better than the one in OP.

And this isn't even the best it can do, because the original in your image isn't 240x240, it's more like 187x187. If you post the supposed 240x240 image, I'm sure waifu2x will do fantastically.

Not him, but other than the D of the AMD logo on her shirt, I see nothing wrong either.

Sure it might, but that's not the point.
The point is that the one that was supposed to be the Photoshop one looks as if it was upscaled in MS Paint

To prove that, you'll have to show a better Photoshop upscale based on the 120px image in OP, because that is what the waifu2x one is based on, as I proved. Otherwise it wouldn't be a fair comparison.

This one exists too, hang on.
illustration2vec.net

this one i upscaled from a picture of the panel next to it that was like 300px

>Moving the goalposts
OP's image shows upscales from a 120^2 file.
You went and took a 240^2 file and upscaled that in an attempt to prove Photoshop's upscale to be superior. Using a superior source.

The kicker!
Waifu2x does more than just upscale. You can also use it to only remove artifacts.
The comparison you took the source file from is comparing artifact reduction, not upscaling.

>You can also use it to only remove artifacts.
yet you can't remove the metric fuckton of artifacts and warping waifu2x introduces

do you need an 800x600 CRT from the late 90s to actually think waifu2x looks good?

Do you fear neural networks or something?
This upscaler can learn to do better.

Also fuck Sup Forums and its filtering of Unicode.
*120^2 *240^2

I'm actually upgrading my PSU so I can fit my old GTX 560 Ti alongside my R9 390, so I can take advantage of the CUDA cores needed to run waifushit

there's an OpenCL implementation, runs stupid fast on AMDGPU

but i like saying the words cuda cores

How do you guys set up these deep learning things, anyway?

I double click the icon