Write a program in your language of choice that does the following:

- load an image (source image)
- get a list of all the colors in the source image
- create 2 blank images, (image1 and image2)
- draw a random polygon or circle on image1 using a random color from source image
- compare image1 to the source image
- if it's more like the source image than image2 copy image1 to image2, if not copy image2 to image1 and continue drawing more random shapes and comparing
- post the results (a rough sketch of the whole loop follows below)
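
A rough Go-flavored sketch of that loop (loadImage, collectColors, randomShape, diff and iterations are hypothetical stand-ins for whatever you implement, not real APIs; image is the standard package):

// Sketch only: loadImage, collectColors, randomShape and diff are hypothetical helpers.
src := loadImage("source.png")      // the source image
palette := collectColors(src)       // list of all colors in the source
img1 := image.NewRGBA(src.Bounds()) // working canvas
img2 := image.NewRGBA(src.Bounds()) // best-so-far canvas
for i := 0; i < iterations; i++ {
    randomShape(img1, palette)      // draw a random shape in a random source color
    if diff(src, img1) < diff(src, img2) {
        copy(img2.Pix, img1.Pix)    // the shape helped: keep it
    } else {
        copy(img1.Pix, img2.Pix)    // the shape hurt: roll it back
    }
}
// post/save img2 as the result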

>load an image
Define "load"

open an image file on your hard drive

Image type: png 1500 x 1000
Evolution: 999469/1000000
real 9m56.853s
user 10m2.981s
sys 0m2.345s


Come on anons, let's see what you can do.

We're not doing your homework for you user.

>Search github
>Download github.com/fogleman/primitive
>Apply to ffmpeg exploded sequences
>Make dank vaporwave shit

instagram.com/p/BK4I8mFATrf/

import stupid_anon_challenge as challenge
challenge.run()

define "compare"

- you must use ed as your editor

>if it's more like the source image than image2 copy image1 to image2, if not copy image2 to image1 and continue drawing more random shapes and comparing

What?

Sounds like double buffering to maintain a temporary last-good copy.

no

no

Is that bailey jay?

Ok so you have two copies of the same image now. What do you do next?

>do my homework for me Sup Forums

fixed that for you.
Also
No

Can someone please explain how you compare the source to the temp image? Conceptually, what does that mean?

loop through all pixels and compare colors

It is to check whether your random change to image1 brought it closer to the source image than image2 is. If it is closer to the source image, the random change was "good". If it is closer to image2 (the previous best), the change was "bad". The whole idea is to mimic the source image with polygons.
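
A naive version of that comparison, as a sketch in Go (not code from the thread; assumes the standard image package is imported): loop over every pixel and sum the squared per-channel differences, where a lower total means a better match.

// naiveDiff returns the summed squared RGB difference between two
// images of the same size. Lower means "more like" the other image.
func naiveDiff(a, b image.Image) int64 {
    bounds := a.Bounds()
    var total int64
    for y := bounds.Min.Y; y < bounds.Max.Y; y++ {
        for x := bounds.Min.X; x < bounds.Max.X; x++ {
            aR, aG, aB, _ := a.At(x, y).RGBA()
            bR, bG, bB, _ := b.At(x, y).RGBA()
            dr := int64(aR) - int64(bR)
            dg := int64(aG) - int64(bG)
            db := int64(aB) - int64(bB)
            total += dr*dr + dg*dg + db*db
        }
    }
    return total
}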

That sounds slow as hell

How do you implement it?

Like that anon said, you can loop and compare pixels.

There are probably better methods

is there another way?

So do every other pixel, every 3rd pixel, etc.

Maybe something like counting the number of pixels whose color values are within an arbitrary distance of the source image's pixel at the same location.
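
A sketch of that counting approach in Go (illustrative, not from the thread; tol is an arbitrary tolerance in the 16-bit channel units RGBA() returns): whichever candidate has more pixels "close enough" to the source wins.

// closeEnough counts pixels in img whose color is within Euclidean
// distance tol of the source image's pixel at the same location.
func closeEnough(src, img image.Image, tol int64) int {
    bounds := src.Bounds()
    count := 0
    for y := bounds.Min.Y; y < bounds.Max.Y; y++ {
        for x := bounds.Min.X; x < bounds.Max.X; x++ {
            sR, sG, sB, _ := src.At(x, y).RGBA()
            iR, iG, iB, _ := img.At(x, y).RGBA()
            dr := int64(sR) - int64(iR)
            dg := int64(sG) - int64(iG)
            db := int64(sB) - int64(iB)
            if dr*dr+dg*dg+db*db <= tol*tol {
                count++
            }
        }
    }
    return count
}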

Compare only the pixels (or averaged values) covered by the new shape, like post-process antialiasing.

Instead of every pixel, just average the colors of each 4x4 or 8x8 pixel grid cell and compare those averages instead. Then once it's an exact match, make the polygons smaller and the grid cells half as large.
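
A sketch of that block-average comparison in Go (illustrative, not from the thread), called e.g. as blockDiff(src, img1, 8) for an 8x8 grid:

// blockDiff compares per-block channel averages instead of every pixel,
// trading accuracy for far fewer comparisons.
func blockDiff(a, b image.Image, block int) int64 {
    bounds := a.Bounds()
    var total int64
    for by := bounds.Min.Y; by < bounds.Max.Y; by += block {
        for bx := bounds.Min.X; bx < bounds.Max.X; bx += block {
            var aSum, bSum [3]int64
            var n int64
            for y := by; y < by+block && y < bounds.Max.Y; y++ {
                for x := bx; x < bx+block && x < bounds.Max.X; x++ {
                    aR, aG, aB, _ := a.At(x, y).RGBA()
                    bR, bG, bB, _ := b.At(x, y).RGBA()
                    aSum[0] += int64(aR)
                    aSum[1] += int64(aG)
                    aSum[2] += int64(aB)
                    bSum[0] += int64(bR)
                    bSum[1] += int64(bG)
                    bSum[2] += int64(bB)
                    n++
                }
            }
            // compare the averaged colors of this cell
            for c := 0; c < 3; c++ {
                d := aSum[c]/n - bSum[c]/n
                total += d * d
            }
        }
    }
    return total
}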

The thing is, if you add a circle with r=10 at a random location on image1, that only covers a few hundred pixels. I think you can get away with comparing just those same pixels in your source image and image2? So you don't need to compare the whole image every time, only the few pixels you changed on image1.
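
A sketch of that local comparison in Go (illustrative, not the thread's code): only the pixels inside the freshly drawn circle are scored, so the cost scales with the shape, not the image.

// circleDiff sums squared RGB differences between a and b, but only for
// pixels inside the circle of radius r centered at (cx, cy).
func circleDiff(a, b image.Image, cx, cy, r int) int64 {
    bounds := a.Bounds()
    var total int64
    for y := cy - r; y <= cy+r; y++ {
        for x := cx - r; x <= cx+r; x++ {
            if !image.Pt(x, y).In(bounds) {
                continue // outside the image
            }
            if (x-cx)*(x-cx)+(y-cy)*(y-cy) > r*r {
                continue // inside the bounding box but outside the circle
            }
            aR, aG, aB, _ := a.At(x, y).RGBA()
            bR, bG, bB, _ := b.At(x, y).RGBA()
            dr := int64(aR) - int64(bR)
            dg := int64(aG) - int64(bG)
            db := int64(aB) - int64(bB)
            total += dr*dr + dg*dg + db*db
        }
    }
    return total
}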

That virgin killer sweater?

yes

Guys, I think you needn't compare pixel by pixel. The whole point of comparing the images is finding the average color of the newly drawn area on srcImg. The "closer" image is the one whose shape color is closer to the color average it covers.

>color average it covers.
*the color average of the area it covers on srcImg
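
A sketch of that averaging step in Go (illustrative, not from the thread): average the source pixels under the shape's bounding rectangle, then keep whichever candidate's shape color is nearer to that average.

// avgColor returns the average 16-bit R, G, B of src inside rect.
func avgColor(src image.Image, rect image.Rectangle) (r, g, b int64) {
    rect = rect.Intersect(src.Bounds())
    n := int64(rect.Dx() * rect.Dy())
    if n == 0 {
        return 0, 0, 0
    }
    for y := rect.Min.Y; y < rect.Max.Y; y++ {
        for x := rect.Min.X; x < rect.Max.X; x++ {
            pr, pg, pb, _ := src.At(x, y).RGBA()
            r += int64(pr)
            g += int64(pg)
            b += int64(pb)
        }
    }
    return r / n, g / n, b / n
}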

what?

I'll let it run for a bit, but wouldn't it be better just to take an average over a specified region or some shit?
Seems unnecessary to actually randomize position and color from the reference image.

>tfw hyper-intelligent artist now

Dumb me didn't read the entire instructions.

Looks cool. Do another one.

can you post the original?

ok
It's too lewd, sorry.

I thought she was Nagato.

/*
 * src      = source image
 * temp     = temporary image with a random figure added
 * result   = best approximation so far
 * w        = weight (similarity score) of temp
 * w_result = weight of result
 */
int w = 0;
for (int i = 0; i < width * height; ++i)
    w += (src[i] == temp[i]) ? 1 : -1;
if (w > w_result) {
    copy(temp, result);   /* temp is a better match: keep it */
    w_result = w;
}

What language did you use?

Here's the original pic:
my.mixtape.moe/qcccfz.png
C++ with OpenCV, here's the code:
ghostbin.com/paste/kuay8
For some reason it doesn't do the bottom of the image, probably mixed up rows and cols somewhere.

Wow. Thank you. I didn't even have to ask :-)

Fuck off with your homework

...

What's better: dots or pies?

Oops

loser detected. why are you on this board if you cannot read or understand anything at all?

cute
I think pies are better.

I want to **** 2B

Not really, it's just h x w calculations, so for a 1920 x 1080 image that's about 2.07 million calculations.

I calculate Euclidean distance for a square covering the newly filled circle.
Hamming distance should work too.

better just to compare the newly filled region, instead of 2.07 million calculations you get 400 if you have a radius of 10, ideally around 314 if you only look at the circle

I'm sure you can do it with a few vectorized matrix operations using MATLAB or some scientific library for Python.

I am not this far in learning C yet :(

Solved why it didn't work for the lower part of the image.
OpenCV's cv::Point takes (x, y), the reverse of (row, col) order, and that's what I had mixed up.

// img = source image, fout = working canvas, sout = best-so-far canvas,
// rad = circle radius, dist() = per-pixel color distance (all set up earlier in the paste)
for (;;) {
    int randr = rand() % img.rows;
    int randc = rand() % img.cols;

    // draw a filled circle at (randc, randr) in the color of a random source pixel
    circle(fout, Point(randc, randr), rad,
           Scalar(img.at<Vec3b>(Point(rand() % img.cols, rand() % img.rows))), -1);
    firstSim = 0.0;
    secondSim = 0.0;

    // compare only the bounding box of the new circle against the source
    for (int k = (randr - rad < 0) ? 0 : randr - rad; k < randr + rad; ++k) {
        if (k == img.rows) break;
        for (int l = (randc - rad < 0) ? 0 : randc - rad; l < randc + rad; ++l) {
            if (l == img.cols) break;
            firstSim += dist(img.at<Vec3b>(Point(l, k)), fout.at<Vec3b>(Point(l, k)));
            secondSim += dist(img.at<Vec3b>(Point(l, k)), sout.at<Vec3b>(Point(l, k)));
        }
    }
    if (firstSim < secondSim) sout = fout.clone(); // the circle improved the match: keep it
    else fout = sout.clone();                      // otherwise roll back

    imshow("win", sout);
    char ekey = waitKey(1) & 0xFF;                 // keep the highgui window responsive
}

...

pies vs dots

Doing it with straight lines looks fucking terrible, as you would expect.

Sped up the slow comparison by processing every line of the images in parallel:

func diff(a, b image.Image) int64 {
    imgw := a.Bounds().Dx()
    imgh := a.Bounds().Dy()
    ch := make(chan int64, imgh)
    // one goroutine per row; each sends its row's error down the channel
    for y := 0; y < imgh; y++ {
        go func(line int) {
            var dif int64
            for x := 0; x < imgw; x++ {
                aR, aG, aB, _ := a.At(x, line).RGBA()
                bR, bG, bB, _ := b.At(x, line).RGBA()
                // squared per-channel differences so opposite errors can't cancel out
                dr := int64(aR) - int64(bR)
                dg := int64(aG) - int64(bG)
                db := int64(aB) - int64(bB)
                dif += dr*dr + dg*dg + db*db
            }
            ch <- dif
        }(y)
    }
    var total int64
    for y := 0; y < imgh; y++ {
        total += <-ch
    }
    return total
}

how do you decide the rotation of the line?

The rotation is implicit; the line is just specified by two random endpoints.

// img = source image, img1 = working canvas, img2 = best-so-far (set up earlier in the program)
for i := 0; ; i++ {
    // pick two random endpoints and a random color from the source palette
    x1 := rand.Intn(w)
    y1 := rand.Intn(h)
    x2 := rand.Intn(w)
    y2 := rand.Intn(h)
    clr := palette[rand.Intn(len(palette))]

    // draw the candidate line on the working image
    bresenham.Bresenham(img1, x1, y1, x2, y2, clr)

    // keep the line only if it brings img1 closer to the source than img2
    if diff(img, img1) < diff(img, img2) {
        copy(img2.Pix, img1.Pix)
    } else {
        copy(img1.Pix, img2.Pix)
    }
    // save a snapshot every 200 iterations
    if i%200 == 0 {
        save(img2)
    }
}

post the original pls

Oh right, now I get it.
You can speed up the comparison if you shrink the search area to the rectangle in which the line is the diagonal.
There's probably an even better way, but I can't think of one right now.
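
That bounding-rectangle idea as a Go sketch (illustrative, not the poster's code): diff only inside the rectangle spanned by the line's two endpoints, and score both candidates over the same rectangle.

// lineDiff compares a and b only inside the bounding rectangle of the
// line from (x1, y1) to (x2, y2), clipped to the image bounds.
func lineDiff(a, b image.Image, x1, y1, x2, y2 int) int64 {
    // image.Rect canonicalizes the corners; Inset(-1) grows the rect by one
    // pixel so the far endpoint is included despite the exclusive max bound.
    rect := image.Rect(x1, y1, x2, y2).Inset(-1).Intersect(a.Bounds())
    var total int64
    for y := rect.Min.Y; y < rect.Max.Y; y++ {
        for x := rect.Min.X; x < rect.Max.X; x++ {
            aR, aG, aB, _ := a.At(x, y).RGBA()
            bR, bG, bB, _ := b.At(x, y).RGBA()
            dr := int64(aR) - int64(bR)
            dg := int64(aG) - int64(bG)
            db := int64(aB) - int64(bB)
            total += dr*dr + dg*dg + db*db
        }
    }
    return total
}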

Oh yeah, changing that now.

Also I think the image package is keeping the source jpeg in the YCbCr color space in memory, so it gets converted every time RGBA() gets called. That's another bottleneck.
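
One way around that (a sketch, not from the thread; assumes the standard image and image/draw packages): convert the decoded JPEG into an *image.RGBA once, up front, so later At().RGBA() calls are plain array lookups instead of YCbCr conversions.

// toRGBA copies any image.Image (e.g. the *image.YCbCr a JPEG decodes to)
// into an *image.RGBA so pixel reads skip the color-space conversion.
func toRGBA(src image.Image) *image.RGBA {
    dst := image.NewRGBA(src.Bounds())
    draw.Draw(dst, dst.Bounds(), src, src.Bounds().Min, draw.Src)
    return dst
}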

Don't bother with jpeg, make it only work with png.

Bow to me.

Lines are trash.
Manhattan distance top,
Euclidean distance bottom.

Lines are trash. Even after 4200000 iterations you get spooky hollow eyes.

I just sped it up a billion times by only diffing along the line drawn, in parallel of course.

Text isn't that much better,
a bit maybe.
Rectangles weren't that interesting either.

don't try to recreate the image perfectly, just generate something that looks cool

Could we train software this way to reverse the process of abstraction? Machine-learning software that would complete unclear pictures.

maybe

I think that's already how it works.

...

What would happen if you used two different source pictures, for example two different heads of the same size? I wanna see a merged face!

That looks amazing!

Would you post the code? I love it!

>pies vs dots
>not swastikas
baka

You should extend the process to include blurring or some other type of finishing flourish. What you have made here is pretty fantastic.

looks really nice, post code please

this shit reminds me of that PS plugin called fractalius

Also what's the easiest way to load an image into an array using C? I wrote something a while ago to create voronoi diagrams with random points but I'd like to use pics as a starting point.

...

What shit lang is this with the mantis shrimp eyes := syntax.
Fuck this.

pastebin.com/LH4fgck6

Try stb_image.h.

Did you randomize sphere size?

Looks like Go. := is the short variable declaration operator, so you don't have to waste extra characters specifying the type of the variable.

var op string = "anon"

// identical to

op := "anon"

Yes, and they also start very large and get smaller over time.

For example, a random radius between 512 and 1024 px for about 50 iterations, then smaller and smaller down to 4-8 px.
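
A sketch of such a shrinking-size schedule in Go (illustrative; the iteration thresholds are made up, not the poster's exact numbers; assumes math/rand):

// radiusFor picks a circle radius that shrinks as iterations go on:
// huge at first to block in large areas, tiny later for fine detail.
func radiusFor(iteration int) int {
    switch {
    case iteration < 50:
        return 512 + rand.Intn(513) // 512..1024 px
    case iteration < 5000:
        return 32 + rand.Intn(97) // 32..128 px
    default:
        return 4 + rand.Intn(5) // 4..8 px
    }
}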

You mean, when using :=, it *assumes* what the type is by looking at the rvalue?

Go is statically typed, so these inferences are always correct. If you try to do something incorrect with the value (like shove it into a function that expects a different type), your program won't compile.

"a := b" is a short way of saying "var a = b", and "a := 0" is a short way of saying "var a "

It's a not-very-useful readability enhancement taken from Go's predecessor where variables were declared and assigned like this: "a: int = 0". Notice the colon.

Yes, e.g. with "shit := image.Point{123, 456}", the compiler knows the type of "shit" has to be an image.Point struct.

It's very useful because requiring the programmer to tell the computer things the computer already knows (and makes available by way of the AST and tooling) is a waste of time.

I see.

I suppose it is less to edit.

And less to screw up. Small things like that can really add up when your language is going to be used in projects of all sizes by God-only-knows how many people.

See, I would do this, but I already know you did it in Python, whereas I'd be doing it in C, and nobody would appreciate it because everybody is a Python 'programmer'.

>if it's more like the source image
"more like" in what sense?

I often get frustrated by the strictness of the Go compiler, but that strictness has probably saved me a lot of hours by making some common bugs impossible unless you deliberately put them there.

Perceptually closer. You should be able to solve this.

>Perceptually closer.

That doesn't mean anything. What does "perceptually closer" mean to a computer?

This, but not as a challenge. You should always use ed, it's the best editor.

>I often get frustrated by the strictness of the Go compiler, but that strictness has probably saved me a lot of hours by making some common bugs impossible unless you deliberately put them there.
How does it compare to the strictness of C?
>Perceptually closer.
... E.g., closer in RGB values of every pixel at a given (x, y) position. Or less granular averages or whatever for speed prawbz.

I'd give my left testicle if Go would just friggin' support generics already. Right now, it's basically Java with all its bureaucracy, just with slightly better tooling and less XML.

The constant need for boilerplate and the "if err := ...; err != nil" pattern drives me batty.
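
For reference, this is the kind of pattern being complained about, in stock Go (os, log and image/png are the standard library; img2 is whatever image you want to save):

// Every fallible call gets its own check; err is scoped to the if-statement.
f, err := os.Create("out.png")
if err != nil {
    log.Fatal(err)
}
defer f.Close()
if err := png.Encode(f, img2); err != nil {
    log.Fatal(err)
}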