Write a program in your language of choice that does the following:

- load an image (source image)
- get a list of all the colors in the source image
- create 2 blank images (image1 and image2)
- draw a random polygon or circle on image1 using a random color from source image
- compare image1 to the source image
- if it's closer in color to the source image than image2, copy image1 to image2; if not, copy image2 to image1 and continue drawing more random shapes and comparing
- post the results and bits of code

I don't follow orders from an adult posting cartoons with sexual content.

>- post the results and bits of code
You would like that huh

Sure thing. A few bits of my code:
010010100101
Now fuck off.

Refer to
which was a good thread.

Forgot my image.

Do your own homework, faggot.

Here and in the last thread, toward the end I saw more lines drawn than circles.
How do you decide the start and end points of the lines? Fixed length? Just pick a random position and random angle?

pastebin.com/1T7c0Bkk

Stop being a faggot, do the task, and post your results.

Pick two random endpoints from (0,0) to (imgw,imgh), and draw the line segment from one to the other.
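Concretely, that endpoint picking can be sketched like this in Python. I'm using a plain list-of-lists pixel grid as a stand-in for a real image, and the helper names (`line_pixels`, `draw_random_line`) are my own:

```python
import random

def line_pixels(p1, p2):
    """Integer pixels along the segment p1->p2, stepping the longer axis."""
    (x1, y1), (x2, y2) = p1, p2
    steps = max(abs(x2 - x1), abs(y2 - y1), 1)
    return [(round(x1 + (x2 - x1) * t / steps),
             round(y1 + (y2 - y1) * t / steps)) for t in range(steps + 1)]

def draw_random_line(grid, color, rng=random):
    """Pick two random endpoints in [0,w) x [0,h) and plot the segment."""
    h, w = len(grid), len(grid[0])
    p1 = (rng.randrange(w), rng.randrange(h))
    p2 = (rng.randrange(w), rng.randrange(h))
    for x, y in line_pixels(p1, p2):
        grid[y][x] = color
    return p1, p2

canvas = [[(0, 0, 0)] * 32 for _ in range(32)]
p1, p2 = draw_random_line(canvas, (255, 0, 0))
```

With a real imaging library you'd replace the plotting loop with its own line call; the endpoint choice is the same either way.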

using lines, it might be interesting to try to subdivide the image to run the algorithm in parallel

Yeah sure ofc, but do you just pick two random points? I feel it would be better to limit the endpoint to a certain range around the start point, and maybe even keep it a fixed length away from it

push it onto the gpu!

It is better to limit the length, yeah.

I'm going to sleep but, like my penis, my computer is staying up.

Post an image and effect (lines, dots, pies, shake, text[and word])

I'm leaving in 5 seconds

Okay, you nerds missed out!

Later, idiots!

one million iterations
thought of some performance improvements
instead of constantly copying image2 to image1 i only use the source image and image1
i only draw the line if the addition (the line and its intended color) is a closer match to the source than what's currently there
saves a lot of computation not having to copy image2 over to image1 when the line isn't good
it still takes me 20 minutes to run 1,000,000 iterations because my computer is slow as shit
will look into threading the iteration over the two lines (on source and image1), but i've never really done threading before, any tips? (it's C)

Cool homework assignment

another one

You need a minimal line length/shape size or you will be basically plotting pixels.

i'm limiting the maximum length, ill try limiting minimum as well in a minute

half a million iterations.
lines 1 pixel thick, 30 pixels long.
random color and angle.

>if the addition (the line and its intended color) is a closer match to the source compared to the current
How do you check that? By comparing the 2 pixels defining the line, or the whole line?

i iterate over each pixel in the entire line (x1,y1)->(x2,y2) and calculate the euclidean distance
if the distance between the source and the random color is lower than for the current image, i draw the line

How do you compare the images? Colors of the pixel of the drawn line?

Same with a million iters.
Not much improvement.
Fixed draw geometry does produce diminishing returns.

"Distance" as in disparity of color values, right, user?
Also, I didn't know you could literally compare the pixels of the drawn shape exclusively; I thought it had to be a box encompassing the mutated pixels, which is much less efficient.

As was suggested yesterday, a square around the painted part that changed.

The euclidean distance between what?

you can if you do the math, a line is just y = kx, so if you have a 1px wide line you can loop through x and multiply by the slope to get y
with "distance" i mean the similarity of the color vectors
d += sqrt((refColor[0] - cmpColor[0]) * (refColor[0] - cmpColor[0])
+ (refColor[1] - cmpColor[1]) * (refColor[1] - cmpColor[1])
+ (refColor[2] - cmpColor[2]) * (refColor[2] - cmpColor[2]));

>Also, I didn't know you could literally compare the pixels of the drawn shape exclusively; I thought it had to be a box encompassing the mutated pixels, which is much less efficient.
sorry forgot to properly answer this
the idea is that the line will have a constant color, so it's unnecessary to actually draw it before comparing with the source: use y = kx to iterate through the line's pixels directly in the source and just compare the source colors to the fixed randomly picked color
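That evaluate-before-drawing trick can be sketched like this in Python, again on a plain pixel grid with my own helper names. One liberty taken: I sum squared per-pixel distances instead of the sqrt version from the C snippet above, which is a common simplification, though not strictly identical when summing over many pixels:

```python
def color_dist(a, b):
    """Squared euclidean distance between two RGB colors."""
    return sum((ca - cb) ** 2 for ca, cb in zip(a, b))

def line_pixels(p1, p2):
    """Integer pixels along the segment p1->p2, stepping the longer axis."""
    (x1, y1), (x2, y2) = p1, p2
    steps = max(abs(x2 - x1), abs(y2 - y1), 1)
    return [(round(x1 + (x2 - x1) * t / steps),
             round(y1 + (y2 - y1) * t / steps)) for t in range(steps + 1)]

def try_line(source, canvas, p1, p2, color):
    """Draw the line only if the candidate color matches the source
    along those pixels better than what the canvas currently holds."""
    pixels = line_pixels(p1, p2)
    current = sum(color_dist(source[y][x], canvas[y][x]) for x, y in pixels)
    candidate = sum(color_dist(source[y][x], color) for x, y in pixels)
    if candidate < current:
        for x, y in pixels:
            canvas[y][x] = color
        return True
    return False

source = [[(255, 255, 255)] * 8 for _ in range(8)]
canvas = [[(0, 0, 0)] * 8 for _ in range(8)]
accepted = try_line(source, canvas, (0, 0), (7, 7), (255, 255, 255))
```

Nothing is ever copied or undone: a rejected line is never drawn at all, which is exactly the saving described above.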

The problem with this method is that it only works with lines 1 pixel thick, unless there is a way to extract a mapping of the affected pixels from a library like PIL so as to make the comparison child's play.

Thank you for the elaboration, user.

Same but 2 pixel thick.

as long as no antialiasing is involved, you can always use a bitmask for complex shapes.
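A sketch of that bitmask idea in Python: rasterize the shape once into a boolean grid, then touch only the masked pixels when comparing against a candidate color. A filled circle is used here for concreteness; the function names are my own:

```python
def circle_mask(w, h, cx, cy, r):
    """Boolean grid marking pixels inside the circle (no antialiasing)."""
    return [[(x - cx) ** 2 + (y - cy) ** 2 <= r * r for x in range(w)]
            for y in range(h)]

def masked_dist(source, candidate, mask):
    """Sum of squared color distances between source pixels and one
    candidate color, over the masked pixels only."""
    return sum((s - c) ** 2
               for y, row in enumerate(mask)
               for x, hit in enumerate(row) if hit
               for s, c in zip(source[y][x], candidate))

mask = circle_mask(16, 16, 8, 8, 4)
source = [[(10, 20, 30)] * 16 for _ in range(16)]
```

The same mask works for any shape you can rasterize, so the per-iteration cost stays proportional to the shape's area rather than the whole image.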

you can still iterate through the pixels if the width is known, with two nested for-loops, if you know your triangles (t. terry)
calculate the slope of the line and know your line width; from there it's simple trigonometry to calculate the range of pixels to go through for each step in x

good idea, ill do this if i ever implement random polygons

...

im an absolute retard
closing shitstains on the computer speeds things up
i do 10m iterations in under two minutes now
here is my code if anyone is interested
ghostbin.com/paste/5sz2j

neat-o

...

>you can if you do the math, a line is just y=kx so if you have a 1px wide line you can loop through x and multiply your y

From both coords that span the line, you can calculate the slope. From one coord and the slope, you can calculate b in y = mx + b. Fine and dandy. How does this translate to pixels?
For instance, for the line y = (15/11)x - (2017/11), (155, 28) is one starting coordinate that represents its first pixel. But the next coordinate is (157, 30.72727...). Where would this pixel be? Would the y be truncated to 30, putting it at (157, 30)?

It either does this or it rounds, but if you assume the line drawing function rounds when it actually truncates, or vice versa, you could be checking different pixels from the ones actually drawn.
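One common answer is Bresenham's algorithm, which steps pixel by pixel choosing the nearest cell rather than truncating. If your library's rasterizer uses it (typical with antialiasing off), reimplementing it tells you exactly which pixels were touched. A sketch, applied to the example line above:

```python
def bresenham(x1, y1, x2, y2):
    """Classic integer Bresenham: the pixel set most line drawers
    produce when antialiasing is off."""
    pixels = []
    dx, dy = abs(x2 - x1), abs(y2 - y1)
    sx = 1 if x2 >= x1 else -1
    sy = 1 if y2 >= y1 else -1
    err = dx - dy
    x, y = x1, y1
    while True:
        pixels.append((x, y))
        if (x, y) == (x2, y2):
            break
        e2 = 2 * err
        if e2 > -dy:      # step in x when the error says so
            err -= dy
            x += sx
        if e2 < dx:       # step in y (both can fire: a diagonal step)
            err += dx
            y += sy
    return pixels

# the line y = (15/11)x - (2017/11) from the post, between (155, 28) and (166, 43)
pts = bresenham(155, 28, 166, 43)
```

For this line the true y at x = 157 is 30.727..., and this variant lands on (157, 31), i.e. it rounds to the nearest pixel rather than truncating.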

10m iterations
anti-aliasing on left, no aa on right
i'd do rounding conversion so you get the closest pixel and just go with it
(long) (x+0.5)

But user, how do you know that that's what your line drawing function does? You'd have to follow its protocol every time.

And, to be clear, I assume the line drawing function is not your own but OpenCV's or stb's or whatever graphics lib you're using.
Otherwise, this is moot.

alright, i'd assume the line drawing draws on both nearby pixels; read the source if you're autistic enough, or cast to int directly and use x and x+1

Left looks like a sketch, that's pretty cool. How long did it take?

around 2 minutes
ill take a request or two if you want

Better, you should put your code on github, post a link here, and then add that github to your resume.

gitgud.io/sachiko/g-chan-random-lines/
probably wont put it on my resume, i like being a neet

I like how the clouds look in this so much

Nice.
inb4 "how do i install opencv" replies

>write a completely useless program to achieve obvious results

It's okay user, we understand if you don't know how to write simple programs. No need to lash out.

Can you do this please?

2B? More like 2 cute feet :3

that took a while longer to run 10m iterations on; it would probably look better if i limited the max line length even more and did more iterations

Well worth the wait. Can anyone do anything for you in return?

Thanks

np
this is the line length halved and 25m iterations, took 546s

ill leave now, sayonara Sup Forums-chan~

At roughly the same time I posted this, I started a run of my implementation in python.
It's not very fast, and some of the colours are off due to me using a simplified difference function, but I like it.

mind sharing the code?

>copy image1 to image2; if not, copy image2 to image1
Not getting this bit, like a layer on top of another?

Don't do it. Some Pajeet will use it on their website, claiming it as their own and charging women 500 Rupees to have a robot paint their photos.

two separate files
if adding whatever made the image closer to your target, keep the changes and store the copy, else overwrite it and try again

It's utter crap, so I'd rather not. It's not that hard to write your own if you want to.
I did it using python and pygame, only copying and comparing the areas that a circle takes up. It didn't use a list of the colours in the picture, instead picking completely random colours when needed. The minimum radius was 5 and the max was 15. That is all you need to know if you want to replicate it.

No. It's more like discarding the one that looks less like the source image and making a copy of the one that looks more like it, so as to start the process again with an image that resembles the original a bit more. This is the key to the whole thing. Without it the program will never produce an image similar to the source.
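That keep-or-discard loop can be sketched like this in Python. Single random pixels stand in for shapes to keep it short, a copy of the best image plays the role of image1, and the names are my own:

```python
import random

def total_dist(a, b):
    """Summed squared color distance between two same-size pixel grids."""
    return sum((pa - pb) ** 2
               for ra, rb in zip(a, b)
               for ca, cb in zip(ra, rb)
               for pa, pb in zip(ca, cb))

def hill_climb(source, iters=2000, rng=random):
    h, w = len(source), len(source[0])
    best = [[(0, 0, 0)] * w for _ in range(h)]   # image2: best so far
    best_score = total_dist(source, best)
    for _ in range(iters):
        trial = [row[:] for row in best]          # image1: copy of best
        x, y = rng.randrange(w), rng.randrange(h)
        trial[y][x] = tuple(rng.randrange(256) for _ in range(3))
        score = total_dist(source, trial)
        if score < best_score:                    # closer? keep the change
            best, best_score = trial, score
    return best, best_score

source = [[(200, 100, 50)] * 4 for _ in range(4)]
approx, score = hill_climb(source, iters=2000, rng=random.Random(1))
```

Since a rejected trial is simply thrown away, the score can only ever decrease, which is why the result keeps converging toward the source.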

I get it now. Will get to it soon enough.

Around 2M iters.
I think the intricate background is fucking it up on this one.

Is this C?

...

This

I'm so glad I stumbled upon this thread yesterday.
I'm a beginning python programmer and had never even tried things like PIL. My first try (after around 5 hours of fighting with it) did around 7 iterations per second, but now I've managed to get it up to around 30k.
I'm so proud, thanks Sup Forums

And after 18 million

I really wanna write this in C for the performance, but I only ever learned the basics, and afaik C is pretty barebones. What do you all use for reading in the image, drawing the line, and writing it out to a file again?

Also what do you use as image format? .bmp?

How y'all making those gifs/webms? Piping the output through ffmpeg?

convert your images to ppm before passing them to C and convert them back to e.g. png afterwards

read up on ppm, it will be one of the easiest image formats you'll ever write a loader for.
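For reference, a binary (P6) PPM is just a short ASCII header followed by raw RGB bytes. A minimal writer and reader sketched in Python (no comment-line handling, which real PPM files may contain; in C the same parsing is a few fscanf/fread calls):

```python
import os
import tempfile

def write_ppm(path, grid):
    """Write a pixel grid as binary P6 PPM: header, then raw RGB bytes."""
    h, w = len(grid), len(grid[0])
    with open(path, "wb") as f:
        f.write(b"P6\n%d %d\n255\n" % (w, h))
        f.write(bytes(c for row in grid for px in row for c in px))

def read_ppm(path):
    """Minimal P6 reader for the simple no-comment case."""
    with open(path, "rb") as f:
        magic, dims, maxval = f.readline(), f.readline(), f.readline()
        assert magic.strip() == b"P6" and maxval.strip() == b"255"
        w, h = map(int, dims.split())
        raw = f.read(w * h * 3)
    return [[tuple(raw[3 * (y * w + x) + i] for i in range(3))
             for x in range(w)] for y in range(h)]

# round-trip demo on a tiny 2x2 image
grid = [[(255, 0, 0), (0, 255, 0)], [(0, 0, 255), (255, 255, 255)]]
path = os.path.join(tempfile.gettempdir(), "ppm_roundtrip_demo.ppm")
write_ppm(path, grid)
back = read_ppm(path)
```

ImageMagick's `convert in.png out.ppm` (and back) handles the png side, so the C program only ever has to deal with this trivial format.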

Why not modify it to determine the optimal line length depending on image size?

You could probably sell this as a fag-book feature, so that hipster man-children can think they're an artist.

Why only two images at a time?

What's an acceptable time for 1M iterations?

Mine does 1 million iterations of 5px dots on a 743 x 1075 image in 2 seconds, but it's written in C and I'm compiling with -Ofast. It does 10 million iterations on a 7680 x 4320 image in 31 seconds. I'd attach it but it's 23mb, it's here 0x0.st/Cit.png

I use stb_image.h, a pretty well known library. github.com/nilesr/dotter

One to hold the desirable changes, the other to hold the first one plus some change

*telporates insied you*

golang, but close enough

en.wikibooks.org/wiki/FFMPEG_An_Intermediate_Guide/image_sequence

my.mixtape.moe/ewpysw.gif

github.com/fogleman/primitive

Now fuck off.

have the program output a pic every now and then with sequential filenames such as 0000.jpg, 0001.jpg into a directory. Then run
[Code]
ffmpeg -i %04d.jpg -c:v vp8 -b:v 20M -fs 2900K output.webm
[/Code]

10 million

What sort of tricks do you use for speeding up the program?

I'm completely new to programming and I've been watching these threads since yesterday.
I'm completely mesmerized.
What would I have to learn or look up to learn how to do this myself? What language would you recommend?

I really wanna try to do this.

between this thread and the previous one there are examples in Python, Go and C++, so you can copy those, study what they do, and modify them. Python is probably the easiest option, but the resulting program will also be the slowest.

Thanks for trying the x's! Didn't look as good as I'd hoped. Ahh well.

Instead of using Draw.line from PIL and then checking which pixels were changed, calculating whether the change was good and acting on that, I'm using a custom function to calculate the pixels the line would cover first, then checking if it's a good choice to draw there, and then, if it is, pushing the pixel data to a buffer (not drawing anything on the canvas, just pushing raw pixel data as dictionary entries keyed by coordinates).
The image is only actually drawn when saved, which saves a lot of computation; I then draw each pixel individually according to the collected data.
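The deferred-buffer idea sketched in Python; `flush` is my own name for the save-time step, and dict insertion means later lines overwrite earlier pixels for free:

```python
def flush(buffer, w, h, background=(0, 0, 0)):
    """Materialize the {(x, y): color} buffer into a full pixel grid,
    done once at save time instead of on every accepted line."""
    grid = [[background] * w for _ in range(h)]
    for (x, y), color in buffer.items():
        grid[y][x] = color
    return grid

buffer = {}
buffer[(1, 0)] = (255, 0, 0)   # an accepted pixel
buffer[(1, 0)] = (0, 255, 0)   # a later line overwrites it for free
buffer[(0, 1)] = (0, 0, 255)
img = flush(buffer, 2, 2)
```

With a real PIL Image you'd do the same thing via `putpixel` (or `putdata`) in a single pass at save time.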

I'm new to programming too.

I didn't make a true program to do this, though. I wrote it in python with just PIL, but that's retarded and I had to cheat to get it to work (not completely random brute force, but placing shapes relative to existing image data). A much smarter idea would be to use a graphics library in any language (C, python, C++, etc.) so you can make use of parallelism and graphics processing; then you can reach the millions of iterations that people in this thread are doing.

I wish we learned about pyglet for graphics processing in my intro programming class instead of fucking turtle graphics.

As was mentioned, instead of always comparing the whole image, compare only the zone you changed against the original.

This way you limit the number of computations and only calculate the differences that matter.

I can cut about half of the comparison time with this, but beyond that it affects later accuracy

neat, 250 million iterations on this

i have the maximum length set to
linelength = min(img.rows, img.cols) / 12;