/wpc/ - Weekly Programming Challenge

Challenge 1: (Draw a picture using random shapes)
Challenge 2: (Conway's Game of Life)
Challenge 3:

Given a source image and a number, create and output a palette with that many colors in it. The requirement is that if you were to redraw the picture using only that palette, the result should still look good. I don't think I can pose the problem more strictly than that.

As an example here is the palette created by my program, with 5 colors.

A more difficult version of the problem would be to have the program automatically detect the needed number of colors.


>all that soyboy food

And people wonder why males nowadays are effeminate?
I mean, that picture could be the diet of some MtF transitioning guy and I'd believe it with no doubts.

That is a very interesting observation, user.

Here's another picture. Here my program fails to add blue to the palette, so I still have my work cut out for me.

Is that you? I'm interested in getting a girlfriend and you might be high on your luck today. Post email.

You might be in luck also, as I decided to post another picture for you. It's obvious that this one needs more than 5 colors, so it seems the program chose to skip red and its shades.

Don't be shy, even if you are a girl(male) I don't mind that much as long as you can pull a convincing look.

So I changed up the algorithm a bit. Before, the number of pixels of a particular color mattered a lot for that color's representation in the palette; now it matters less. I changed the weighting from linear to a fourth-degree root.
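Roughly what that weighting change could look like (a sketch only; `palette_weights` and the exact exponent are my own invention, the post only says "fourth degree root"):

```python
from collections import Counter

def palette_weights(pixels, power=0.25):
    """Weight each distinct color by count**power instead of raw count,
    so dominant colors don't completely drown out rare but important ones."""
    counts = Counter(pixels)
    return {color: n ** power for color, n in counts.items()}

# red outnumbers blue 16:1, but only gets 2x the weight (16**0.25 == 2)
weights = palette_weights([(255, 0, 0)] * 16 + [(0, 0, 255)])
```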

Picture is also me, by the way.

God, I hope people who do post in wpc do wake up before the thread is ded.

Great. Forgot my picture yet again.

So, you're interested in these computer thingies I see. I can let you handle my USB dongle, if you know what I mean.

I'd love for you to contribute some pull requests to my repository, if you catch my drift.

ill make an attempt eventually

first two wpc were more explicit in terms of implementation. might turn people away if they have nfi where to begin

Just after I'm done injecting my vulnerability into your database.

I'm doing this with a very straightforward k-means. I didn't really want to post step-by-step instructions because I thought people could come up with original ways to do this. Shall I?

not really sure what the best move is. maybe just see how this one plays out and if it dies change up the approach

is there a better solution than kde?

k-means clustering*

I have no idea what kde is in this context.

Ah. No idea. I dislike k-means because you have to specify the number of clusters. The best solution would find the number itself, the way a human can by looking at the picture. That's the challenge.

Heh. This is the picture drawn using extracted colors. This needs work.

Bump

I've thought about the problem for a while and tried a few mockup solutions, but there doesn't seem to be an obvious approach that really reflects how we see images.

Binning the hues seems like it would be the most appropriate (a euclidean norm on RGB space doesn't really reflect how we see palettes), though exactly how to determine the binning intervals seems nontrivial. Local maxima solve some of the issues, but certainly not all, and there are always edge cases that ruin everything. Another issue is when you have two similar objects of similar but different colors. If it's one object, you would like to group as many pixels as possible into the same color; if it's two spatially separated objects, you would like two colors instead of one. Considering only the color space makes this impossible.

You could try adding x and y coordinates of the pixel to clustering. So instead of 3 dimensional k-means you'd have 5-dimensional.
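Building that 5-dimensional feature matrix is simple enough; something like this sketch (`coord_scale` is my own knob for weighting the spatial part against the color part):

```python
import numpy as np

def features_5d(img, coord_scale=1.0):
    """Stack RGB with pixel coordinates into an (N, 5) feature matrix
    for k-means; coord_scale trades off spatial vs. color locality."""
    h, w, _ = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    rgb = img.reshape(-1, 3).astype(np.float32)
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(np.float32)
    return np.hstack([rgb, coords * coord_scale])
```

Feeding this to k-means instead of the plain `(N, 3)` color matrix is the only change needed.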

Only because I recently installed imagemagick to do this for muh colorscheme ricing automation thing.

convert ~/Desktop/current_bg_img -colors 16 -depth 8 -unique-colors -scale 1000% - | feh -

>tfw some breakthrough in computer vision happens in one of these threads

we're making the botnet stronger

>Another issue is if you have two similar objects of similar but different colors. Whereas if it's one object you would like to group in as many pixels as possible into the same color, if it's two different objects spatially separated we would like two colors instead of one.
Care to post a picture like that?

now this is the type of content I like to see on Sup Forums

Posting my first results of the one from a week or 2 ago, I only just got around to it.
I might try this new one before trying to improve on this, I wasn't really going to bother with the game of life one unless I was bored.

>original
>kmeans on just RGB
>5-dimensional kmeans with RGB + pixel coordinates
Not sure if I like the last result. Operating on just color definitely gives a nicer palette. Naively adding the coordinates doesn't seem to work very well.

Is your original lena sepia?

Made a script that creates a montage like this, showing 3, 4, 5 and 6 color palettes.

Some more. This picture's beautiful.

import numpy as np
import cv2

img = cv2.imread('papika.jpg')
Z = img.reshape((-1, 3))

# convert to np.float32
Z = np.float32(Z)

# define criteria, number of clusters (K) and apply kmeans()
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
K = 5
ret, label, center = cv2.kmeans(Z, K, None, criteria, 10,
                                cv2.KMEANS_RANDOM_CENTERS)

center = np.uint8(center)

# redraw the image with the palette you made:
# res = center[label.flatten()]
# res2 = res.reshape((img.shape))

# add a white strip below the image to draw the palette swatches in
display_img = cv2.copyMakeBorder(
    img, 0, 100, 0, 0,
    borderType=cv2.BORDER_CONSTANT,
    value=(255, 255, 255))

for i, color in enumerate(center.tolist()):
    # note: cv2.imread returns BGR and cv2.rectangle expects BGR,
    # so the center values can be passed through as-is
    OFFSET_X = 20
    OFFSET_Y = 50
    start_x = i * OFFSET_X
    start_y = display_img.shape[0]
    cv2.rectangle(
        display_img,
        (start_x, start_y),
        (start_x + OFFSET_X - 1, start_y - OFFSET_Y),
        tuple(color),
        thickness=-1)

cv2.imshow('display_img', display_img)
cv2.waitKey(0)
cv2.destroyAllWindows()


this is a pretty shameless rip from the opencv docs. i'm not really interested in reimplementing k-means by hand ever again after doing it in college. this wpc is a good intro to using opencv and machine learning

since a lot of you like to do this kind of stuff all from scratch in C, you might want to start at

opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_ml/py_kmeans/py_kmeans_understanding/py_kmeans_understanding.html#kmeans-clustering-understanding

which explains the implementation pretty well

...

Hm. My results differ. I do agree, though, that adding x and y does not seem practical.

No coordinates k-means.

Coordinates included k-means.

ehhhh, it is pretty much trivial to implement. Takes a page and a half of code, and i'm doing it in raw C++.

I wonder if you're scaling your color values to 0-1, but keeping coordinates as is for k-means?

Of course.
Unfortunately, the source picture is less saturated than 'the' lena pic, and for whatever reason the way I'm doing it seems to reduce the saturation compared to the originals, so it's compounded. There's still a ton of issues with my code in general though.
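For reference, one way to put colors and coordinates on the same footing before clustering (a hypothetical helper, not anyone's actual code from the thread):

```python
import numpy as np

def normalize_features(rgb, coords):
    """Scale both colors and pixel coordinates into [0, 1] before k-means,
    so that neither term dominates the distance metric."""
    rgb = rgb.astype(np.float32) / 255.0
    coords = coords.astype(np.float32) / coords.max(axis=0)
    return np.hstack([rgb, coords])
```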

Hmm, no. I must have made a mistake somewhere. I can't be bothered to fix it though since it doesn't seem to solve any issues. I wonder if k-means on hsv instead of rgb would produce a difference. Have you tried hsv?

>apply a decent edge detection algorithm to the image
>scatter pairs of sample points along the edges
>constrain palette color choices so that each pair still retains two different colors after the image re-colored, thus preserving the important features of the image
Too bad I'm far too lazy to code it.
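For the lazy-but-curious, the pair-sampling step could start out something like this rough sketch (using a crude horizontal-gradient edge detector on a grayscale image; the actual palette constraint is left as an exercise, and all names here are made up):

```python
import numpy as np

def edge_sample_pairs(gray, threshold=32, step=16):
    """Find strong horizontal gradients and return (left, right) pixel-value
    pairs straddling each edge. A palette builder would then require that
    quantization keep each pair mapped to two distinct palette entries."""
    diff = np.abs(np.diff(gray.astype(np.int32), axis=1))
    ys, xs = np.nonzero(diff > threshold)
    return [(gray[y, x], gray[y, x + 1]) for y, x in zip(ys[::step], xs[::step])]
```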

What kind of metric do you use for the image differences? It seems like that could make a big difference with regards to convergence speed both in space and color.

Enjoy your heart disease and early death, fatboy.

...

This is a color quantization problem, for those of you who were wondering.

Soybean doesn't alter your hormones m8. It's a common myth.

>that reference photo
So you read that imagej book too

Don't bother, hsv is bad for this: as v approaches 0, h and s lose meaning, and as s approaches 0, h loses meaning; using those kinds of values for k-means would produce horrible results.

LAB is the correct approach for this. Previously I did some tests with LAB, RGB and HSL, though, and LAB did not produce noticeably better results.

I use simple manhattan distance, that is abs(r1-r2)+abs(g1-g2)+abs(b1-b2).
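In code, with one gotcha: cast to int first if your pixels are numpy uint8, otherwise the subtraction wraps around instead of going negative.

```python
def manhattan(c1, c2):
    """L1 distance between two RGB triples. The int() cast matters:
    with numpy uint8 values, 10 - 200 would wrap around to 66."""
    return sum(abs(int(a) - int(b)) for a, b in zip(c1, c2))
```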

N-no... It's just a popular choice for image work.

Sum of euclidean distance over the line.
I honestly haven't looked into the issue much. I'm not sure if the sum is even the proper way of doing it; it was just the first thing that came to mind. I haven't messed around with it yet, just got it 'working'.

...

Enjoy your anxiety, mental disorders and behaving like a woman in his period in general, soyboy.

>if u dont eat like faggot, u must b fat xd

ncbi.nlm.nih.gov/pmc/articles/PMC4270274/

"Phytoestrogens are present in certain edible plants being most abundant in soy; they are structurally and functionally analogous to the estrogens. Phytoestrogens have been applied for compensation of hormone deficiency in the menopause."

No, thanks.

>if you don't eat like a terminal fatass, you must be gay xd

Using k-means on RGB. Thanks to for the info, and thanks again for the fun, OP.

It's not about the quantity of food, you retarded, estrogen-oozing cockmongler.

It's about WHAT food you eat.

Did challenge #1. Time spent: ~2 hours.
Might still do some improvements.
Result:
>4 billion iterations in 3 minutes
Completely random colors, which I might fix to use colors from the image.
Multithreaded as fuck.
Currently only does one pixel at a time. Next step is to add some shapes and shit such as lines, squares, crosses, triangles, polygons etc.

>anxiety triggered by potential soy
>clear mental disorder (paranoid)
>irrationally lashing out about things he doesn't understand (like a woman)

really alters the hormones

Single pixels kind of just gives you a noise filter that takes 3 minutes to render.
Awaiting your line/polygon implementation though.

Paranoid? Scientific evidence is with me.

>u jus dun get it xD soy is beri gud xD
It's time for you to go back

You've only thought about it. Actually implement something rather than intellectually masturbating over the idea and coming up impotent.

Precisely. That greasy, sugar coated bbq you shove down your throat to feel like a man is literally killing you.

All that fried chicken is clogging your arteries and making you impotent before you hit 40.

But by all means, keep eating shit food and kys.

>KFC, or soy products
God. if ONLY there was some other option but alas, there is not.

What image library should I use for c ?

Didn't you hear him? He thought about it, and an edge case sounded like it was hard so what's even the point of implementing it if it doesn't work 100% of the time right away?

Are you literally retarded? I don't eat any of that crap you just pulled out of your ass.

Are you implying that if you don't eat like a literal anorexic HRT transitioning faggot it means you eat all that other shit? False dilemma you got there.

I do keto and paleo, you cockmongler, no sugary shit. I'm fit as fuck and 5% body fat.

Come at me sissy soyboy.

Yes, that point is obvious; it's a simple edge case.
>Right away
This is a programming challenge thread. Program instead of idling around.

There is a simple solution to this, but I'm writing up my own. Again, mental masturbation just stalls any work from getting done.

So in our great tradition, you both can kys.

I was being sarcastic. I thought it was pretty obvious that I was trying to make his logic sound retarded (which it is).

I opted to use imagemagick's convert to turn images into BMP and then manually read/write that, as it's easy.

Here's the code if you want to take a look. C++, though, but since it's so short it should be very easy to adapt for C. github.com/AUTOMATIC1111/randdraw

>I thought it was pretty obvious
Fooled me also.

import random
import sys

# distance() and average() weren't shown in the post; minimal versions:
def distance(c1, c2):
    return sum(abs(a - b) for a, b in zip(c1, c2))

def average(colors):
    n = len(colors)
    return tuple(sum(c[i] for c in colors) / n for i in range(3))

def k_means(img, k):
    img = img.tolist()

    # pick k random pixel positions and use their colors as initial centers
    centers = [(random.randint(0, len(img) - 1), random.randint(0, len(img[0]) - 1))
               for i in range(0, k)]
    centers = [tuple(img[c[0]][c[1]]) for c in centers]

    for i in range(0, 30):
        print("iteration ", i)
        clusters = dict()
        for c in centers:
            clusters[c] = []

        # assign every pixel to its nearest center
        for y in range(0, len(img)):
            for x in range(0, len(img[0])):
                shortest_dist = sys.maxsize
                closest_center = None
                for c in centers:
                    dist = distance(c, img[y][x])
                    if dist < shortest_dist:
                        closest_center = c
                        shortest_dist = dist

                clusters[closest_center].append(img[y][x])

        # move each center to the mean of its cluster
        for ci in range(0, len(centers)):
            if clusters[centers[ci]]:  # guard: a cluster can end up empty
                centers[ci] = average(clusters[centers[ci]])

    return centers


takes around 30 iterations to consistently get good colors. opencv only needs 10, and it's way faster too

(reddit) C:\Users\2c\Desktop\wpc3>python color_pallete.py
4.575377408
^C
(reddit) C:\Users\2c\Desktop\wpc3>python color_pallete.py
0.26836608

Thank-you!

Where are you getting these challenges from OP?

Also, what if you combined challenges 1 and 3? Have the sketch program use the palette output by this challenge to sketch the image. That way YOU can decide whether it "looks good".

>her breasts are augmented

Did a simple median cut with Floyd-Steinberg dithering.
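For anyone wanting to try the same, here is a minimal grayscale Floyd-Steinberg pass (a sketch only, not the poster's code; median cut would supply the palette levels). Each pixel snaps to the nearest palette level and the quantization error diffuses to the neighbors with the classic 7/16, 3/16, 5/16, 1/16 weights:

```python
import numpy as np

def floyd_steinberg(gray, palette):
    """Dither a grayscale image onto the given palette levels by
    error diffusion (right, below-left, below, below-right)."""
    img = np.asarray(gray, dtype=np.float64).copy()
    palette = np.asarray(palette, dtype=np.float64)
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = palette[np.argmin(np.abs(palette - old))]
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out
```

Per-pixel Python loops are slow; this is just to show the error-diffusion order, which is inherently sequential.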

Well, it's a well-known problem, as anons posted before. I just stumbled upon it while thinking about what would be good for the next challenge. I'm not the same user who posted the first and second ones.

>Also what if you combined challenge 1 and 3
Not really the best outcome. Challenge one profits a lot from dumping a lot of colors into the picture. I did add an option to limit the number of color to my program from challenge 1. Here:

>it's a well known problem

In what field is this well-known? I've been programming for years and never encountered anything this hard before.

Optimized colors and ran on a 2500k @ 4GHz.
5 billion iterations in 23 min.

Dude. You're still just really inefficiently copying pixels. Get working on the shapes/lines part.

btw the point is to generate something interesting, not something that looks exactly the same as the source, so use shapes, and play around with that

hello
sounds like a fun challenge
ill do it

Nice. Dithering makes a big difference.

This is without dithering.

Efficiency is everything

efficiency in doing what?

I wonder who is behind this post

google.com/search?rlz=1C1GGRV_enUS751US751&q=color quantization papers&oq=color quantization papers

i put mine together but forgot opencv defaults to bgr (?)

In anything. I made the program to be easily changed to handle different shapes with only minor changes. While others made inefficient spaghetti, I designed this to be as fast as possible and to scale with cores. Might be able to do a full-length movie in reasonable quality and time.

proper version

it's more efficient to just cp image1.jpg image2.jpg if you're just generating an exact copy of the source image.

You're dumb and made me smile

There are multiple types of endogenous estrogens: estradiol, estriol and estrone. The estrogen receptors have different affinities for each of those hormones. Do you actually think a biological system with this specificity can be "fooled" by an exogenous estrogen "look-alike"? Please.

Even if that weren't the case: to have noticeable effects on health, you'd have to eat like 4 kilograms of soy per day. Nobody does that.

Nobody prescribes hormones for post-menopausal women anymore either. There's well-established evidence that shows it does more harm than good.

nah, you're dumb. your autism made you miss the point of the challenge completely, and you just made a retarded program that fires up all your cores to simply copy an image.

Holy fuck you are stupid

The point, though, I feel, is to capture colors from the picture into a palette that will look like it fits the picture. Colors optimal for dithering (because they combine well into desired colors) are not always optimal colors for a palette (which should have colors that look like they belong in the picture without combining them).

what are you even arguing about you idiot?

What are you? I'm doing exactly what the challenge implies except really well. You are just too dumb to understand

Where are your shapes man.

>Do you actually think a biological system with this specificity can be "fooled" by an exogenous estrogen "look-alike"? Please.

"Xenoestrogens are a type of xenohormone that imitates estrogen. They can be either synthetic or natural chemical compounds. Synthetic xenoestrogens are widely used industrial compounds, such as PCBs, BPA, and phthalates, which have estrogenic effects on a living organism even though they differ chemically from the estrogenic substances produced internally by the endocrine system of any organism."

>with this specificity can be "fooled"

"Xenoestrogens are clinically significant because they can mimic the effects of endogenous estrogen and thus have been implicated in precocious puberty and other disorders of the reproductive system.[1][2]"

>Even if that weren't the case: to have noticeable effects on health, you'd have to eat like 4 kilograms of soy per day.
>source, my nose

>Nobody prescribes hormones for post-menopausal women anymore either. There's well-established evidence that shows it does more harm than good.
Agreed

shlomo.jpg

>except really well

and I'm telling you that doing 100 billion iterations in 2 seconds is great, BUT if the fucking result is a carbon copy of the source image, it's a waste of time.

The *point* was to generate images that look cool/interesting. That doesn't mean it has to be slow; it just means you should maybe do 10 million iterations instead of 500 million, before the generated image becomes an exact copy of the source.

Not really. You need to be able to balance efficiency and maintainability with deadlines and time budgets. If you can do both, good on you and your bigass salary. Most people can't though.

Please learn to read and stop embarrassing yourself

I do keto as well, but you're trashing decent food in a way that makes you sound like Hardee's customer of the month.

Also, how tf are you doing keto and paleo at the same time? Wouldn't a lot of paleo food break ketosis?

Keto has a carb limit/range in which you are safe, and I don't keto permanently.

>decent food

If you think fucking soy sauce and those weird looking brown pieces of bread are decent food, you should probably kill yourself after reading this.

>it's a "lets quote wikipedia" episode!

ncbi.nlm.nih.gov/pubmed/11305594

"The concentration required for maximal gene expression is much higher than expected from the binding affinities of the compounds, and the maximal activity induced by these compounds is about half the activity of 17 beta-estradiol."

So the concentration required to provoke maximum gene expression in target cells is very high, even higher than suggested by the actual receptor affinity for the compound. Not only that, this maximum is actually only half of what actual estradiol achieves.

>>source, my nose

Do the math. Go find out how much phytoestrogen is contained in soy (or whatever) and calculate the therapeutic dose of soy. Be sure to keep the above article in mind.

Unless you're literally eating nothing but this crap, and I mean ridiculous amounts every single day, you have nothing to worry about. This fearmongering is nothing but broscience /fit/ tells itself to feel superior to other people.

>muh enterprise grade coding challenge

If it's so easy to change your code to use shapes instead why haven't you done it yet?