Averaging pictures

The image is an average of 500 pictures taken freehand with a cell phone.
The corrected average is produced by aligning the images and then averaging them. The resulting picture looks like a long exposure.

The graph shows the movements of my hand in pixels, along with the rotation angles (the angles could be calculated more precisely, but that is really resource-intensive since it requires an iterative solution)

The image registration is done with a cross-correlation algorithm. The process is computationally expensive, but I'm trying to improve it.
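For anyone curious how registration by cross-correlation works in principle, here is a minimal numpy sketch (my own illustration, not the repo's code): the translation between two frames shows up as the peak of their cross-correlation, which is cheap to compute in the Fourier domain.

```python
import numpy as np

def estimate_shift(ref, img):
    """Return the (dy, dx) integer shift that aligns img to ref."""
    # Cross-correlation via FFT: corr = IFFT(FFT(ref) * conj(FFT(img)))
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img)))
    peak = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    # Peaks past the midpoint correspond to negative shifts (FFT wrap-around)
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

ref = np.zeros((64, 64)); ref[20:30, 20:30] = 1.0   # a bright square
img = np.roll(ref, (3, -5), axis=(0, 1))            # same square, shifted
print(estimate_shift(ref, img))                     # recovers (-3, 5)
```

Applying the returned shift to `img` (e.g. with `np.roll`) brings it back on top of `ref`; OP's pipeline does this for every frame before averaging.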

For anyone interested in the code:
github.com/Pella86/DenoiseAverage

Here is a video of the images taken:
i.imgur.com/GLlpoXK.mp4

What do you think are possible applications of this technique?
Do you think light painting will work?

I've already tried noise reduction on underexposed pictures, and noise/quality improvement of digitally zoomed images

Long exposure of a busy road

You create a thread on because they'll think it's ghosts

One of the original pictures:

This is really cool. Bookmarked the repo.

holy fuck OP do you have parkinson's

Can you use this to get a high quality pic from a lot of pictures taken with a shitty camera?

They'll love this then.

i.imgur.com/FqOAluY.mp4

I'm not Lucky Luke yes...

The base quality of the camera can't be overcome, but it can take away the noise of a very dark image.

Left single picture / right many averaged

Zoom enhancement corrected.

Really cool.

I wish more people would post about interesting projects rather than shitpost about how intel is finished and dead or whatever the fuck.

fuckn spoopy

Single image

Thanks man! But an Intel post gets 300 replies in 5 minutes!

So pretty much just stacking + some new image corrections baked in. The corrections need optimizing because the final image's resolution is too low. Photoshop/Microsoft ICE stacking/stitching modes are way higher quality. Still a neat project I guess.

Pretty cool, OP. I guess it could be used alongside other image stabilization techniques - if it isn't already.

Actually, can you upload the set of images so I can try with Microsoft ICE? ICE does stacking and stitching with corrections already. Or just do it yourself.

This seems like a very expensive process, but the results are pretty sharp compared to the source.
But why not just model the noise and remove it like everyone else does?
You could probably get good results with fewer images, and the ghosting would be less obvious.

Really well done OP! Over what timespan do you take those 500 pictures?

It reminds me a bit of manual HDR, but with movement correction. Would be very interested to see how something like 500 pictures taken over the span of a whole day look.
I'm going to read your code today, never got into image analysis etc. Thank you!

>Photoshop/Microsoft ICE stacking/stitching
do they do the same thing as OP? I haven't seen it in photoshop, and don't know about MS. can you post specific info?

> Photoshop/Microsoft ICE stacking/stitching modes
Yes, I'm aware there are better/faster image registration processes. But I wanted to implement the cross-correlation myself, and desu I'm just happy it works. I have no idea what method ICE or Photoshop use.

> it could be used alongside other image stabilization techniques
It could, but I prefer to stick with cross-correlation.

>I can try with Microsoft ICE? ICE does stacking and stitching with corrections already. Or just do it yourself.

I can upload the datasets, but they are about 100-500 MB; where can I upload them?

>But why not just model the noise and remove it like everyone else does?
Because it's not only for de-noising but also for long-exposure simulation.

>Over what timespan do you take those 500 pictures?
It depends on the dataset; I use GonnyCam, an awesome camera application for burst photography. It lets you take pictures continuously at different time intervals, ranging from 0 ms to 500 ms.

Cool, OP.

desu*

It's been a while since I've been on Sup Forums LOL

>What you think are possible applications of this technique?
None.
>Do you think light painting will work?
No.

Modern smartphones routinely do this via pixel binning, though they don't need to compare completed saved JPEGs; it's handled in the image DSP before the image is fully processed. Conceptually it's a fantastic technique, as you can remove the noise pattern and other aberrant characteristics of the sensor itself. Potentially you have the ability to punch up a class in terms of optics. The sensor being constrained by limited photons per pixel stops being a practical issue when you have multiple exposures of the same frame.

tony?

That looks very nice. How many pics did you use to get that result?

+Looks like it took about 20 minutes :^)

nya.is/
doko.moe/
MEGA

The technique can let you produce comparatively very high resolution images from a very small/lackluster sensor, and it greatly enhances the low-light capability of any camera you're using. It has enormous potential benefits if harnessed in the right way.

A low-budget video surveillance camera could begin snapping frames at a higher frame rate when motion is detected, so it could produce a very high-res image of a person who moves into the field of vision.
The only issue OP runs into is that this is resource-intensive, so it's best handled by a dedicated DSP.

>Modern smartphones routinely do this, via pixel binning
They do, but you don't have any alignment of the bins.
Btw I think you're confusing sensor binning, which is done at the CCD level when reading out the charge, with image binning, which reduces the size of the image's pixel matrix by averaging.

The first is about how the signal is collected, while the second reduces the resolution of the image.
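The second kind is easy to illustrate (a self-contained toy example of mine, not code from any camera pipeline): software 2x2 binning just averages each 2x2 block of the pixel matrix into one pixel, halving the resolution in each dimension.

```python
import numpy as np

def bin2x2(img):
    """Average each 2x2 block of a (2M, 2N) image into one pixel."""
    h, w = img.shape
    # Split rows/cols into (block index, within-block index), then average
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

img = np.arange(16, dtype=float).reshape(4, 4)
print(bin2x2(img))   # 2x2 result, each pixel the mean of a 2x2 block
```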

bullshit: how come the clock and the lady in the front aren't moving in the corrected average?

why does everything in the left background look so blurry compared to the right background? it doesn't make any sense.

> How many pics did you use to get that result?
500 over 5 minutes, but there's a small trick: I used a stationary phone, so virtually no corrections are involved.

The corrections for now are very crude and the image transformations lack interpolation, making them less precise (a shift, for example, can only be a whole number of pixels; if the image needs to be corrected by 0.5 px it gets rounded to 0 or 1).
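The missing interpolation step can be sketched in 1-D with plain linear interpolation (a toy example of my own, not the repo's transform code): a fractional shift spreads each sample between neighbouring pixels instead of being rounded to 0 or 1.

```python
import numpy as np

def shift_subpixel(row, dx):
    """Shift a 1-D signal by a fractional dx using linear interpolation."""
    x = np.arange(row.size)
    # Sample the input at x - dx; outside the signal, pad with zeros
    return np.interp(x - dx, x, row, left=0.0, right=0.0)

row = np.zeros(9); row[4] = 1.0          # a single spike at index 4
out = shift_subpixel(row, 0.5)           # shift right by half a pixel
print(out)                               # spike split evenly over indices 4 and 5
```

The same idea extends to 2-D (and to higher-order splines) for proper sub-pixel image registration.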

good guess!

>What you think are possible applications of this technique?

You can take cool pics of your own flopping dick lmao

>very high res image of a person who moves into the field of vision.
You are limited by the camera resolution I think, or close to it... since you can't really determine the shifts much more precisely.

The most computationally expensive dick pic ever sent.

Little glitch I don't know how to fix

Without looking at the implementation, it kind of looks like some of the byte values are overflowing. Capping them at 255 is always a good idea. (Could be something else; just chiming in.)

I thought about that; matplotlib's save accepts an RGB picture normalized to 0..1, and mine should be normalized

That is an interesting artifact though.

Take a look at it for a bit longer though. You are getting 0 values where the image should have been close to 255: on the white spots of the rocks and in the waterfall.

>u are getting 0 values where the image should have been close to 255. On the white spots of the rocks and in the waterfall.
I also think it's a strange artifact, and I thought about an overflow... but the grayscale processing doesn't do it...

This could be used for producing high-quality stills from videos.
Sometimes in dark scenes it can be very difficult to see any detail, so I'll record a video instead. With this technology I could extract the best possible still from said video.
Like many technologies, this could improve the results of low-cost equipment, such as for photographers with crude cameras that struggle in low light.
It'll blow up on Instagram.
Could it be combined with hand-held mobile-phone stabilisers such as the FlowMotion ONE?

Can anyone think of ways in which this could improve photography of moving subjects?

OpenCL has built in grid based processing, use it

Damn, this needs to be integrated with smartphone cameras NOW

weird was just thinking about this
would using some Photoshop method to merge layers do the same, since only the picture is the same, not the noise patterns?

This

Damn. Are there any camera apps that utilize this technique?

Cool but I don't think it's that amazing.

Only works for still scenes, which will usually be outdoors, which will usually be in the day, when you have no noise

I work with a micro-CT; frame averaging is the best way, besides increasing your x-ray tube voltage, to decrease the total poissson noise of your scan and thus get a better S/N ratio

Which is very useful when scanning at low kV/low uA settings at very high geometric magnification, where you're encountering at least one Compton scatter or bremsstrahlung photon for every 30 transmitted signal photons (as in scanning low atomic number metals alongside tissue)
As you've demonstrated here, you can get higher digital zoom in photographs; in the CT, frame averaging can give you a smaller effective voxel size via a better signal/noise ratio at the detector. This comes at the cost of longer scan times, more prone to doubling by movement, and a corresponding multiple increase of radiation dose relative to how many frames you average.
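The payoff described above is easy to verify numerically (a toy simulation with made-up numbers, assuming independent Gaussian noise per frame): averaging N frames cuts the noise standard deviation by roughly a factor of sqrt(N).

```python
import numpy as np

rng = np.random.default_rng(0)
signal = np.full((100, 100), 50.0)                    # flat "true" scene
frames = [signal + rng.normal(0, 10, signal.shape)    # 100 noisy exposures
          for _ in range(100)]

single_noise = np.std(frames[0] - signal)                   # about 10
stacked_noise = np.std(np.mean(frames, axis=0) - signal)    # about 10/sqrt(100) = 1
print(single_noise, stacked_noise)
```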

very impressive, does this work with pictures taken more closely together, like frames from a video?

there are tools to do average-based denoising with video, such as Median (avisynth.nl/index.php/Median), which is for averaging multiple copies of a clip (good for analog sources where each playback is a little different), and QTGMC, which is an advanced deinterlacer which works by interpolating missing fields using motion, and can also temporally denoise any type of video

I'm not talking about denoising video, I'm talking about using frames from a noisy video of a still scene to make a denoised image like OP did, instead of taking separate pictures

a sequence of images like op's is no different technically than a video where the camera stays still, just average all the frames (such as with median)

-- well, it's a bit different to op's in that op's also accounts for minor camera shake, like when hand-held, which median doesn't account for

>Ramiel appear
Fuck this shit Im out of here

> high quality stills from videos
Yes! I will try to integrate OpenCV soon enough.

>hand-held mobile-phone stabilisers such as the FlowMotion ONE
I think so, but I won't do it, since I want to stick with cross-correlation; the next step would be to use a Fourier-Mellin representation.

>Sometimes for dark scenes it can be very difficult to see an detail
This picture was generated from darkness!

I remember some documentary on ID about a bombing incident where the only video evidence was short, blurry camcorder footage; then NASA used this averaging technique to enhance the footage using the individual frames and solve the case. Very interesting, thanks for reminding me

> OpenCL has built in grid based processing, use it
You mean for the image interpolations?

>Damn, this needs to be integrated with smartphone cameras NOW
Next step, I'll rewrite the app in C++ to squeeze out performance.

Photoshop has a merge and align feature

> I work with a micro-CT
During my PhD I used this technique to de-noise electron microscopy pictures/volumes
Your image looks cool! What is it? What kind of image registration do you use?

Yes the technique is a trade off, I will try to test it around what can be done!

> does this work with pictures taken more closely together
It should, I will try to run it for videos soon enough.

>such as Median (...) QTGMC
thanks for the suggestion


In the image: shift-only vs rotation-corrected images.

Better to go median than mean to remove ghosting. And it's not something new.

Also you can use ImageJ. It has lots of such stacking modes for you to experiment with. But it leaks memory like hell (Java), so I doubt you can process more than 100 10 MP pictures with 16 gigs of RAM. I used to use it to enhance my telescope images.
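Why median beats mean for ghost removal can be shown on a toy stack (my own illustration, not from any of the tools above): a transient object present in only one frame leaves a faint ghost in the mean but vanishes entirely in the median.

```python
import numpy as np

frames = np.zeros((9, 8, 8))          # 9 aligned frames of a static scene
frames[0, 2:4, 2:4] = 255.0           # a "person" walks through frame 0 only

mean_img = frames.mean(axis=0)            # ghost: 255/9 ≈ 28 at the blob
median_img = np.median(frames, axis=0)    # ghost gone: 0 everywhere
print(mean_img[2, 2], median_img[2, 2])
```

The tradeoff is that the median needs the whole per-pixel stack in memory (or a clever streaming scheme), while the mean can be accumulated one frame at a time.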

YOU FUCKING BURNT IT

what happens if you run it on frames from going through a tunnel? Or general 3D transformation of image capture location?

>Better go median than mean to remove ghosting.
I'm trying to implement the median, but since I need the whole pixel stack it gives me a memory error. If you know any good algorithm to get the median without having the whole column of pixels, I'd be glad to implement it. I'm also trying to implement a mode averaging method.
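One workaround for the memory error (a sketch of mine, not OP's code; the file path and sizes are made up): keep the aligned stack on disk in a np.memmap and take the median over small row slabs, so only one slab of the stack is resident in RAM at a time.

```python
import os
import tempfile
import numpy as np

n_frames, h, w = 50, 120, 160              # small sizes for the sketch
path = os.path.join(tempfile.mkdtemp(), "stack.dat")   # hypothetical path
stack = np.memmap(path, dtype=np.float32, mode="w+", shape=(n_frames, h, w))
for i in range(n_frames):                  # fill with aligned frames (here: noise)
    stack[i] = np.random.default_rng(i).normal(128, 10, (h, w))

result = np.empty((h, w), dtype=np.float32)
slab = 16                                  # rows processed per step
for y in range(0, h, slab):
    # np.asarray pulls just this slab of the stack into memory
    result[y:y + slab] = np.median(np.asarray(stack[:, y:y + slab, :]), axis=0)
print(result.shape)
```

The peak memory use is then `n_frames * slab * w` floats instead of the full stack, at the cost of extra disk I/O.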

>And its not something new.
I never claimed it's new, I just wanted to program it.

>Also you can use ImageJ.
I could use GIMP or Photoshop too, but I wanted to implement something that uses Fourier transforms and interact with it.

>from going through a tunnel?
I'm not sure what you mean

>Or general 3D transformation of image capture location?
I will try to implement all the affine transformations once I can use the Fourier-Mellin representation (shear, scale, rotation, translation)

I would try taking pictures of the ocean / beach at night.

No tree leaves moving. Water and sky should make nice dreamy look. Dark sky and beach should come out well-exposed without much noise.

>Water and sky should make nice dreamy look.
I took this on the Limmat in Zurich.

I love image processing. Wish I was better at math to understand this stuff.

Can you remove the humans? That would be handy for tourists

From what I understand, you are using frames that could potentially come from a video, so what happens if you film going through a tunnel and attempt to align the frames? My guess is any consistent features will be reinforced and features specific to one or a few locations will be erased/faint. I'm curious how it may fail when presented with images that are sometimes similar (consistent structure of the tunnel) but from a moving location/perspective

Or say frames of rotating around an object where the object stays relatively centered and similar distance etc.

that could be done quite easily with a little manual manipulation (just mix and match parts of images that don't have people in the way at that moment)

perhaps even doable automatically, with something like taking the colour which is most often seen, within a small range to account for noise and minor lighting changes
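That most-frequent-value idea is essentially a per-pixel mode across the stack. A naive numpy sketch (my own toy example; a real version would need the "small range" tolerance the post mentions, e.g. by quantizing values first):

```python
import numpy as np

def mode_stack(stack):
    """Per-pixel most frequent value across aligned uint8 frames."""
    def most_common(vals):
        return np.bincount(vals).argmax()   # histogram peak of one pixel column
    return np.apply_along_axis(most_common, 0, stack)

stack = np.full((7, 4, 4), 50, dtype=np.uint8)   # 7 aligned frames, background 50
stack[0:2, 1, 1] = 200                           # a "tourist" in 2 of 7 frames
print(mode_stack(stack)[1, 1])                   # 50: the tourist is removed
```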

Yes you got it right, moving but repetitive structures disturb the cross correlation!

The program does exactly that
Sequence of images > math > resulting picture

See
That was a busy street and the moving things are removed.

I'm trying to implement a median and a mode averaging (for now I use the mean), but they are memory-intensive methods

make sure all your data types are uint8 scaled to 0,255 before it displays
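In numpy terms, the advice above looks like this (a small illustration of mine, not OP's code): clip before casting, because an integer cast to uint8 wraps modulo 256 and produces exactly the kind of dark spots discussed above.

```python
import numpy as np

pixels = np.array([-3, 120, 256, 300])        # out-of-range intermediate values
wrapped = pixels.astype(np.uint8)             # wraps: -3 -> 253, 256 -> 0, 300 -> 44
safe = np.clip(pixels, 0, 255).astype(np.uint8)   # [0, 120, 255, 255]
print(wrapped, safe)
```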

interesting, OP. thanks

One last bump

that's cool

So you're just aligning and stacking images?

I mean, people have been doing this for years in photography. I do it when imaging with a telescope.

cool, but it's nothing new.

What kind of math skills do you need to get into stuff like this?

Thats a neat effect, the water looks like smooth ice.

Holy shit, a good thread? On my Sup Forums?

Am I supposed to get these errors while trying to run the script user kun

>poissson

kek

>I mean, people have been doing this for years in photography. I do it when imaging with a telescope.
Yes, the technique is used in microscopy and astronomy. What image registration method do you use for the alignment? Again, I never claimed I did something new, competitive or efficient.

> cool, but its nothing new.
That doesn't even align the stack, and I wonder how the median is implemented so I can copy it into my program.

> What kind of math skills do you need to get into stuff like this?
A first-year university "math for biologists" course; the rest I self-learned.

> Thats a neat effect, the water looks like smooth ice.
True!

> Am I supposed to get these errors while trying to run the script user kun
You are probably running Python 2.7; my script requires 3.6. In case you are running py3 and you are on Linux, substitute perf_counter() with clock()
Thanks a lot for trying.

Can you create a 3D environment?

What's the lowest amount of pictures that this will work with?

I can see this being used for the film industry.

Sell it as an Adobe plug-in for like $10,000

>Can you create a 3D environment?
Like a 3D object stack? Well, it's possible to create, but it would be computationally expensive, and sincerely, making the stack 'transparent' so you can see it in 3D is... in my opinion not so useful. A 4D representation would be better:
x, y, color, stack_number.

>What's the lowest amount of pictures that this will work with?
I'm still testing, for now I take a range of 50 to 500 pictures.

>I can see this being used for the film industry.
I think it's already used extensively

>Sell it as a Adobe plug in for like $10,000
Lel, I think Photoshop already has this function.

original

if you used a 360 camera and took an array of images, could you create a 3-dimensional image you can move around?

Is that the zurichhb?

What's "CC alignment"?

>if you used a 360 camera
If you mean a camera that produces this kind of pictures? Then yes (the only disadvantage is that I can work only on square pictures for now... because, well, technicalities and my very bad math skills)

>Is that the zurichhb?
Precisely

>What's "CC alignment"?
Cross-correlation based alignment, the image registration technique I use.

10/10
would use it as a wallpaper if the dimensions were right

You may want to use a Kalman filter instead.

> You may want to use a Kalman filter instead
I think my math knowledge starts and stops with the good old Fourier... I don't think I'd be able to implement a robust statistical analysis.

I've installed python 3.6 and the required libraries. But I guess it won't work without the datasets. Are you going to share them (´;ω;`)

Oh, the procedure should automatically create a dataset


line 392 in main_avg.py

if __name__ == "__main__":
    print("START AVERAGING SCRIPT")

    folder = "../../silentcam/rgbtestdataset/"