Apple iPhone camera face mask thing

How was this done in regards to programming?

youtu.be/WYYvHb03Eog?t=146

The face mask thing. Like how the fuck did they do that?
>inb4 applefag
I'm too poor to afford an iPhone and I probably wouldn't buy one anyway

What programming language was that made in? And how do you even do it? Are there any known algorithms or concepts for that?


I'm too distracted by that guy's thumbnail. Fingernail clippers exist for a reason, you monster.

The brilliant engineering work you've come to expect from Apple.

Yes but how? I want to know how it was done

>Are there any known algorithms or concepts for that?
If they pulled it off, then obviously there are.

it's handy for coke fiends

Computer vision these days is pretty good and fairly simple to implement.

Not worth the money though.

Image recognition tech has been out there for years. Apple threw research papers at their engineers and some of them managed to shit out a program fast enough to run on a phone.

it's basically a miniaturized Kinect crammed into a smartphone
courses.engr.illinois.edu/cs498dh/fa2011/lectures/Lecture 25 - How the Kinect Works - CP Fall 2011.pdf
microsoft.com/en-us/research/publication/combining-multiple-depth-cameras-and-projectors-for-interactions-on-above-and-between-surfaces/

Are you blind, retard? You saw the wireframe model for a moment. It makes a rough model of your head, then applies decorations and textures to that model, then renders the textured model back onto your photo. They haven't invented anything, but I guess they were the first to put it in stock camera software.

Snapchat has had these for over a year.

That's Snapchat. It's on literally any relevant phone.

Gay man

Well, I don't use Snapchat.

There is a simple idea behind it.
Computer vision uses projected shadows and differences in tone to distinguish a face.
Basically you split a picture into multiple parts, assign a convolved (median) value to each part from its tone, then you look for pre-determined to locate a face.
Once that's done, you can measure its dimensions and fit the mask onto the face.

There's no magic and it isn't a revolution.
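The block-and-median idea from that post can be sketched in a few lines of Python. This is a toy, nothing like a production face detector: the grid size, the median reduction and the hand-made "pattern" are all made up here for illustration.

```python
# Toy sketch of the idea above: split a grayscale image into blocks,
# reduce each block to its median tone, then slide a pre-determined
# pattern of tones across the grid and score each position.
from statistics import median

def block_medians(img, block):
    """Downsample a 2D grayscale image (list of rows) to per-block medians."""
    h, w = len(img), len(img[0])
    grid = []
    for by in range(0, h, block):
        row = []
        for bx in range(0, w, block):
            vals = [img[y][x]
                    for y in range(by, min(by + block, h))
                    for x in range(bx, min(bx + block, w))]
            row.append(median(vals))
        grid.append(row)
    return grid

def match_pattern(grid, pattern):
    """Return (score, (row, col)) of the best-matching pattern position.

    Lower score = closer match (sum of absolute tone differences).
    """
    gh, gw = len(grid), len(grid[0])
    ph, pw = len(pattern), len(pattern[0])
    best = (float("inf"), None)
    for r in range(gh - ph + 1):
        for c in range(gw - pw + 1):
            score = sum(abs(grid[r + i][c + j] - pattern[i][j])
                        for i in range(ph) for j in range(pw))
            best = min(best, (score, (r, c)))
    return best
```

Real detectors (Viola-Jones cascades and friends) use far smarter features than raw medians, but the "reduce, then scan for a pattern" shape is the same.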

why are you so hostile?

(…) look for pre-determined *patterns*
I forgot a word

Because fuck you, that's why.

>in regards to programming
the way you phrased your question suggests you're about 5 years from understanding how this works, at best. go read up on data structures & algorithms, calculus, computer vision, machine learning - just to understand the very basics. from there you'll probably have a few more years of more specific studying.

Face unlock looks like it's probably faster than Samsung's version, but it'll be a deal-breaker for most people who are used to Touch ID.

Which would you prefer? Pulling your phone out of your pocket with your thumb on the home button so it's unlocked before you even see the screen, or pulling your phone out of your pocket, pressing the power button to wake it, looking at the camera, and then swiping away the lock screen once it's unlocked?

For Apple Pay, would you rather double-press the power button and then stick your thumb on the home button without having to bring the device up to eye level, or double-press, bring it up to eye level, and then put it down on the reader?

Facial recognition is a gimmick. An interesting techie gimmick but not any more secure or convenient than fingerprint scanning.

Damn.

Any good online courses/books on the topic?

it's done by mapping a shitty 3D mesh onto the infrared depth info from the camera
if you look closely, it doesn't even track all that fluidly, but at least it looks convincing

opencv.org

i don't mean to be a dick or anything, OP, i'm just letting you know that things we take for granted, like Snapchat filters, are actually pretty damn sophisticated and built on top of decades of computer science work. this is far beyond "which programming language was used".
if you're just trying to be a code monkey and not really understand the subject, you can probably find easy-to-use SDKs.

Black people can't afford clippers

> get sucker punched by melanin enriched individual
> out cold
> he's slightly above average dark skinned intelligence
> unlocks my phone via face recognition
> gets my credit cards from face recognition
> tweets racially charged ebonix at my employer
> i get fired
> 4 digit pin code could have prevented this

if you're this worried about this scenario happening, then why don't you just add a fucking pin code?

ah right, you don't have a job

Facebook has had this for quite a while

Thank you!

Yeah I'm used to it, not really a newfa/g/.

He's a poo in loo

Both Samsung and Apple couldn't get through-screen fingerprint scanning working.
Samsung put the sensor on the back next to the camera instead, which was a poor choice of placement.
Instead of making the stylistically best decision of their career by turning the Apple logo on the back into the home button, Apple just removed the button completely.

Touch ID would have been just as bad.
PINs, pattern unlocks and passwords are actually secure.

user who has used OpenCV a lot here.
I wouldn't be able to do it as precisely as they did; it's a really nice effect. The team that did ARKit might be one of the best at Apple.

>ah right, you don't have a job
I know what you are implying and you're fucking retarded.

Lots of jobless people have iPhones.

Are you in a fucking highschool underage faggot?
Try Unity fucking faggot

If I may ask, what application did you use it for?

Relax nigger, I'm just trying to learn

From a simpleton's perspective, the new face-measuring setup is similar to the Kinect's: it projects dots onto a face and measures the returning scan of the dots to map a 3D space, which in this case is the face. Similarly to how Snapchat manages to fit a face rig to the image based on the eyes and mouth, the iPhone can figure out the face's shape in 3D space. That's visible in some videos as a polygonized face, denser than the one Snapchat uses. Once you have a 3D face rig, I'd assume applying the mask is the same as applying textures in any 3D modeling software. Superimpose that onto the original image and you get what Snapchat has been doing for several years, just with a bit more detail.
Like others said, the technicalities are way too complex to explain in a nutshell.
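The two geometric steps in that post, structured-light depth and projecting mesh points back onto the photo, boil down to two one-line formulas. A back-of-envelope sketch, with made-up focal length and baseline numbers (not Apple's or Microsoft's actual calibration):

```python
def depth_from_disparity(d_px, f_px=580.0, baseline_m=0.075):
    """Structured light 101 (Kinect-style): a projected dot observed
    shifted by d_px pixels between projector and camera sits at depth
    z = f * b / d, for focal length f (px) and baseline b (metres)."""
    return f_px * baseline_m / d_px

def project(point_3d, f_px=500.0, cx=320.0, cy=240.0):
    """Pinhole camera: map a 3D point in camera space (x, y, z in
    metres) to pixel coordinates, which is how a textured mesh vertex
    ends up drawn at the right spot on the photo."""
    x, y, z = point_3d
    return (cx + f_px * x / z, cy + f_px * y / z)
```

The real pipeline adds lens distortion, mesh fitting and shading on top, but depth-from-dots and point projection are the skeleton of it.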

This is my biggest issue. Touch ID was smooth and a single action. Now you have to click a button on the side, hold the phone up to your face, let it unlock, then swipe up. In the end it's a gimmick, but one that makes things clunky.

How does it project the dots? Does it have an IR emitter? What about the ones used by Facebook and Snapchat? I don't imagine lower-end phones have some sort of IR projector for the dots.