Kin Face authentication algorithm in action (development stage) — Given a set of face images, can we authenticate and cluster them based on unique users? Top row from left to right: Patrick Ewing Jr., Gene Hackman, George H.W. Bush; bottom row: George W. Bush, Eddie Murphy and Patrick Ewing Sr.

When it comes to e-currency, identity theft is certainly one of the biggest fears out there.

Back when transactions were conducted in person at banks, bank clerks required physical identification to ensure that the person requesting the transaction was indeed who they said they were. This is likely the most effective way to check identity, but in today's online world, it is hardly an option. As a replacement, however, we can use our mobile device's camera and microphone to create an augmented/virtual "clerk" that verifies us using our face and voice.

Our goals with facial authentication are as follows:

- Create a simple user experience that also keeps the wallet safe: "You, and ONLY you," can make transactions in the account.
- Reduce the usage of stored private keys, PIN codes, passwords, passphrases, etc.

I would like to introduce the Kin advanced media and learning team. We are focusing on visual algorithms and machine learning to address some of the more complex long-term challenges of bringing crypto to the masses. Our team is investigating several techniques based on 3D face identification in order to:

- Identify and authenticate the user securely.
- Generate a unique passphrase based on the user's face, reducing the need to store a decoded password on the client/server side. This also means the user won't have to remember a long alphanumeric password or private key: your face is the password. This passphrase can be used in addition to other security mechanisms to provide a complete solution.

Our current approach works as follows:

1. Given a set of images from the device's camera, identify and crop the face region using a standard face detection mechanism.

2. Generate a face descriptor vector (usually 128D/256D in size).

3. Cluster faces with close descriptor vectors.

4. If all faces fall into the same cluster, then most likely they belong to the same person.

The face descriptor vector can be used as the main component of the passphrase. If the algorithm manages to separate each unique person into their own cluster, then we can acquire a unique password for each user (without the need to store it).
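The clustering step above can be sketched as a simple threshold-based grouping of descriptor vectors. This is an illustrative sketch, not our production code: it assumes descriptors are already computed (e.g., 128-D vectors from a face-recognition model) and uses a plain Euclidean-distance threshold `delta` against each cluster's first member.

```python
import numpy as np

def cluster_faces(descriptors, delta=0.6):
    """Greedily group face descriptors: a descriptor joins the first
    cluster whose anchor (first member) is within `delta` of it,
    otherwise it starts a new cluster. Returns lists of indices."""
    clusters = []
    for i, d in enumerate(descriptors):
        for cluster in clusters:
            anchor = descriptors[cluster[0]]
            if np.linalg.norm(d - anchor) < delta:
                cluster.append(i)
                break
        else:
            clusters.append([i])
    return clusters

# Toy 128-D descriptors: two near-identical "faces" and one far away.
rng = np.random.default_rng(0)
base = rng.normal(size=128)
descriptors = [base, base + 0.001, base + 10.0]
print(cluster_faces(descriptors))  # → [[0, 1], [2]]
```

If every input image lands in a single cluster, the set is treated as one person; the choice of `delta` directly controls the accuracy trade-off discussed below.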

For testing, we ran the algorithm on several hundred mobile images of Kin developers, taken from different angles and on different devices, in addition to celebrity images from Google Images.

As illustrated in the image above, our “clients” were former NBA player Patrick Ewing Sr., using photos of him at various ages, and his son, so we could check relative/family closeness.

Additionally, we looked at both Presidents George H.W. Bush and George W. Bush for the same reasons as the Ewings. Lastly, we looked at comedian Eddie Murphy, who is known for creating different characters with various types of makeup, and actor Gene Hackman, with and without his mustache and glasses. Notice the different camera/face angles for each person, as well as the diversity in hair and skin color.

The accuracy of the algorithm is determined by selecting the delta distance within which two face descriptors are considered close enough. When we used a higher factor, fewer clusters were created, so different people ended up in the same cluster. A lower factor, on the other hand, created many clusters for a single person, meaning the system didn't recognize the same person twice.

A graded measure of uniqueness is another approach we are examining:

- If the difference between the two vectors is below an initial delta (e.g., 0.2), the grade is 100.

- If the difference is within the range of 0.2 to 0.4, the grade is 90.

- If the difference is higher than 0.5, the grade is 70.

The system will then authenticate a set of images as a single user if the average grade across the images is above some threshold, e.g., 85.
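The grading tiers above can be sketched as follows. Note that the post leaves the 0.4 to 0.5 band unspecified, so this sketch assumes it also scores 90; the function names and the 85 cutoff mirror the example in the text and are illustrative, not a fixed API.

```python
def uniqueness_grade(distance):
    """Map a descriptor distance to a confidence grade, following
    the tiers above. Assumption: the unspecified 0.4-0.5 band is
    treated like the 0.2-0.4 band and also scores 90."""
    if distance < 0.2:
        return 100
    if distance <= 0.5:
        return 90
    return 70

def authenticate(distances, cutoff=85):
    """Accept the image set as a single user if the average grade
    across all pairwise distances exceeds the cutoff."""
    grades = [uniqueness_grade(d) for d in distances]
    return sum(grades) / len(grades) > cutoff

print(authenticate([0.1, 0.25, 0.3]))  # (100+90+90)/3 ≈ 93.3 → True
print(authenticate([0.1, 0.6, 0.7]))   # (100+70+70)/3 = 80.0 → False
```

Averaging grades rather than hard-rejecting on a single distant pair makes the check more forgiving of one bad frame (motion blur, odd angle) while still rejecting sets that are distant overall.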

But standard face detection alone is not enough, since an adversary (a person or an internet bot) can always find a photo of the user and use it to log in to the wallet. In contrast, 3D face detection technology that takes the device's sensors (orientation, accelerometer, gyroscope, etc.) into account requires identification from multiple images taken at different camera/face angles. This makes the attack much harder (although not impossible, as the latest iPhone X Face ID was compromised). Face interaction, like opening your mouth or blinking, and sound (identifying your voice) can also be added to improve authentication.

While the latest mobile phones boast advanced security and authentication capabilities such as face identification, fingerprint, and pattern lock, we must ensure the safety and security of the wallet regardless of the user’s mobile device.

There are still open questions that we need to address. Are all faces unique? What happens with twins or family members? Will the application recognize me with a beard, mustache or different haircut?

We are working to answer these while also keeping in mind that any defensive system can still have vulnerabilities and weak points for hackers. The best solution will probably be a combination of several security/verification layers, as described in our previous post.

Feedback, questions, or ideas? Please feel free to respond and comment in our Telegram technology channel.