Facial recognition technology is primarily used to detect and identify people, but in a twist, Facebook has created a tool that fools facial recognition into misidentifying someone. Facebook's AI research team (FAIR) has developed an AI system that can “de-identify” people in pre-recorded videos and in live video in real time.
The system pairs an adversarial auto-encoder with a trained facial classifier. An auto-encoder is an artificial neural network that learns a compressed representation of a set of data without supervision; a classifier maps input data to labels, and the face classifier here works on facial images and video. Together they slightly distort a person’s face, just enough to confuse facial recognition systems while maintaining a natural look that remains recognizable to humans. The AI doesn’t have to be retrained for different people or videos, and it introduces only a small amount of temporal distortion.
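The core idea, stripped to its essentials, is an adversarial perturbation: change the pixels just enough to lower a face classifier’s confidence while keeping the change too small for a human to notice. The toy sketch below illustrates that trade-off with a stand-in linear “classifier” and a fake image; it is an illustration of the general adversarial principle only, not Facebook’s actual architecture, and every name in it (`identity_score`, the 8x8 image, the `eps` budget) is invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "face classifier": a linear scorer over flattened pixels.
# A higher score means a more confident identity match.
# (A stand-in for a real trained network.)
w = rng.normal(size=64)

# A fake 8x8 "face image", flattened, with pixel values in [0, 1].
x = rng.uniform(0.0, 1.0, size=64)

def identity_score(image):
    """Classifier confidence that this image matches the target identity."""
    return float(w @ image)

# Adversarial step: nudge each pixel slightly against the classifier's
# gradient (for a linear scorer, just w) so the score drops while the
# image barely changes.
eps = 0.02  # max per-pixel change -- small enough to look natural
x_deid = np.clip(x - eps * np.sign(w), 0.0, 1.0)

print(identity_score(x))         # confidence on the original image
print(identity_score(x_deid))    # reduced confidence after perturbation
print(np.abs(x_deid - x).max())  # largest per-pixel change stays <= eps
```

The design choice mirrored here is the one the article describes: the perturbation budget (`eps`) caps how far any pixel can move, which is what keeps the output recognizable to people even as machine confidence falls.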
According to Facebook, “Recent world events concerning advances in, and abuse of face recognition technology invoke the need to understand methods that deal with de-identification. Our contribution is the only one suitable for video, including live video, and presents a quality that far surpasses the literature methods.”
Over the past few years, deepfake videos, in which one person’s face is edited into footage of someone else, have become common. These videos have become so convincing and advanced that it can be difficult to tell the real ones from the fakes.
This de-identification program is built to protect people from such deepfake videos.
In the paper, researchers Oran Gafni, Lior Wolf, and Yaniv Taigman discuss the ethical concerns surrounding facial recognition technology. Citing privacy threats and the misuse of facial data to create misleading videos, they chose to focus on video de-identification.
In principle, it works much like a face-swap app: the system generates a slightly warped, computer-generated version of a person’s face from past images of them and overlays it on their real one. As a result, they still look like themselves to a human, but a computer can no longer pick up the vital identifying details it could extract from a normal video or photo.
According to the team, the software beat state-of-the-art facial recognition systems and was the first to do so on video. It also preserved people’s natural expressions and worked across a diverse range of ethnicities, ages, and genders.
Even though the software is extremely compelling, don’t expect it to reach Facebook’s products anytime soon: Facebook told VentureBeat that it has no plans to implement the research in its products. That said, the practical applications are clear. The software could be used to automatically thwart third parties who use facial recognition technology to track people’s activity or to create deepfakes.
This research comes at a time when Facebook is battling a $35 billion class-action lawsuit over alleged misuse of facial recognition data in Illinois. Facebook isn’t the only company working on de-identification technology: D-ID recently released a Smart Anonymization platform that lets clients remove personally identifying information from photos and videos.