New Study by NIST Reveals Biases in Facial Recognition Technology

The National Institute of Standards and Technology (NIST) recently published a study on how race, age, and sex affect the performance of facial recognition software. The results showed significant biases in the software against people of color and women.

Patrick Grother, a NIST computer scientist and the report’s primary author, said in a statement,

“While it is usually incorrect to make statements across algorithms, we found empirical evidence for the existence of demographic differentials in the majority of the face recognition algorithms we studied.”  

This means the algorithms misidentified people of color more often than white people, and misidentified women more often than men.

The NIST study is robust in scope. NIST evaluated 189 software algorithms from 99 developers, “a majority of the industry”, using federal government data sets that contain roughly 18 million images of more than 8 million people.

The study evaluated how well those algorithms perform in both one-to-one matching, which verifies that a photo matches a claimed identity (as in unlocking a phone or checking a passport), and one-to-many matching, which searches a database of faces for a match (as in identifying a suspect). Some algorithms were more accurate than others, and NIST carefully notes that “different algorithms perform differently.”
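For readers who want the distinction made concrete, here is a minimal sketch of the two matching modes, assuming faces have already been reduced to fixed-length embedding vectors. The embeddings, names, and the 0.8 threshold are hypothetical stand-ins for illustration, not details from the NIST tests.

```python
# Minimal sketch of one-to-one vs. one-to-many matching over face embeddings.
# Random vectors stand in for real embeddings from a face recognition model.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, claimed: np.ndarray, threshold: float = 0.8) -> bool:
    """One-to-one matching: does the probe match a single claimed identity?"""
    return cosine_similarity(probe, claimed) >= threshold

def identify(probe, gallery, threshold=0.8):
    """One-to-many matching: search a whole gallery for the best match."""
    best_id, best_score = None, threshold
    for identity, embedding in gallery.items():
        score = cosine_similarity(probe, embedding)
        if score >= best_score:
            best_id, best_score = identity, score
    return best_id

rng = np.random.default_rng(0)
gallery = {name: rng.normal(size=128) for name in ("alice", "bob", "carol")}
probe = gallery["bob"] + rng.normal(scale=0.05, size=128)  # noisy re-capture
print(verify(probe, gallery["bob"]))  # one-to-one: True
print(identify(probe, gallery))       # one-to-many: 'bob'
```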

False positive rates in one-to-one matching were higher for Asian and African American faces than for Caucasian faces. In some instances, the effect was dramatic, with false positive rates as much as 100 times higher for Asian and African American faces than for their white counterparts.
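As a rough illustration of the metric behind these findings, the sketch below tallies a per-group false positive rate from one-to-one impostor comparisons (pairs of images of different people that the algorithm nonetheless matches). The group labels and trial data are entirely invented.

```python
# Hypothetical sketch: per-group false positive rate in one-to-one matching.
from collections import defaultdict

def false_positive_rates(trials):
    """trials: iterable of (group, same_person: bool, matched: bool)."""
    impostor_attempts = defaultdict(int)  # comparisons of different people
    false_positives = defaultdict(int)    # ...that were wrongly matched
    for group, same_person, matched in trials:
        if not same_person:
            impostor_attempts[group] += 1
            if matched:
                false_positives[group] += 1
    return {g: false_positives[g] / impostor_attempts[g]
            for g in impostor_attempts}

# Four made-up impostor trials, two per demographic group.
trials = [
    ("group_a", False, True), ("group_a", False, False),
    ("group_b", False, False), ("group_b", False, False),
]
print(false_positive_rates(trials))  # {'group_a': 0.5, 'group_b': 0.0}
```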

The study also showed that the algorithms produced more false positives for women than for men, and more for the very young and very old than for middle-aged faces. In the one-to-many scenario, African American women had higher rates of false positives.

The study also found that algorithms developed in Asian countries did not show the same drastic false-positive gap between Asian and Caucasian faces in one-to-one matching. According to NIST, this points to the impact that diversity in training data, or the lack of it, has on the resulting algorithm.

“These results are an encouraging sign that more diverse training data may produce more equitable outcomes, should it be possible for developers to use such data,” Grother said.

Facebook’s New AI Can Help You Circumvent Facial Recognition

Facial recognition technology is primarily used to detect and identify people, but in a turn of events, Facebook has created a tool that fools facial recognition systems into misidentifying someone. Facebook’s Artificial Intelligence Research team (FAIR) has developed an AI system that can “de-identify” people in pre-recorded videos, and in live video in real time.

The system uses an adversarial auto-encoder paired with a trained facial classifier. An auto-encoder is an artificial neural network that learns a compact representation of a set of data without supervision; a classifier maps input data to categories, and the face classifier here works with data from facial images and videos. Together, they slightly distort a person’s face in such a way as to confuse facial recognition systems while maintaining a natural look that remains recognizable to humans. The AI does not have to be retrained for different people or videos, and it introduces only slight temporal distortion.
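As a conceptual sketch only, and not Facebook’s actual architecture, the snippet below shows how an encoder-decoder can be trained against a frozen identity classifier: a reconstruction loss keeps the output close to the input while an adversarial term pushes the classifier away from the true identity. Every layer size, loss weight, and piece of toy data here is an illustrative assumption.

```python
# Conceptual sketch (not Facebook's actual model) of de-identification via an
# adversarial auto-encoder working against a frozen face classifier.
import torch
import torch.nn as nn

class DeidentAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def deident_loss(output, original, id_logits, true_id, w_adv=0.1):
    """Stay visually close to the original, but fool the identity classifier."""
    recon = nn.functional.l1_loss(output, original)
    # Negative cross-entropy: training *maximizes* the classifier's error.
    adv = -nn.functional.cross_entropy(id_logits, true_id)
    return recon + w_adv * adv

# Toy training step with a frozen, randomly initialized "face classifier"
# standing in for a real, pre-trained one.
face_classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 100))
for p in face_classifier.parameters():
    p.requires_grad_(False)

model = DeidentAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

faces = torch.rand(4, 3, 64, 64) * 2 - 1  # fake image batch in [-1, 1]
ids = torch.randint(0, 100, (4,))         # fake identity labels
out = model(faces)
loss = deident_loss(out, faces, face_classifier(out), ids)
loss.backward()
opt.step()
```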

According to Facebook, “Recent world events concerning advances in, and abuse of face recognition technology invoke the need to understand methods that deal with de-identification. Our contribution is the only one suitable for video, including live video, and presents a quality that far surpasses the literature methods.” 

In recent years, deepfake videos, in which a person’s face is edited into footage of someone else, have become common. They have grown so convincing and advanced that it can be difficult to tell real videos from fake ones.

This de-identification program is built to protect people from such deepfake videos. 

In the paper, researchers Oran Gafni, Lior Wolf, and Yaniv Taigman discuss the ethical concerns surrounding facial recognition technology. Because of privacy threats and the misuse of facial data to create misleading videos, they decided to focus on video de-identification.

In principle, it works much like a face-swap app: a slightly warped, computer-generated version of a person’s face, built from past images of them, is laid over their real one. As a result, they still look like themselves to a human, but a computer can no longer pick up the vital bits of information it could extract from a normal video or photo. You can watch this in action in this video here.

According to the team, the software beat state-of-the-art facial recognition and was the first to do so on video. It also preserved the person’s natural expressions and worked across a diverse range of ethnicities and ages, for both genders.

Even though the software is extremely compelling, don’t expect it to reach Facebook anytime soon. Facebook told VentureBeat that it has no intention of implementing the research in its products. That said, the practical applications of the research are clear: the software could automatically thwart third parties that use facial recognition technology to track people’s activity or to create deepfakes.

This research comes at a time when Facebook is battling a $35 billion class-action lawsuit over alleged misuse of facial recognition data in Illinois. Facebook isn’t the only one working on de-identification technology: D-ID recently released a Smart Anonymization platform that lets clients remove personally identifying information from photos and videos.

Remarkable Progress in Facial Recognition Revealed by a New Study

Could you have imagined, a few years ago, that you could be identified from a brain scan? The idea seems striking and close to impossible, but a new study has shown it can be done. According to new research from the Mayo Clinic, facial recognition has become so advanced that it can identify you from your MRI scan. The study used facial recognition software to match photos of volunteers with medical scans of their brains.

How was the Study Conducted? 

MRI (magnetic resonance imaging) is an imaging procedure that produces pictures of the body to observe anatomy and physiological processes, and it is used to diagnose conditions of the brain and spinal cord, including aneurysms and strokes. A brain MRI covers the entire head, including the patient’s face; the resulting image usually shows an outline of the head, including skin and fat but not bone or hair. While the face in an MRI scan appears blurry, imaging technology has advanced to the point that the face can be reconstructed from the scan. That face can then be matched to an individual with facial recognition software.

The study was published in The New England Journal of Medicine on October 24, 2019. It involved 84 volunteers and was prompted by the researchers noticing the high quality of the images used to study the brains of patients with Alzheimer’s disease and dementia.

After the participants agreed to take part, a team led by Christopher Schwarz, a computer scientist at the Mayo Clinic, photographed their faces. A separate computer program reconstructed faces from the MRIs. Commonly available facial recognition software was then used to match the photographs against the reconstructions, and it correctly identified 70 of the 84 subjects, matching patients with their MRI images 83% of the time.
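To show the shape of that evaluation, here is a hedged toy sketch in which each “reconstructed-from-MRI” embedding is matched against every participant photo and the top hit is scored. The random vectors are stand-ins; the actual study used commercial facial recognition software and real images.

```python
# Toy version of the matching evaluation: for each face reconstructed from an
# MRI, pick the most similar participant photo and count correct top matches.
import numpy as np

rng = np.random.default_rng(42)
n_subjects = 84  # same cohort size as the study
photo_embeddings = rng.normal(size=(n_subjects, 128))
# Reconstructions are modeled as noisy copies of the true face embeddings.
mri_embeddings = photo_embeddings + rng.normal(scale=0.6, size=(n_subjects, 128))

def top_match(query: np.ndarray, gallery: np.ndarray) -> int:
    """Index of the gallery face most similar (by cosine) to the query."""
    sims = gallery @ query / (np.linalg.norm(gallery, axis=1) * np.linalg.norm(query))
    return int(np.argmax(sims))

correct = sum(top_match(mri_embeddings[i], photo_embeddings) == i
              for i in range(n_subjects))
print(f"{correct}/{n_subjects} reconstructions matched to the right photo")
```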

The Privacy Threat is Real

Although this was a limited test with a small number of participants, it is still a worrisome advance in technology. According to Schwarz, the development poses real risks to patients who have had MRIs, such as exposing family medical history, diseases, and genetic information. The privacy threat is real, according to Dr. Michael Weiner of the University of California, San Francisco.

Facial recognition has reached new heights, as this study makes evident. Unfortunately, privacy laws are not advancing as fast as the technology. The study makes it apparent that patients’ medical records can easily identify them. Facial recognition technology is also used by police for surveillance, and by banks and financial institutions for security and identity verification. This wide range of uses underscores the need for privacy laws that ensure the technology is used for mankind, not against it.