Homeland Security Wants Facial Recognition For All Entering or Leaving US

The federal government is considering a major change to airport security. Facial recognition technology is being used everywhere from our iPhones to CCTV cameras in the streets. It has also been used for years on non-US citizens arriving in the country, but it has not been a requirement for US citizens until now.

Now the Department of Homeland Security wants to expand the use of facial recognition technology to anyone entering or leaving the US. In a recent filing, the DHS proposed amending existing regulations “to provide that all travelers, including US citizens, may be required to be photographed upon entry and/or departure” from the United States, such as at airports.

Michael Hardin, director of entry/exit policy and planning at the Department of Homeland Security, told CNN Business that the rule is in the ‘final stages of clearance’. Since it has not yet been cleared, the rule will not take effect until after a period of public comment.

Facial recognition technology has become ubiquitous in recent years, and it is now remarkably common in airports throughout the world. DHS plans to roll out facial recognition technology to the 20 largest US airports by 2021. A spokesperson for Customs and Border Protection said the agency ‘will ensure that the public has the opportunity to comment prior to the implementation of any regulation’ and that the agency was ‘committed to its privacy obligations.’

China Makes Facial Recognition Mandatory For Smartphone Users

China is making it mandatory for all smartphone users registering new SIM cards to submit to facial recognition scans. The new rule went into effect on Sunday across the entire country.

The guidelines, first announced in September, require telecom companies to deploy ‘artificial intelligence and other technical methods’ to verify the identities of people registering SIM cards. Physical stores across the country had until December 1 to begin implementing the new rules.

The Ministry of Industry and Information Technology described the measure as a way to ‘protect the legitimate rights and interests of citizens in cyberspace’. In practice, the mandatory requirements make Chinese mobile phone and internet users far easier for the government to track.

Mobile phone users are already obligated to register SIM cards with their identity cards or passports, and since last year many telecoms have also begun scanning customers’ faces. A number of social media platforms in China likewise require users to sign up with their ‘real identities’ via their phone numbers.

The increasing use of facial recognition in China has raised serious privacy concerns about information security and consent. The technology is being deployed everywhere from middle schools to concert venues and public transport.

Last month, the country’s first lawsuit against the use of facial recognition was filed by a professor. Guo Bing, a professor at Zhejiang Sci-Tech University, claimed that a safari park in Hangzhou violated the country’s consumer rights protection law by scanning his face and collecting his personal data without his consent.

In September, China’s Education Ministry announced that it would ‘curb and regulate’ the use of facial recognition after parents grew angry over software installed without consent at a university in Nanjing to monitor students’ attendance and focus during class.

Meanwhile, Chinese tech giants are writing UN standards for facial recognition and video monitoring. Human rights advocates consider the measure another step towards a ‘dystopian surveillance state’.

Chinese Tech Groups Shaping UN Facial Recognition Standards

Leaked documents reveal that Chinese tech giants are developing the United Nations’ standards for facial recognition and video monitoring, the Financial Times reports. Among those proposing new international standards are telecommunications equipment maker ZTE, security camera maker Dahua Technology, and the state-owned carrier China Telecom. The standards, covering facial recognition, video monitoring, and city and vehicle surveillance, are being proposed in the UN’s International Telecommunication Union (ITU).

Standards sanctioned by the Geneva-headquartered ITU, which has 193 member states, are very often adopted as policy by developing nations in Africa, Asia, and the Middle East. In these regions, the Chinese government has agreed to supply infrastructure and surveillance technology under its ‘Belt and Road Initiative’.

By writing the standards, companies can craft the regulations to fit the specifications of their own proprietary technology, which in turn gives them an edge in the market.

Chinese influence in international standards-setting bodies such as the ITU and ISO has grown in recent years as the country’s global ambitions expand. ITU standards are highly influential in setting the rules in African countries, which often lack the means to develop such rules themselves. These standards take around two years to be drafted and adopted. As Chinese tech companies seek to improve their facial recognition, especially for people of color, data from African countries is extremely valuable to them. The Chinese government considers standards-writing a means of accelerating its AI leadership ambitions.

The proposals currently under discussion at the ITU have been criticized by human rights lawyers as crossing the line from technical specifications to policy recommendations. Critics argue the proposed standards do not do enough to protect consumer privacy and data.

Face Detection Tool to Fight Bots Under Trial by Facebook

Facebook is currently fighting a $35 billion class-action lawsuit over alleged misuse of facial recognition data. That has not stopped the California-based social network giant from testing another face detection feature that could make users queasy.

Jane Manchun Wong, a Hong Kong-based researcher who reverse-engineers apps, tweeted screenshots of the feature on November 5.

According to Wong, the company is testing verification features that require users to place their faces in a circle and then record a video as they rotate their heads slowly. This is done to prove that they are humans and not bots. 
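The kind of motion-based check described here can be illustrated with a minimal sketch: compare successive video frames and require enough change between them to conclude that a live head movement, rather than a static photo, is in front of the camera. All names and thresholds below are hypothetical and not Facebook's implementation.

```python
# Minimal sketch of a motion-based liveness check: a static photo held up to
# the camera produces near-identical frames, while a turning head produces
# measurable frame-to-frame change. Frames are modeled as grayscale pixel grids.

def frame_difference(frame_a, frame_b):
    """Mean absolute pixel difference between two equally sized frames."""
    total = sum(
        abs(pa - pb)
        for row_a, row_b in zip(frame_a, frame_b)
        for pa, pb in zip(row_a, row_b)
    )
    pixels = len(frame_a) * len(frame_a[0])
    return total / pixels

def looks_live(frames, threshold=10.0):
    """Accept only if the average inter-frame change exceeds the threshold."""
    if len(frames) < 2:
        return False
    diffs = [frame_difference(a, b) for a, b in zip(frames, frames[1:])]
    return sum(diffs) / len(diffs) > threshold

# A "video" of a static photo vs. one with movement (2x2-pixel frames):
static = [[[100, 100], [100, 100]]] * 5
moving = [[[100 + 30 * i, 100], [100, 100 - 20 * i]] for i in range(5)]

print(looks_live(static))  # False: no motion between frames
print(looks_live(moving))  # True: consistent change suggests a live subject
```

A production system would additionally confirm that a face is present in each frame and that the motion is consistent with a head rotation, not merely any pixel change.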

According to Facebook, the video selfies will be deleted after 30 days and will not be seen by others. Facebook discussed the new tool with VentureBeat and vehemently denied that it performs facial recognition. A spokesperson told VentureBeat:

“Instead, it detects motion and whether a face is in the video.”

Facebook also told Engadget that although the tool is under trial, it “does not use facial recognition.” Its only purpose is to detect movement and confirm that you are a human, not a bot. In other words, Facebook is still storing the data but is not using it for facial recognition purposes.

US tech giants are under unprecedented scrutiny over their handling of user data following numerous scandals, including Facebook’s exposure of millions of users’ data to the political consultancy Cambridge Analytica. From July next year, a new privacy law will take effect in California that will let users know what personal information companies collect and how they use it.

Facial Recognition: Burgeoning Threat to Privacy

The expanding use of facial recognition technology for ID verification, user authentication, and accessibility is finally coming under fire from privacy advocates worldwide. Proponents of digital privacy are raising questions about user consent, data context, transparency in data collection, data security, and accountability. Adhering to strict principles of privacy and free speech requires proper regulation aimed at controlled use of facial recognition technology.

Facial scanning systems are used for a variety of purposes: face detection, facial characterization, and facial recognition. As a major pillar of digital identity verification, facial authentication serves as a means of confirming an individual’s identity, storing critical user data in the process. The trade-off works when users gain broader access to digital platforms along with greater transparency about how their data is collected.

The Digital ID Market: A Snapshot

Digital identity verification is changing the way companies work. In Europe alone, the identity verification market is expected to grow by 13.3% from 2018 to 2027, reaching US$4.4 billion. The McKinsey Global Institute estimates that by 2030, digital identification could add the equivalent of 3 to 13 percent of GDP in countries implementing it.

At the same time, cybersecurity threats are on the rise, indicating a glaring need for enhanced security solutions for enterprises. According to Juniper Research, cybercrime caused $2 trillion in losses in 2019 alone, and Forbes predicts this figure will triple by 2021 as more people find ways to mask their identities and engage in illicit activity online.

As a direct consequence, the cybersecurity market is also expected to grow into a humongous $300 billion industry, according to a press release by Global Market Insights.

As technology advances, this figure will likely grow in proportion to the mounting threats in cyberspace, both for individuals and enterprises.

Facial Recognition Data Risks

Formidable forces tug at the digital user from both ends of the digital spectrum. Biometric data, while letting consumers access a wide range of digital services with little friction, also poses serious risks that they may or may not be aware of.

Misused facial recognition data can expose consumers to risks they are generally unaware of, for instance:

  1. Facial spoofs
  2. Diminished freedom of speech 
  3. Misidentification 
  4. Illegal profiling

Much has been said about the use of facial recognition technology in surveillance by law enforcement agencies. At airports, public events, and even schools, facial profiling has led to serious invasions of privacy that are increasingly drawing public attention. While most users are happy to use services like face tagging and fingerprint scanning on their smartphones, privacy activists are springing into action as knowledge and reporting of data breaches grow.

Let’s dig deeper into one of the most potent cybersecurity threats linked to facial recognition technology: deepfakes.

How Deepfakes Impact Cybersecurity

In the world of digital security, deepfakes pose a brand-new threat to industries at large. To date, there are 14,678 deepfake videos on the internet. As barriers to the use of AI are lowered, adversaries gain the same access to advanced technological capabilities as regulators. High rates of phishing attacks are targeting financial institutions, service providers, and digital businesses alike. Enterprises’ public image is at risk, since deepfakes are fully capable of altering video and audio without being detected.

This has profound security implications for identity verification processes based on biometrics, which will find it harder to identify the true presence of a customer. 

With the pervasive use of evolving technology, cybercriminals will find it easier to access sophisticated tools, and nearly anyone will be able to create deepfakes of people and brands. This means higher rates of identity theft, cyber fraud, and smear campaigns against public personalities and reputable brands.

For facial identification software, this means false positives created by deepfake technology can help cybercriminals impersonate virtually anyone in a database. Cybersecurity experts are rushing to integrate better technological solutions, such as audio and video detection, to mitigate the impact of deepfake crimes. More subtle features of a person’s face will be recorded in order to detect impersonators.

However, it is impossible to ignore the speed at which generative adversarial networks are making deepfakes harder to detect. According to experts, the underlying AI technology that supports the proliferation of such impersonation crimes is what will fuel more cyber attacks.

Blockchain technology might also help in authenticating videos. Again, the success of this solution depends on validating the source of the material, without which any individual or enterprise is at high risk of being maligned.
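The core idea, checking footage against a fingerprint recorded at the source, can be sketched without any blockchain machinery. In this toy illustration the "ledger" is just an in-memory dictionary and the fingerprint is a SHA-256 hash from Python's standard hashlib; a real system would anchor the fingerprint on a tamper-evident ledger.

```python
import hashlib

# Toy sketch of source-side video authentication: the publisher records a
# cryptographic fingerprint of the original footage, and anyone can later
# check whether a copy has been altered.

ledger = {}  # stand-in for an append-only, tamper-evident store

def register(video_id, video_bytes):
    """Publisher records the SHA-256 fingerprint of the original footage."""
    ledger[video_id] = hashlib.sha256(video_bytes).hexdigest()

def verify(video_id, video_bytes):
    """Check a copy against the registered fingerprint."""
    return ledger.get(video_id) == hashlib.sha256(video_bytes).hexdigest()

original = b"\x00\x01frame-data"  # placeholder for real video bytes
register("press-briefing-clip", original)

print(verify("press-briefing-clip", original))            # True: untouched copy
print(verify("press-briefing-clip", original + b"edit"))  # False: altered copy
```

Note the limitation the text points out: the hash only proves a copy matches what was registered; it says nothing about whether the registered source itself was authentic.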

Implications Across Users

Gartner warns enterprises about relying on biometric approaches to identity verification, as spoof attacks continue to riddle the digital security landscape. While celebrities can be exploited through misuse of their likenesses in pictures and videos, large corporations are also at high risk of being targeted.

Fabricated announcements about a company or industry trends can trigger stock scares and other financial repercussions. Fake news and misinformation have the potential to cause meltdowns in political landscapes. Additionally, doctored videos on social media can cause an uproar among certain demographics, leading to social unrest.

Identity Verification Technology: A Win-Win Approach

With more and more companies using digital onboarding solutions, the threat of deepfakes is real and must be effectively countered. Companies are no longer looking only for identity solutions that make the best use of customer biometrics; they now have a growing interest in how the stored information is safeguarded against burgeoning cyber threats.

The first step in resolving digital impersonation crimes is to be fully aware that they are possible. Enterprises and professionals need to be apprised of the rising misuse of digital verification software and the likelihood of personal data being compromised.

Face-swapping technologies must now be matched with face detection software that helps identify fake videos and misleading content. In addition, digital security solutions must be ramped up, especially those involving the use of sensitive client data.

Biometric Authentication and Liveness Detection Solutions

Liveness detection, as an added feature of facial recognition, provides an efficient countermeasure to deepfakes as fraudulent attempts to bypass biometric identification with past photos and videos increase. The same technology behind deepfakes can also be employed to counter fraud and spoof attacks, ensuring that personal data is not compromised for cybercrime.

Differentiating between spoofs and real users becomes easier as additional layers of security are added to the verification process. Users are required to appear in front of a camera and capture a selfie or a live video.
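Layering security checks in this way amounts to requiring several independent signals to agree before a session is accepted. A minimal sketch follows; the check names and thresholds are illustrative only, not any vendor's actual pipeline.

```python
# Toy sketch of layered liveness verification: each layer returns a
# pass/fail signal, and the session is accepted only if every layer passes.

def run_liveness_checks(session, checks):
    """Apply each named check to the session data; all must pass."""
    results = {name: check(session) for name, check in checks.items()}
    return all(results.values()), results

# Illustrative layers (a real system would analyze actual image/video data):
checks = {
    "face_present": lambda s: s["face_detected"],
    "motion": lambda s: s["frame_change"] > 0.1,       # static photos fail
    "depth": lambda s: s["depth_variation"] > 0.05,    # flat replays fail
}

live_session = {"face_detected": True, "frame_change": 0.4, "depth_variation": 0.2}
spoof_session = {"face_detected": True, "frame_change": 0.02, "depth_variation": 0.0}

ok, _ = run_liveness_checks(live_session, checks)
print(ok)  # True: every layer passed
ok, details = run_liveness_checks(spoof_session, checks)
print(ok, details)  # False: the motion and depth layers reject the spoof
```

The design choice here is conjunctive: a spoof only needs to fail one layer to be rejected, which is why adding independent layers makes spoofing progressively harder.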

Shufti Pro performs biometric analysis to validate true customer presence, with markers that check eyes, hair, age, and color-texture differences. Coupled with microexpression analysis, 3D depth perception, and human face attribute analysis, this ID verification process ensures maximum protection against digital impersonators.

More on liveness detection as an anti-spoof measure here.