Enhancing security in the cryptocurrency world with KYC verification

Almost 20 years after the first anti-money-laundering legislation, regulators around the world are working to create global standards for the Know Your Customer (KYC) rule. These standards now apply, in particular, to the financial and technology sectors and to cryptocurrencies. An industry that began with idealized anonymous peer-to-peer payments must now take into account the security standards of traditional finance, which means complying with KYC regulations.

Even though the technology has in some respects stepped far ahead, the cryptocurrency industry's attitude toward KYC has at times been dismissive, and in some cases criminally negligent. That attitude is changing, but the debate over how KYC rules and cryptocurrency should interact is only flaring up.

What is Know Your Customer verification?

Due to the digitization of the international financial system and the increasingly stringent regulation of the industry, regulatory compliance is on the rise. What used to be a secondary area that occasionally caused a headache for investment bankers and traders is now becoming an important centre for big data processing.

KYC is the process of checking who your customers are: whether they are who they claim to be or someone else. Between 2000 and 2010, most jurisdictions (the USA, Canada, most European countries, South Africa, Russia, India, Singapore, South Korea, China, and Japan) passed KYC and AML (anti-money laundering) legislation. As a result, banks and related financial institutions began to comply with the requirements of anti-money laundering law.

Cryptocurrency exchanges now consider fiat on-ramps a core component of their product. This shift has made them dependent on banks and payment systems, which require the same level of compliance that they themselves adhere to.

For almost all organizations involved in payments, KYC rules are designed to prevent criminal activities such as fraud, money laundering, terrorist financing, the use of stolen funds, bribery, corruption, and other suspicious financial transactions. Today, most of the KYC rules are associated with strict regulatory compliance, often in the name of protecting consumer rights. But ultimately it is a risk management process.

Large companies often manage their own KYC in-house, handled by dedicated staff. Small companies, by contrast, outsource verification to third parties. Regardless of who performs it, the process is usually the same: clients must submit identification documents, proof of address, bank statements, and sometimes an explanation of the source of their funds.
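A checklist like this maps naturally to code. The sketch below models a minimal KYC case; the document types, field names, and the high-risk rule are illustrative assumptions, not any provider's actual policy:

```python
# Minimal sketch of a KYC document checklist (illustrative only;
# real providers define their own required-document policies).
REQUIRED_DOCS = {"government_id", "proof_of_address", "bank_statement"}

def kyc_status(submitted_docs, source_of_funds_explained=False, high_risk=False):
    """Return 'approved' or the sorted list of still-missing items."""
    missing = REQUIRED_DOCS - set(submitted_docs)
    # Hypothetical rule: high-risk customers must also explain source of funds.
    if high_risk and not source_of_funds_explained:
        missing = missing | {"source_of_funds"}
    return "approved" if not missing else sorted(missing)

print(kyc_status({"government_id", "proof_of_address", "bank_statement"}))
# -> approved
print(kyc_status({"government_id"}, high_risk=True))
# -> ['bank_statement', 'proof_of_address', 'source_of_funds']
```

Whether KYC is handled in-house or outsourced, the underlying logic is the same: a case is only complete when every required artifact has been collected and checked.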

Keeping confidential documents is as important as using them. In the pre-cloud era, banks had to duplicate documents to guard against the loss of any single copy. Files were copied and stored on various unrelated servers.

Thanks to Amazon and other cloud storage providers, institutions and third-party verification providers now encrypt KYC documents with AES-256 and store them securely on cloud services such as Amazon S3.

KYC in cryptocurrency

In the early years of Bitcoin and the first cryptocurrency exchanges, KYC was practically unknown. Users could make transactions without revealing their identity, often without even creating an account.

Now, most major exchanges and crypto-financial services providers, facing pressure from government agencies, have taken appropriate measures to implement KYC principles. Users voiced their dissatisfaction, but they already encounter these processes in daily life and have come to accept them. Cryptocurrency is still regarded as an anarchist financial instrument within its community. And yet, when it comes to shareholders, a business needs to play by the rules to attract investment and achieve faster growth.

Despite the work being done to eradicate illegal or unethical behaviour and legitimize the industry, more than two-thirds of cryptocurrency exchanges last year still had not adopted sufficient KYC controls.

Those who continue to operate without applying KYC will be constantly at risk, and most likely they will have to work from obscure or secretive jurisdictions that shield them from lawsuits. This approach is unlikely to fill users with confidence. Ultimately, such exchanges will attract "dark money" and only a small number of law-abiding, privacy-conscious consumers. As a result, expansion becomes virtually impossible.

Decentralized exchanges (DEXs) were once seen as the next wave of cryptocurrency exchanges. The idea is that anonymous peer-to-peer trading can solve many of the problems of centralized exchanges, including the need for KYC. Large exchanges have already implemented their own versions of DEXs, but they have selectively adopted the advantages of decentralization: lower infrastructure costs, increased security, and user-controlled tools.

Cryptocurrency is a global game. If you want to conquer new markets, you must demonstrate achievements in corporate responsibility. Selective decentralization is the best answer to this challenge. Finding a suitable provider that can ensure uninterrupted operation at minimal cost while guaranteeing data security is the real challenge the industry faces.

The future of KYC

We are currently witnessing the advent of RegTech 3.0, regulatory technology that digitizes a wide range of compliance processes. RegTech is designed to reduce costs, improve consumer protection, and identify risks long before regulators intervene. It combines new technologies such as artificial intelligence, machine learning, RPA (robotic process automation), and biometrics, while also signalling a significant change in strategic direction: instead of an isolated review of consumers and their behaviour, regulatory compliance is now seen as a valuable source of data.

Self-sovereign identity is another concept actively explored by researchers in the blockchain world who are looking for an alternative to centralized identity on the Internet, where the verifiers, not those being verified, are in control.

For the time being, though, machine learning and biometrics are considered the most promising technologies for enhancing online security and verifying customer identity. With tangible results in customer satisfaction and data security, investment in better technology is recognised as key to an efficient financial ecosystem. Real-time, AI-driven KYC verification could give the crypto industry a real push.

AI face recognition for total automation

Face recognition is everywhere, but we are still unable to say goodbye to documents, perhaps because we remain far from the total automation we hope to achieve with this technology.

Research indicates that the global facial recognition market is expected to grow at a compound annual growth rate (CAGR) of 16.6%, reaching a record-high value of USD 7.0 billion by 2024. This growth is quite justified, as we see face recognition everywhere in our day-to-day lives.

Nations such as the U.S., the U.K., EU member states, and China are leading the convoy of these technological advancements, in which one's face will replace identity documents. Face verification solutions are used everywhere, from our mobile phones and Facebook to crime control agencies and airports. But this technology has distinct limitations and benefits in each industry where it is used, which will be explored in this blog.

Let’s explore some use-cases of face recognition. 

Travel is becoming document-free

An increasing number of airports are using face verification solutions to verify the identity of their passengers. Airports are commonly exploited for serious crimes such as human trafficking, money laundering, and drug trafficking, and documents are losing their value as proof, which leads airports to adopt advanced technologies for passenger screening. Passenger security and travel experience are equally significant, and face verification satisfies both requirements. Facial recognition achieves accuracy rates as high as 98.67%, with results delivered within 15-60 seconds, enhancing customer experience by reducing the time spent on security processes.

Given these valuable benefits of face recognition technology, it is predicted that 97% of airports will roll out this technology by 2023. But this is not the end of the story, because most airports do not yet use the technology throughout the passenger flow, perhaps because they lack the resources of Hartsfield-Jackson Airport. That airport, in Atlanta, uses a multi-faceted facial recognition system that scans passengers to verify their identities at various checkpoints, providing an ultra-modern view of what the future holds for the travel industry.

Businesses utilizing facial recognition

Businesses are always in a bid to deliver the best to their customers, especially those that operate in fully private sectors with no government players. Industries such as e-commerce, fintech, regtech, identity verification, and blockchain are trending because they are carving out how business will be conducted in the future.

These industries are the primary users of next-generation technologies due to the nature of their products and services. The identity verification industry uses face recognition in customer due diligence and identity screening solutions, combining face verification with document screening to ensure fool-proof security and seamless customer onboarding. Fintech and blockchain ventures use these identity verification solutions to onboard their remote customers without false positives.

Beyond that, tech giants such as Google, Facebook, and Amazon use in-house face recognition systems, and mobile phone manufacturers use face recognition to give their customers a better experience. All these unconventional use-cases have made the technology a household term.

Face recognition for crime control

Crime control authorities are always in a bid to close the loopholes in social infrastructure that can lead to criminal activity. The FBI uses facial recognition to identify suspects, and if a match is found in its criminal database, it can be used as evidence to pursue a lawsuit.

Police departments in some U.S. states also use face recognition technology to identify criminals in recordings from public cameras. These departments are still refining the initiative, as social activists have found privacy-related loopholes in it.

What are the hurdles in the growth of face recognition technology?

Regulatory authorities have not given a free hand to the entities in using this technology. There are some restrictions on the use of public data. 

The anti-surveillance ordinance signed by San Francisco's Board of Supervisors banned city agencies from using face recognition technology in 2019. Other cities that ban the use of surveillance camera recordings to find suspects include Somerville, Oakland, and San Diego.

The best way to overcome these hurdles is to use the technology responsibly and to handle customer data with care.

Customer data and privacy are the primary concerns of regulatory authorities, which is why laws such as the GDPR and CCPA have been introduced to control the private sector's use of customer data.

So far, the private sector has been least affected by these hurdles; compliance with data protection regulations and cautious use of customer data will allow the facial recognition industry to achieve its projected growth.

To wrap up, businesses from every corner of the world are onboarding remote customers and initiating relationships with global business entities. Face recognition solutions enable them to onboard customers and to offer secure logins in the future. As businesses have a bigger and clearer field to play in, fully utilizing the potential of this technology, controlled by data protection practices, will lead to sustainable growth. As for the public sector and government agencies, the future is bright once they have developed reliable in-house solutions.

Facial Recognition Market to Grow 12.5% Annually Through 2024

According to a new report by Mordor Intelligence, the facial recognition market is expected to grow at an annual rate of 12.5% over the forecast period 2019 to 2024. In 2018, the market was valued at $4.51 billion, and by the end of 2024 it is expected to reach $9.06 billion, according to the press release.
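These figures can be sanity-checked against the standard CAGR formula. A quick calculation, assuming 2018 as the base year and six years of growth:

```python
# Sanity-check the cited market figures with the CAGR formula:
# CAGR = (end_value / start_value) ** (1 / years) - 1
start, end, years = 4.51, 9.06, 6  # USD billions, 2018 to 2024

cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")  # roughly 12.3%, close to the reported 12.5%

# Conversely, projecting 12.5% annual growth from the 2018 base:
projected = start * (1 + 0.125) ** years
print(f"projected 2024 value: ${projected:.2f}B")  # roughly $9.14B
```

The implied and reported rates differ only slightly, which is typical rounding in market press releases.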

Facial recognition is quickly picking up pace and is expected to surpass fingerprint scanning in the future. At present, around 94% of smartphones feature fingerprint sensors, but this is expected to drop to 90% by 2023.

(Image courtesy: Mordor Intelligence)

Growth in the 3D camera market is also expected to bring advancements and new applications for 3D facial recognition technology, benefitting healthcare, commerce, payments, and IT solutions.

Facial recognition systems are also being adopted for widespread mass surveillance to enhance safety and security. This is another reason for the increased market for facial recognition. Government-led initiatives are also contributing to the double-digit growth of such technologies.  

North America is expected to hold the highest market share for facial recognition technology, as it offers huge opportunities in homeland security and criminal investigations. The biggest facial recognition system in North America is operated by the FBI, whose ID system maintains a database with data on more than 117 million Americans and conducts an average of 4,055 searches every month to identify individuals. In 2017, the US alone witnessed 1,579 data breaches, 8% of which were reported by financial institutions. These factors make facial recognition technology imperative as an enhanced layer of security.

Digital Currency ‘Sand Dollar’ Launched by the Bahamas

The Central Bank of the Bahamas (CBOB) has introduced a digital version of the Bahamian dollar, starting with a pilot phase in Exuma in December 2019. The digital currency will then be extended to Abaco in the first half of 2020.

This initiative is termed 'Project Sand Dollar', and the sand dollar is also the name given to the proposed central bank digital currency (CBDC). The initiative is a continuation of the Bahamian Payments System Modernization Initiative (PSMI), which began in the early 2000s.

The CBOB said in a statement, “The Bahamian PSMI targets improved outcomes for financial inclusion and access, making the domestic payments system more efficient and non-discriminatory in access to financial services.”

The bank did note that the digital currency is not a stablecoin or a cryptocurrency but simply a digital version of the existing paper currency. The intention behind the digital currency is to help smooth things over for people who don't have access to a physical bank.

The press release further stresses that the bank is doing its best to make sure the services are available to everyone and streamlined to be as fast as they can be. This process includes accelerating payments system reform, adding new categories of financial service providers and using the digital payments infrastructure to ensure the accessibility of traditional banking services for everyone. 

There will be certain limitations on the sand dollar as well. For now, businesses can't hold more than $1 million in their digital accounts, and residents max out at $500. Businesses also aren't allowed to transact more than one-eighth of their digital wealth per month.
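The limits described above are simple enough to express as rules. The sketch below encodes them for illustration; the function names and account structure are hypothetical, not part of any CBOB system:

```python
# Illustrative check of the sand dollar holding and transaction limits
# described above (names and structure are hypothetical, not CBOB's API).
HOLDING_CAPS = {"business": 1_000_000, "resident": 500}  # sand dollars
BUSINESS_MONTHLY_FRACTION = 1 / 8  # max share of digital wealth moved per month

def can_receive(account_type, balance, amount):
    """A deposit is allowed only while the balance stays within the cap."""
    return balance + amount <= HOLDING_CAPS[account_type]

def can_transact(account_type, wealth, spent_this_month, amount):
    """Businesses may move at most one-eighth of their digital wealth per month."""
    if account_type != "business":
        return True
    return spent_this_month + amount <= wealth * BUSINESS_MONTHLY_FRACTION

print(can_receive("resident", 450, 100))                  # False: exceeds $500 cap
print(can_transact("business", 800_000, 90_000, 20_000))  # False: over 1/8 of wealth
```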

According to the Governor of CBOB, John Rolle, the conditions are favorable for the more widespread adoption of electronic payment systems. 

Homeland Security Takes Back Its Plans of Facial Recognition for US Citizens

The Department of Homeland Security (DHS) is taking back its decision to implement a policy that would require all US citizens to undergo facial recognition scans while entering or leaving the US. The policy was introduced last week and required US citizens to have their faces scanned and added to a biometric database. 

Now, US citizens will not be required to participate in facial recognition scans at airports, with DHS retracting the policy. Customs and Border Protection (CBP) said on Wednesday that the reversal in policy was the result of conversations with 'privacy experts', lawmakers, and travel-industry stakeholders.

John Wagner, a CBP official, said in a statement, “CBP is committed to keeping the public informed about our use of facial comparison technology. We are implementing a biometric entry-exit system that protects the privacy of all travelers while making travel more secure and convenient.”

Non-US citizens are already required to undergo facial recognition scans when entering the United States. When it was announced last week that CBP would require US citizens to go through facial recognition scans as well, the proposed rule was met with backlash from privacy and human rights advocates. 

American Civil Liberties Union analyst, Jay Stanley, said in a statement, 

“This proposal never should have been issued, and it is positive that the government is withdrawing it after growing opposition from the public and lawmakers.” 

Homeland Security Wants Facial Recognition For All Entering or Leaving US

The federal government is considering changing airport security in a major way. Facial recognition technology is being used everywhere, from our iPhones to CCTV cameras in the streets. The technology has also been used for years on non-US citizens arriving in the country, but it has not been a requirement for US citizens until now.

But now the Homeland Security wants to expand the use of facial recognition technology for anyone entering and leaving the US. In a recent filing, the DHS proposed amending existing regulations “to provide that all travelers, including US citizens, may be required to be photographed upon entry and/or departure” from the United States, such as at airports.

Director of entry/exit policy and planning at the Department of Homeland Security, Michael Hardin, told CNN Business that for now, the rule is in the ‘final stages of clearance’. But since it hasn’t been cleared yet, the rule won’t go into effect until after a period of public comment. 

Facial recognition technology has become ubiquitous in recent years, becoming remarkably common in airports throughout the world. DHS is to roll out facial recognition technology to the 20 largest US airports by 2021. A spokesperson for Customs and Border Protection said the agency 'will ensure that the public has the opportunity to comment prior to the implementation of any regulation' and that it is 'committed to its privacy obligations.'

China Makes Facial Recognition Mandatory For Smartphone Users

China is making it mandatory for all smartphone users who register new SIM cards to submit to facial recognition scans. The new rule went into effect on Sunday across the entire country. 

The guidelines first announced in September require telecom companies to deploy ‘artificial intelligence and other technical methods’ in order to verify the identities of people registering SIM cards. Physical stores across the entire country had time until December 1 to begin implementing the new rules.  

The Ministry of Industry and Information Technology described the measure as a way to 'protect the legitimate rights and interest of citizens in cyberspace'. These mandatory requirements make Chinese mobile phone and internet users far easier for the government to track.

Mobile phone users are already obligated to register SIM cards with their identity cards or passports. Since last year, many telecoms have begun scanning customers' faces. A number of social media platforms in China also require users to sign up with their 'real identities' via their phone numbers.

The increasing use of facial recognition in China has raised a lot of privacy concerns about information security and consent. Facial recognition is being used from middle schools to concert venues and public transport. 

Last month, the country’s first lawsuit was filed by a professor against the use of facial recognition. Guo Bing, a professor at Zhejiang Sci-Tech University claimed that a safari park in Hangzhou violated the country’s consumer rights protection law by scanning his face and taking his personal data without his consent. 

In September, China's Education Ministry announced that it would 'curb and regulate' the use of facial recognition after parents grew angry at facial recognition software installed, without their consent, at a university in Nanjing to monitor students' attendance and focus during class.

Chinese tech giants are also writing standards for the UN on facial recognition and video monitoring. Human rights advocates consider the measure another step toward a 'dystopian surveillance state'.

Chinese Tech Groups Shaping UN Facial Recognition Standards

Leaked documents reveal that Chinese tech giants are developing the United Nations' standards for facial recognition and video monitoring, as reported by the Financial Times. Among those proposing new international standards are telecommunications equipment maker ZTE, security camera maker Dahua Technology, and the state-owned Chinese telecommunication company China Telecom. The new standards are being proposed in the UN's International Telecommunication Union (ITU) for facial recognition, video monitoring, and city and vehicle surveillance, according to the report.

Standards sanctioned by the Geneva-headquartered ITU, which has 193 member states, are very often adopted as policy by developing nations in Africa, Asia, and the Middle East, regions where the Chinese government has agreed to supply infrastructure and surveillance technology under its 'Belt and Road Initiative'.

By writing the standards, companies are able to craft the regulations to fit the specifications of their own exclusive technology which in turn gives these companies an edge in the market. 

Chinese influence in international standards-setting bodies such as the ITU and ISO has grown in recent years as the country's global ambitions expand. ITU standards are highly influential in setting the rules in African countries, which often lack the means to develop rules themselves. These standards take around two years to be drafted and adopted. As Chinese tech companies seek to improve their facial recognition, especially for people of color, data from African countries is extremely important to them. The Chinese government considers writing standards a means of accelerating its AI leadership ambitions.

The proposals currently under discussion at the ITU have been criticized by human rights lawyers as crossing the line from technical specifications into policy recommendations, and as not doing enough to protect consumer privacy and data.

The Definitive Guide to Anti-Money Laundering & Countering of Terrorist Financing

In this modern globalized era, money launderers, terrorist financiers, and other criminal elements have come up with a slew of resourceful ways to accomplish their malicious goals. It is common practice for these groups to misuse the services of legitimate businesses such as banks and other financial institutions (FIs) to convert ill-gotten gains into 'good money'. To counter such criminal activity, FIs rely on procedures and systems aimed at acquiring customer knowledge.

One of the major issues is that many legitimate entities turn out not to be compliant with AML (anti-money laundering) regulations, which increases the probability that bad actors will succeed in financing terrorists and drug dealers. Any legal entity that knowingly or unknowingly becomes part of money laundering or terrorist financing will suffer enormous regulatory penalties. It is therefore crucially important for FIs to establish a strong internal system of controls that allows them to identify fraudulent entities and unusual money flows, even when criminals use their best efforts and abilities to elude the rules.

When an entity makes substantial illicit profits, it looks for ways to use or store the funds without drawing inspectors' attention to the underlying suspicious activity or to the criminal entities behind it. In money laundering, criminals disguise the source of their money, change its form, or move it to a place less likely to attract attention. Embezzled funds are thus converted into 'good money' to be enjoyed.

The Palermo Convention, or United Nations Convention against Transnational Organized Crime, defines money laundering as:

“The conversion or transfer of property, knowing that it is the proceeds of crime, to conceal the illicit origin of the money or to help an individual who is involved in a predicate offence evade the legal consequences of his actions.”

“The concealment of the true source, nature, location, movement, ownership, or disposition of property, knowing that it is the proceeds of crime.”

“The acquisition, possession, or use of property, knowing at the time of receipt that it is the proceeds of crime.”

The Financial Action Task Force (FATF) is an inter-governmental body established in 1989 by a group of seven industrialized nations to combat money laundering. FATF has dispelled the notion that money laundering happens only through cash transactions: in reality, it can be carried out through virtually any medium, whether a financial institution or another kind of business.

Sources of Dirty Money

In simple words, money laundering means “the conversion of dirty money into good money.”

Following are some of the sources of dirty money:

  • Drug and arms trafficking
  • Criminal offences
  • Gambling
  • Smuggling
  • Bribery
  • Online fraud
  • Tax evasion
  • Kidnapping, and many more

Methods and Stages of Money Laundering

Money laundering involves three stages: placement, layering, and integration.

  • Placement

This stage is the movement of illicit funds away from their source, during which the origin is manipulated or concealed. The money is then circulated through FIs, shops, casinos, the legal sector, or other businesses (both local and abroad). In simple words, in this phase illegal funds are introduced into the financial system.

Placement can be carried out through several methods:

Currency Smuggling: The physical movement of currency out of the country.

Bank Complicity: When FIs collude with criminal entities such as drug dealers or organized criminal groups, money laundering becomes an easy process. A lack of AML procedures and checks also paves the way for money launderers.

Currency Exchanges: Foreign currency exchange service providers open ways for money launderers for seamless currency movements.

Securities Brokers: Brokers can facilitate money laundering by structuring large funds in ways that conceal the original source of the illicit money.

The Blending of Funds: Small amounts of illicit funds are blended with large deposits of legal funds.

Asset Purchase: Assets are purchased with dirty money. This process is the most common method to hide dirty money. The real estate sector is misused by money launderers and real estate agents knowingly or unknowingly facilitate bad actors.

  • Layering

This process involves the transfer of funds to several accounts or FIs’ accounts to further separate the original money source. This makes complex layers of transactions that help conceal the source and ownership of illegal funds. Hence, makes it difficult for law enforcement agencies to track the money flow. 

The methods of layering include;

Cash Conversion into Monetary Instruments: After successfully placing money with FIs or banks, launderers convert the proceeds into monetary instruments such as banker's drafts and money orders. Material assets are bought with the cash and sold locally or abroad, making the assets much harder to trace.

  • Integration 

In this stage, laundered money is moved back into the economy through the banking system, where it appears like ordinary business earnings. At the integration stage, laundered funds are typically detected and identified only through informants.

Integration methods include;

Property Dealing: Among criminals, property sale to hide dirty money is a common practice. For instance, criminal groups buy properties using shell companies.

Front Companies and False Loans: Front companies, incorporated in countries with corporate secrecy laws, are used by money launderers to lend themselves laundered proceeds that appear to be legitimate loans.

Foreign Bank Complicity: Money laundering is conducted using foreign banks. It gets hard for law enforcement agencies to point them out due to their sophisticated systems. 

False Invoices: Creating false invoices through import/export companies has proven to be an effective way of hiding illicit money. This method involves overvaluing products to justify the funds.

Terrorist financing is one of the major threats we face today. Who knows whether our services are being used for terrorist financing? Sometimes even legally earned money is transferred to finance terrorism.

For terrorists, no matter how small the amount, money is their lifeblood.

Just like money laundering, terrorist financing is a predicate offence. Early detection and immediate countermeasures are the only ways to combat it.

Concerns of Countries and Governments around the World

United Kingdom

MLR-2017, the Money Laundering, Terrorist Financing and Transfer of Funds (Information on the Payer) Regulations, came into force in the UK on June 26, 2017. The new UK regulations tighten the reins on money laundering in resourceful ways.

To combat money laundering, UK regulations require identity verification of customers before services are provided to them. AML compliance is mandatory when it comes to screening customers against Politically Exposed Persons (PEPs) lists, sanctions lists, high-risk customer records, and updated criminal databases. In addition, employee training is declared mandatory. Previously, the regulations covered only casino operators; they now extend to all gambling providers.
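At its core, watchlist screening is a lookup of a customer's name against PEP and sanctions data. The sketch below shows the idea with exact matching only; the list contents are invented, and production systems use licensed data plus fuzzy name matching:

```python
# Minimal sketch of AML name screening against watchlists (illustrative;
# real screening uses fuzzy matching and licensed PEP/sanctions data).
PEP_LIST = {"jane example"}        # hypothetical politically exposed persons
SANCTIONS_LIST = {"acme frontco"}  # hypothetical sanctioned entities

def screen_customer(name):
    """Return the watchlists a customer name appears on (exact match only)."""
    normalized = " ".join(name.lower().split())  # case- and spacing-insensitive
    hits = []
    if normalized in PEP_LIST:
        hits.append("PEP")
    if normalized in SANCTIONS_LIST:
        hits.append("sanctions")
    return hits  # an empty list means no hit

print(screen_customer("Jane  Example"))  # ['PEP']
print(screen_customer("John Doe"))       # []
```

A PEP or sanctions hit does not automatically mean rejection; it typically triggers enhanced due diligence on that customer.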

China

Anti-money laundering (AML) regulations in China primarily focus on KYC (Know Your Customer) verification of customers through identity verification protocols. China's government has issued AML/CFT regulations for online financial institutions, and the FATF report for the People's Republic of China states that China has a strong understanding of its money laundering and terrorist financing risks.

Under China’s AML/CFT regulations, legitimate entities are required to verify their customers with identity proof (such as government-issued ID cards). In addition, regular identity checks are declared important in case of a change in the records of beneficiaries or in the regulations. Any suspicious transaction crossing the minimum transaction threshold should be reported immediately to the relevant authority. China is taking stringent measures in its AML compliance program to combat money laundering and terrorist financing.

The United States of America

In the USA, the governing law is the Bank Secrecy Act (BSA). With several amendments, this act is quite detailed and covers broad perspectives on the money laundering risks of financial institutions. The BSA was designed to identify the source, volume, and movement of laundered money and monetary instruments. Under the BSA, banks and other financial institutions must report transactions over $10,000 through currency transaction reports. 

In addition, CDD processes are mandatory for businesses operating in the USA. AML screening of customers against several criminal databases and updated records is necessary to comply with AML regulations. Additional federal laws have been passed to strengthen the rules and regulations under the BSA. Anti-money laundering programs in the USA continue to change, and their scope will extend over time.
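The $10,000 currency-transaction-report rule described above amounts to a simple screening step. A minimal sketch follows; the transaction schema and field names are illustrative assumptions, not part of the BSA itself:

```python
from dataclasses import dataclass

CTR_THRESHOLD = 10_000  # BSA currency transaction report threshold, in USD


@dataclass
class Transaction:
    customer_id: str
    amount: float


def transactions_requiring_ctr(transactions):
    """Return transactions that must be reported via a currency transaction report."""
    return [t for t in transactions if t.amount > CTR_THRESHOLD]


txns = [Transaction("A", 9_500.0), Transaction("B", 12_000.0)]
print([t.customer_id for t in transactions_requiring_ctr(txns)])  # ['B']
```

Real monitoring systems also aggregate related transactions to catch structuring (splitting one large transfer into several just under the threshold), which a per-transaction filter like this would miss.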

Canada

FINTRAC, the Financial Transactions and Reports Analysis Centre of Canada, has recently released a final version of its rules, setting out amendments to the regulations under the Proceeds of Crime (Money Laundering) and Terrorist Financing Act (PCMLTFA). The changes make Canada’s compliance procedures and policies significantly more stringent. 

According to these regulations, the financial services industry needs to be dynamic to reduce money laundering and terrorist financing activities. Virtual currency services and digital payment methods have opened ways for fraudsters to transfer embezzled funds across the world. Moreover, the regulations now extend to prepaid card issuers, virtual currency providers, and foreign Money Services Businesses (MSBs).

Risks for Banks and Financial Institutions 

Money laundering and terrorist financing affect the overall economy of the world. Regulated entities such as banks and financial institutions are the primary channels through which money moves in a country, which makes them opportunities for fraudsters to transfer ill-gotten gains to different corners of the world. There are several risks associated with the maintenance and supervision of banking relationships of which FIs and their employees should be aware. The interconnections between banks should be secure and well organized to track unusual flows of transactions. Otherwise, regulatory scrutiny can result in hefty penalties, including monetary fines, imprisonment, temporary or permanent business shutdown, and asset freezing. 

What are KYC and KYB?

Know Your Customer (KYC) is the process of verifying the identity of customers. It is part of Customer Due Diligence (CDD). Identity verification plays an important role in combating high-risk customers. KYC is the term most commonly used in banks and financial institutions for customer verification, but it is now needed in almost all industries because of the extended scope and reach of fraudulent schemes. 

Also, to comply with the obligations of global and local regulatory authorities, businesses need to verify their onboarding customers. To establish the credibility of customers, the KYC verification process makes sure that a person is actually who they say they are. The scope of KYC extends beyond customers to agents, businesses, corporate entities, and third-party verification. This is what we call ‘Know Your Business’, or KYB. 

KYB involves the verification of the businesses your company is dealing with. This is important to ensure that your business operations are conducted with honest and registered entities. To avoid regulatory fines, verification of Ultimate Beneficial Owners (UBOs) is declared mandatory. FATF’s AML recommendations explicitly state the importance of UBO screening for businesses in combating money laundering and terrorist financing. 

What is EDD?

Enhanced Due Diligence (EDD) collects additional customer information beyond that gathered during the CDD process. To combat the risks posed by high-risk customers in an organization, thorough screening is performed. In-depth verification of customers is conducted by verifying their identity and collecting not only personal but also financial information. The following EDD information is collected at verification time:

  • Business/ occupation
  • Financial status
  • Income
  • Location
  • Private/correspondent banking information
  • Continuous transaction monitoring, etc. 

Enhanced Due Diligence Factors

High-risk Customer Factors

 

  • Verification of whether customers are foreigners or non-residents
  • Personal vehicle information of legal entities
  • Verification of whether customers’ relatives or family members appear on PEP lists
  • Businesses that are cash-intensive
  • Risk assessment of the company against AML policies and parameters

Geography Risk Factors

 

  • Countries that lack AML/CFT practices and are prone to money laundering and terrorist financing
  • Countries that appear on sanctions lists or have high criminal records
  • Countries that are blacklisted for facilitating criminal activities
  • Countries that are not FATF members, etc.
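EDD programs typically combine customer and geography factors like those above into a risk score that decides whether enhanced checks are triggered. The sketch below is a minimal illustration; the factor names, weights, and threshold are hypothetical assumptions, not values from any regulation:

```python
# Hypothetical weights; a real EDD program calibrates these against its own risk model.
CUSTOMER_RISK_FACTORS = {
    "non_resident": 2,
    "pep_family_link": 3,
    "cash_intensive_business": 2,
}
GEOGRAPHY_RISK_FACTORS = {
    "weak_aml_cft_regime": 3,
    "sanctioned_country": 4,
    "non_fatf_member": 1,
}


def edd_risk_score(customer_flags, geography_flags):
    """Sum the weights of all risk factors that apply to a customer."""
    score = sum(CUSTOMER_RISK_FACTORS[f] for f in customer_flags)
    score += sum(GEOGRAPHY_RISK_FACTORS[f] for f in geography_flags)
    return score


def requires_edd(score, threshold=4):
    """Trigger enhanced due diligence once the combined score reaches the threshold."""
    return score >= threshold


score = edd_risk_score({"non_resident"}, {"sanctioned_country"})
print(score, requires_edd(score))  # 6 True
```

The point of the design is that no single factor decides the outcome: a non-resident customer alone stays below the threshold, but combined with a sanctioned-country link the score tips into EDD territory.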

Importance of Watchlists and PEPs

Bad actors operate all around the world. A business providing services across the globe should be well aware of the policies and regulations under which it operates. Similarly, it should know the high-risk entities of partner countries. Updated records of criminals, money launderers, terrorist financiers, online fraudsters and hackers, and several other watchlists issued by law enforcement agencies should be maintained to verify each onboarding customer and secure the organization.

In addition, identities should be verified against lists of PEPs and their relatives to make sure that no fraudulent identity is facilitated through your legitimate business. In case of any discrepancy, businesses can be subject to heavy regulatory fines. Hence, this is both a regulatory requirement and a security measure protecting the business from malicious entities in the financial system.
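Screening an onboarding customer against PEP and sanctions watchlists is, at its core, approximate name matching against maintained lists. A minimal sketch using fuzzy string matching follows; the list contents and the 0.85 similarity cutoff are illustrative assumptions, and production screening uses far more robust matching (transliteration, aliases, dates of birth):

```python
import difflib

# Toy watchlists with made-up entries.
PEP_LIST = {"jane q. public", "john a. doe"}
SANCTIONS_LIST = {"acme shell holdings"}


def screen_name(name, watchlists, cutoff=0.85):
    """Return (list_name, matched_entry) pairs where the name closely matches an entry."""
    hits = []
    for list_name, entries in watchlists.items():
        matches = difflib.get_close_matches(name.lower(), entries, n=1, cutoff=cutoff)
        if matches:
            hits.append((list_name, matches[0]))
    return hits


watchlists = {"PEP": PEP_LIST, "sanctions": SANCTIONS_LIST}
print(screen_name("Jane Q Public", watchlists))  # [('PEP', 'jane q. public')]
```

Fuzzy matching matters because bad actors rarely spell their names exactly as listed; an exact-match lookup would miss the punctuation variant shown here.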

Reporting Suspicious Transactions

In a financial system, no suspicious transaction should be ignored. To prevent money laundering and terrorist financing, transactions should be reported immediately. Under the requirements of regulatory authorities and anti-money laundering laws, reporting entities must submit Suspicious Transaction Reports (STRs), regardless of the number of fraudulent transactions involved. A suspicious transaction is one that:

  • Appears unusual
  • Appears illegal
  • Is performed above the specified threshold
  • Is part of frequent transactions from one identity
  • Has no clear economic purpose
  • Shows indications of money laundering or terrorist financing

As discussed in the AMLA section, failure to report an STR is an offence that can be subject to a regulatory fine.
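The STR criteria listed above can be expressed as simple heuristic rules. The sketch below is an assumption-laden illustration: the transaction schema, threshold, and frequency limit are invented for the example, and real transaction-monitoring systems layer far richer behavioral analytics on top:

```python
def str_reasons(txn, threshold=10_000, recent_count=0, max_frequency=5):
    """Return the reasons (if any) a transaction should be flagged for an STR.

    txn: dict with 'amount' and 'economic_purpose' keys (illustrative schema).
    recent_count: number of recent transactions from the same identity.
    """
    reasons = []
    if txn["amount"] > threshold:
        reasons.append("above threshold")
    if not txn.get("economic_purpose"):
        reasons.append("no clear economic purpose")
    if recent_count > max_frequency:
        reasons.append("unusually frequent")
    return reasons


print(str_reasons({"amount": 15_000, "economic_purpose": ""}, recent_count=8))
# ['above threshold', 'no clear economic purpose', 'unusually frequent']
```

Returning the list of triggered reasons, rather than a bare yes/no, mirrors how reporting entities must justify each STR to the regulator.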

Indications of Money Laundering

Recurring warning signs have been recorded in the money laundering case studies that surfaced after investigations.

Conclusion

Anti-money laundering and counter-terrorist financing are the responsibility of every business and every employee in a country. Strict regulatory requirements came into force as a result of the adverse effects of money laundering and terrorist financing on the global economy. Fraudsters who undermine the legitimacy of financial institutions should be tackled by all means. The very first step is the scrutiny of organizations against AML policies and procedures. Governments can impose heavy criminal and civil penalties for violations of regulatory obligations. 

Ignorance of the law is no excuse.

Facial Recognition to Reshape the Retail Industry in 2020

The explosion of facial recognition technology in our smartphones – for instance in the iPhone X – is gradually accelerating the adoption of this biometric technology. The technology is not only going to reshape business operations but will change the way we live and shop. Counterpoint Research predicts that more than 1 billion smartphones will incorporate facial recognition technology, meaning 64% of smartphones in 2020 will have it, compared to 5% in 2017.

Developments in Facial Recognition Technology

Facial recognition technology has been around for some years now. However, the advancements in artificial intelligence and machine learning algorithms have introduced new revolutions in this technology. Recent developments in technology are not limited to unlocking your phone – though people are loving the idea of password-less authentication. 

With exceptional speed and ease, this face-driven technology is enhancing security while making access to public places more convenient and personalized. Moreover, through facial recognition, a unique biometric token (face ID) enables a frictionless and personalized user experience – whether at an airport, a hotel or a shopping mall. These evident user benefits are making the technology more acceptable in society. 

Lifting Public Security Through Technology

Security has always been a major concern for businesses, both online and physical. In brick-and-mortar retail stores, wherever people gather there is always a security threat, triggering concerns for shoppers and owners alike. The major challenge businesses face is balancing security and customer experience. To achieve both at the same time, retailers need an efficient solution.

Surveillance cameras with integrated facial recognition technology are an effective way to address security concerns in public shops. Retailers can ensure that only authorized entities have access to their services and facilities. Moreover, the presence of any suspicious or threatening person can be reported immediately so that action can be taken.

Moreover, the inclusion of facial recognition technology in consumer products is encouraging more people to accept this technology for surveillance. 

Prevent Theft and Eliminate Chargebacks

Theft seems like a retailer’s problem, but it flows down to customers as well. Losses from theft are often passed on to customers in the form of higher prices or shortages of certain products. Seeing such trends and outcomes, retailers are using facial recognition technology to guard themselves and their customers against shoplifting.  

Shoplifting and theft aren’t limited to physical retail stores. With the explosion of e-commerce platforms, fraudsters have found another opportunity: exploiting user accounts to make fraudulent purchases. These frauds result in chargebacks for retailers. To combat fraudulent chargebacks, online face verification is a proactive solution that authenticates authorized customers at login and at checkout.

Personalizing the Customer Experience

Face biometrics are the unique keys unlocking a contactless, personalized customer experience in the retail industry, and they will continue to open more doors in the near future. According to Markets and Markets Research, the facial recognition market is expected to grow to $6.84 billion by 2021, up from $3.35 billion in 2016. 

From the retailers’ point of view, real-time facial recognition solutions not only enhance the customer experience but also positively impact the bottom line. For example:

  • Alerting the staff when some VIP customer enters the store
  • Enabling staff to offer personalized sales and services to their customers.
  • Enhancing customer service protocols, etc.

A single face ID can take the customer experience to the next level. Whenever a customer enters the shop, the camera can scan the person’s face and alert the staff, allowing them to greet customers by name and personalize their check-in experience. The best part about this technology is that it can be integrated with existing systems, making it an easy and cost-effective solution.

Bringing online and offline experiences face to face

The eruption of online shopping and e-stores is enabling retailers to offer their customers real-time recommendations and suggestions. The systems do this automatically by gathering customer activity data and using it to predict customer behavior.

For instance, by gathering a customer’s geolocation, gender, age group, search history, and purchased products, the system can analyze the pattern and come up with products that may interest the user. Through a facial recognition system, merchants can easily map their customers’ online and offline behavior, allowing them to provide customers with boundary-free shopping navigation.

For example, retailers are coming up with an innovative shopping process that allows customers to order products – like groceries – online instead of shopping physically, and collect them from the shop. In-store facial recognition and verification cameras can alert the staff to compile the order as soon as the customer arrives. This saves shoppers time and effort, enhancing the customer experience.

Express better understanding

Facial recognition software is advancing every day. It now collects not only a person’s data – like name, demographics, age, gender, etc. – but also facial expressions and how long a shopper takes to buy a particular product. Artificial intelligence and machine learning algorithms are making the technology smart enough to read expressions and understand what a shopper’s expression conveys. 

One of the retail giants, Walmart, has already introduced facial recognition technology that can capture the expressions of customers standing in the checkout line to analyze their level of satisfaction or dissatisfaction. Moreover, these findings and the collected data enable retailers to refine in-store displays and real-time promotions that could attract customers.

Try Before You Buy

In brick-and-mortar stores, customers have the opportunity to try products, e.g. clothes. But what about e-stores? To address this limitation, businesses are actively using facial recognition technology along with augmented and virtual reality to introduce an exceptional virtual experience for their customers. Examples of such businesses include Amazon and Sephora.

Sephora Virtual Artist 

Sephora, one of the largest beauty retailers, is famous for its exceptional customer experience. Stepping into the technological world, Sephora is also making active use of facial recognition technology and boosting its sales. Through augmented reality and real-time face verification, it is improving the customer shopping experience.

Sephora’s AR solution, known as Sephora Virtual Artist, empowers customers to try makeup products virtually in real-time. Just by showing their face to the camera, customers get a live 3D experience. This technology is taking the shopping experience to a whole new level, eventually resulting in better sales.

Amazon’s Virtual Changing Room

Amazon, one of the renowned retail giants, has faced an increased clothing return rate because of online shopping; customers weren’t satisfied with the color or style of the products. Taking this trend into account, Amazon is looking for new ways to provide its online customers with a “try-before-you-buy” experience.

With its virtual changing room app, Amazon is setting new shopping standards for customers. The facial recognition technology scans the customer’s face and body and, using the customer’s interests, choices and preferences, the app presents real-time recommendations and suggestions. Through the app, the customer can also virtually try on clothes and see how a dress will look. 

This not only enhances the customer experience but also leads to more precise, accurate and smarter purchase decisions.

The Way Forward

To sum up, facial recognition technology is playing an essential role in the retail industry in enhancing the customer shopping experience. As per the Market Research Report, the global facial recognition market is expected to reach $8.93 billion by 2022, growing at a CAGR of 19.68%. This research suggests that the technology is going to shape the retail industry beyond our imagination.

Facebook Built a Facial Recognition App To Identify Employees

Facebook is again under fire for its controversial facial recognition technology. An internal tool used facial recognition in real-time to identify people just by pointing a smartphone camera at them.  

Business Insider was the first to break the story; according to it, the app was in use between 2015 and 2016 and has since been discontinued. According to anonymous sources, the tool could pull up a person’s Facebook profile using facial recognition. The tool was allegedly tested on Facebook employees and their friends who had facial recognition activated on their profiles.  

Facebook confirmed the existence of the app but denied that it could identify people on the social network. Facebook said in a statement to CNET: 

“As a way to learn about new technologies, our teams regularly build apps to use internally. The app described here were only available to Facebook employees, and could only recognize employees and their friends who had face recognition enabled.”

The app was developed before the Cambridge Analytica scandal. Facebook has been the subject of multiple scandals over the last few years, and for this reason, the Delete Facebook movement continues to gain pace. Back in October, it was reported that Facebook was working on AI that could circumvent facial recognition and help fight deepfakes.

Adobe, Twitter and NYT Team Up Against Deepfakes

Deepfakes, whether synthetic or manipulated, are rising at an alarming rate. They can cause immense damage: fake videos during an election period can circulate false information, which can result in chaos and even loss of life. To combat image manipulation and deepfakes, Adobe has joined forces with Twitter and The New York Times. 

During its Max conference in Los Angeles, Adobe introduced a new experimental feature that instantly lets you know if an image has been digitally manipulated. The feature also lets you undo the edits. The tool, called ‘About Face’, allows you to upload an image, then runs a detection algorithm to check whether the image was tampered with.


Adobe general counsel, Dana Rao, said about the deepfakes and transparency, 

“When it comes to the problem of deepfakes, we think the answer really is around ‘knowledge is power’ and transparency. We feel if we give people information about who and what to trust, we think they will have the ability to make good choices.”

About Face also tells you the probability that the image was manipulated. Rather than looking at the image as a whole, like a face detection algorithm, the tool examines individual pixels, so it can also show which parts of the image it thinks were altered. The user is given a heatmap of all the altered regions. About Face seems especially designed to detect changes made by Photoshop’s Liquify tool: you can see where pixels have been stretched, squished and interpolated. 

You can check out About Face in more detail at Adobe’s blog post here.

Facebook’s New AI Can Help You Circumvent Facial Recognition

Facial recognition technology is primarily used to detect and identify people, but in a turn of events, Facebook has created a tool that fools facial recognition into wrongly identifying someone. Facebook’s Artificial Intelligence Research team (FAIR) has developed an AI system that can “de-identify” people in pre-recorded videos and even in live video in real-time. 

The system uses an adversarial autoencoder paired with a trained facial classifier. An autoencoder is an artificial neural network that learns an encoding of a data set in an unsupervised manner; a classifier maps input data to categories, and the face classifier works with data from facial images and videos. The system slightly distorts a person’s face in such a way as to confuse facial recognition systems while maintaining a natural look that remains recognizable to humans. The AI doesn’t have to be retrained for different people or videos, and it introduces only a little time distortion. 

According to Facebook, “Recent world events concerning advances in, and abuse of face recognition technology invoke the need to understand methods that deal with de-identification. Our contribution is the only one suitable for video, including live video, and presents a quality that far surpasses the literature methods.” 

Over the past few years, deepfake videos, in which a person’s face is edited into videos of other people, have become common. These videos have become so convincing and advanced that it can be difficult to tell the real ones from the fakes.  

This de-identification program is built to protect people from such deepfake videos. 

In the paper, researchers Oran Gafni, Lior Wolf, and Yaniv Taigman discussed the ethical concerns of facial recognition technology. Due to privacy threats and the misuse of facial data to create misleading videos, the researchers decided to focus on video de-identification. 

In principle, it works much like face-swap apps: a slightly warped computer-generated face, built from past images of a person, is superimposed on their real one. As a result, they look like themselves to a human, but a computer cannot pick up the vital bits of information it could extract from a normal video or photo. You can watch this in action in this video here. 

According to the team, the software beat state-of-the-art facial recognition and was the first one to have done so on a video. It could also preserve the natural expressions of the person and was capable of working on a diverse variety of ethnicities and ages of both genders. 

Even though the software is extremely compelling, don’t expect this to reach Facebook anytime soon. Facebook told VentureBeat that there were no intentions to implement the research in its products. But with that said, the practical applications of the research are pretty clear. The software could be used to automatically thwart third parties using facial recognition technology to track people’s activity or create deepfakes. 

This research comes at such a time when Facebook is battling a $35 billion class-action lawsuit for alleged misuse of facial recognition data in Illinois. Facebook isn’t the only one working on de-identification technology. D-ID recently released a Smart Anonymization platform that allows clients to delete any personally identifying information from photos and videos. 

Remarkable Progress in Facial Recognition Revealed by a New Study

Could you have imagined a few years ago that you could be identified from your brain scan? The idea seems striking and close to impossible, but a new study has proven it possible. According to new research by Mayo Clinic researchers, facial recognition has become so advanced that it can identify you from your MRI scan. The study used facial recognition software to match photos of volunteers with medical scans of their brains. 

How was the Study Conducted? 

MRI (Magnetic Resonance Imaging) is an imaging procedure that produces pictures of the body to observe anatomy and physiological processes. It is used to recognize conditions of the brain and spinal cord, including aneurysms and strokes. The resulting image of a brain MRI usually includes an outline of the head, with skin and fat but not bone or hair, and covers the entire head, including the patient’s face. While a face on an MRI scan is blurry, imaging technology has advanced to the point that the face can be reconstructed from the scan and then matched to an individual with facial recognition software. 

The study was published in The New England Journal of Medicine on October 24, 2019. It used 84 volunteers and was conducted after the researchers noticed the high quality of the images used to study the brains of patients with Alzheimer’s and dementia. 

After the participants agreed to volunteer, a team led by Christopher Schwarz, a computer scientist at Mayo Clinic, photographed their faces. A computer program was separately used to reconstruct faces from the MRIs. Commonly available facial recognition software was then employed, and it correctly matched 70 of the 84 subjects with their MRI images, a success rate of 83%. 
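Matching a photo to a face reconstructed from an MRI boils down to comparing face embeddings and picking the closest gallery entry. The sketch below illustrates the idea with toy three-dimensional embeddings and cosine similarity; real systems use deep-network embeddings with hundreds of dimensions, and the vectors and IDs here are invented for the example:

```python
import math


def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


def best_match(probe, gallery):
    """Return the gallery ID whose embedding is most similar to the probe."""
    return max(gallery, key=lambda pid: cosine_similarity(probe, gallery[pid]))


# Toy embeddings: a photo embedding vs. embeddings reconstructed from MRI scans.
gallery = {"patient_1": [0.9, 0.1, 0.2], "patient_2": [0.1, 0.8, 0.3]}
print(best_match([0.85, 0.15, 0.25], gallery))  # patient_1
```

A 1-nearest-neighbour scheme like this always returns *some* match, which is why studies such as this one report how often the top match is the correct patient (here, 83% of the time).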

The Privacy Threat is Real 

 

Although this was a simple test with a small number of participants, it is still a worrisome advancement. According to Schwarz, the development poses risks to patients who have had MRIs, such as exposing family medical history, certain diseases and genetic information. The privacy threat is real, according to Dr Michael Weiner of the University of California, San Francisco. 

Facial recognition has reached new heights, as this study shows. Unfortunately, privacy laws are not advancing as fast as the technology. With this study, it becomes apparent that patients’ medical records can easily identify them. Facial recognition technology is also used by police for surveillance, and banks and financial institutions use it for security and verification. This wide range of uses warrants privacy laws that ensure the technology is used for mankind, not against it. 

Facial Recognition: Worries About the Use of Synthetic Media

In 2019, 4.4 billion people were connected to the internet worldwide, a rise of 9% from the previous year, as recorded by the Global Digital 2019 report. As the world shrinks to the size of a digital screen in your palm, the relevance of AI-backed technologies can hardly be overstated. Mobile applications took over marketplaces, cloud storage replaced libraries, and facial recognition systems became the new ID. 

On the flip side, this has also exposed each one of us to a special kind of threat that is as intangible as its software of origin: the inexplicable loss of privacy. 

AI-powered surveillance, in the form of digital imprints, is a worrying phenomenon that is fast taking center stage in technology conversations. Facial recognition is now closely followed by facial replacement systems capable of thwarting the very basis of privacy and public anonymity. Synthetic media, in the form of digitally altered audio, video, and images, is known to have impacted many people in recent times. As the largest threat to online audiovisual content, deepfakes are going viral, with more than 10,000 videos recorded to date. 

As inescapable as facial technology seems, researchers have found ways to knock it down using adversarial patterns and de-identification software. However, the onus falls on the enablers of the technology, who must now outpace the rate at which perpetrators are learning to abuse facial recognition for their own interests. 

Trending Facial Recognition Practices 

Your face is your identity. Technically speaking, that has never been truer than it is today. 

Social media, healthcare, retail & marketing, and law enforcement agencies are amongst the leading users of facial recognition databases that stock countless images of individuals for various reasons. These images are retrieved from surveillance cameras embedded with the technology, and from digital profiles that can be accessed for security and identification purposes. 

As a highly controversial technology, facial recognition is now being subjected to strict regulation. Facebook, the multi-billion dollar social media giant, has been penalized for its facial recognition practices several times by legal authorities. Privacy Acts accuse it of misusing public data and disapprove of its data collection policies.

In popular use is Facebook’s Tag Suggestions feature, which uses biometric data (facial scanning) to detect users’ friends in a photo. Meddling with the private affairs of individual Facebook users, the face template developed with this technology is stored and reused by the server several times, mostly without consent. While users can turn off face scanning at any time, the uncontrolled use of the feature exposes them to a wide range of associated threats. 

Cautions in Facial Replacement Technology

 

As advanced as the technology may be, it has its limitations. In most cases, the accuracy of identification is a leading concern among critics, who point to the possibility of wrongly identifying suspects. This is especially true for people of color, whom the US government has found to be wrongly identified by the best facial algorithms at rates five to ten times higher than whites. 

For instance, facial recognition software, when fed a single photo of a suspect, can match up to 50 photos from the FBI database, leaving the final decision to human officials. In most cases, image sources are not properly vetted, further dampening the accuracy of the technology in use. 

De-identification Systems

 

Businesses are rapidly integrating facial recognition systems for identity authentication and customer onboarding. But while the technology itself is experiencing rampant adoption, experts are also finding a way to trick it. 

De-identification systems, as the name suggests, seek to mislead facial recognition software and trick it into wrongly identifying a subject. It does so by changing vital facial features of a still picture and feeding the flawed information to the system. 

As a step forward, Facebook’s AI research firm FAIR claims to have achieved a new milestone by using the same face replacement technology for a live video. According to them, this de-identification technology was born to deter the rising abuse of facial surveillance. 

Adversarial Examples and Deepfakes

 

Facial-recognition-fooling imagery in the form of adversarial examples also has the ability to fool computer vision systems. Researchers at Carnegie Mellon University found that wearable gear such as sunglasses printed with adversarial patterns can trick software into identifying faces as someone else. 

A group of engineers from KU Leuven in Belgium has attempted to fool AI algorithms built to recognize faces simply by using printed patterns. Printed patches on clothing can effectively make someone virtually invisible to surveillance cameras.

Currently, these experiments are limited to specific facial software and databases, but as adversarial networks advance, the technology and expertise will not remain in a few hands. In the current regulatory scenario, it is hard to say who will win the race: the good guys, who use facial recognition systems to identify criminals, or the bad guys, who catch on to the trend of de-identification and use it to fool even the best technology. 

AI researchers of the Deepfake Research Team at Stanford University have delved deeper into the rising trend of synthetic media, examining existing techniques used to create deepfakes, such as erasing objects from videos, generating artificial voices, and mirroring body movements. 

This exposure to synthetic media will change the way we perceive news entirely. Using artificial intelligence to deceive audiences is now a commonly learned skill. Face swapping, digital superimposition of faces on different bodies, and mimicking the way people move and speak can have wide-ranging implications. Deepfake technology has already been used in fake pornography videos, political smear campaigns, and fake news scares, all of which have damaged reputations and social stability. 

 

Humans Ace AI in Detecting Synthetic Media

 

The unprecedented scope of facial recognition has opened up a myriad of problems. Technology alone can’t win this war. 

Why Machines Fail 

 

Automated software can fail to detect a person entirely, or display incorrect results because of tweaked patterns in a deepfake video. Essentially, this happens because the way machines and software understand faces can be exploited.

Deep learning models, which power facial recognition technology, extract information from large databases and look for recurring patterns in order to learn to identify a person. This entails measuring scores of data points on a single face image, such as the distance between the pupils, to reach a conclusion.
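The landmark-distance idea can be sketched in a few lines. The coordinates below are hypothetical, and real systems learn far richer features than raw distances, but the principle of turning a face into a measurable feature vector is the same:

```python
import numpy as np

# Hypothetical 2-D landmark coordinates (in pixels) for one face image.
# Real systems detect dozens of such points automatically.
landmarks = {
    "left_pupil":  np.array([120.0, 95.0]),
    "right_pupil": np.array([180.0, 96.0]),
    "nose_tip":    np.array([150.0, 140.0]),
    "mouth_left":  np.array([130.0, 175.0]),
    "mouth_right": np.array([172.0, 176.0]),
}

def pairwise_distances(points):
    """Euclidean distance between every pair of landmarks.

    The resulting set of measurements acts as one simple 'faceprint'
    that can be compared across images of the same person.
    """
    names = sorted(points)
    return {
        (a, b): float(np.linalg.norm(points[a] - points[b]))
        for i, a in enumerate(names)
        for b in names[i + 1:]
    }

features = pairwise_distances(landmarks)
print(features[("left_pupil", "right_pupil")])  # inter-pupil distance in pixels
```

Because the final decision boils down to such measurements, anything that systematically distorts them, a printed adversarial pattern or a deepfake, can push the comparison the wrong way.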

Cybercriminals and fraudsters can exploit this weakness by blinding facial recognition software to their identity without having to wear a mask, thereby escaping any consequences. Virtually everything that uses AI to carry out tasks is now at risk, as systems designed to do a specific job can easily be misled into making the wrong decision. Self-driving cars, bank identification systems, medical AI vision systems, and the like are all at serious risk of being misused. 

Human Intelligence for Better Judgement

 

Currently, there is no tool available for the accurate detection of deepfakes. Unlike an algorithm, humans can be trained to spot altered content online and stop it from spreading. An AI arms race coupled with human expertise will determine which technological solutions can keep up with such malicious attempts. The latest detection techniques will therefore need to combine artificial and human intelligence. 

By this measure, artificial intelligence reveals undeniable flaws that stem from the abstract analysis it relies on. In comparison, human comprehension surpasses its digital counterpart, identifying more than just pixels on a face. 

As a consequence, hybrid approaches offered by leading identification software tackle this issue with great success. Wherever machine-learned algorithms fail, humans can promptly identify a face and perform valid authentication. 

In order to combat digital crimes and secure AI technologies, we will have to awaken the detective in us. Telling a fake video from a real one takes real judgment and intuitive skill, as well as the right training. Currently, we are not well equipped to judge audiovisual content, but we can learn to detect doctored media and verify content based on source, consistency, confirmation, and metadata. 

However, as noticed by privacy evangelists and lawmakers alike, the necessary safeguards are not built into these systems. And we have a long way to go before relying on machines for our safety. 

 

Facial Recognition: Burgeoning Threat to Privacy

The expanding use of facial recognition technology for ID verification, user authentication, and accessibility is finally coming under fire from privacy evangelists worldwide. Proponents of digital privacy are talking about user consent, data context, transparency in data collection, data security, and lastly accountability. Adherence to strict principles of privacy, as well as free speech, entails proper regulation aimed at controlled use of facial technology. 

Facial scanning systems are used for a variety of purposes: facial detection, facial characterization, and facial recognition. As a major pillar of digital identity verification, facial authentication serves as a means of confirming an individual’s identity, storing critical user data in the process. The technology keeps this trade-off worthwhile by giving users broader access to digital platforms and greater awareness of how their data is collected.

The Digital ID Market: A Snapshot

Digital identity verification is changing the way companies work. In Europe alone, the identity verification market is expected to grow 13.3% annually from 2018 to 2027, by which time it will have reached US$4.4 billion. By 2030, the McKinsey Global Institute puts the value added by digital identification at 3 to 13 percent of GDP for countries implementing it.
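Taking the article's figures at face value, a quick back-of-the-envelope check shows what they imply about the market's starting size (assuming simple annual compounding over the nine years from 2018 to 2027):

```python
# Sanity-check the market figures: growing at 13.3% per year from 2018
# to 2027 (nine compounding periods) to reach US$4.4 billion implies a
# 2018 base of roughly US$1.4 billion, i.e. the market roughly triples.
cagr = 0.133
years = 2027 - 2018            # 9 annual compounding periods
target_2027 = 4.4              # US$ billions, figure from the article

implied_2018_base = target_2027 / (1 + cagr) ** years
print(round(implied_2018_base, 2))  # roughly 1.43
```

The derived base year figure is not stated in the article; it follows only if the 13.3% rate is an annual compound rate, which is the usual convention for such market forecasts.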

 

At the same time, cybersecurity threats are also on the rise, indicating a glaring need for enhanced security solutions for enterprises. According to Juniper, cybercrime caused $2 trillion in losses in 2019 alone. By 2021, Forbes predicts, this amount will triple as more and more people find ways to mask their identities and engage in illicit activities online. 

As a direct consequence, the cybersecurity market is also expected to grow into a humongous $300 billion industry, as reported in a press release by Global Market Insights. 

As technological advancement accelerates, this figure will likely grow in proportion to the mounting threats in cyberspace, for individuals and enterprises alike. 

Facial Recognition Data Risks

 

Formidable forces tug at the digital user from both ends of the digital spectrum. Biometric data, while allowing consumers to access a wide range of digital services with little friction, also poses serious risks that they may or may not be aware of. 

Facial recognition data, if misused, can lead to risks that consumers are generally unaware of, for instance:

  1. Facial spoofs
  2. Diminished freedom of speech 
  3. Misidentification 
  4. Illegal profiling

Much has been said about the use of facial recognition technology in surveillance by law enforcement agencies. At airports, public events, and even schools, facial profiling has led to serious invasions of privacy that are increasingly gaining public attention. While most users are happy to use services like face tagging and fingerprint scanning on their smartphones, privacy activists are springing into action as knowledge and reporting of data breaches grow.

Let’s dig deeper into one of the most potent cybersecurity threats linked to facial recognition technology: deepfakes. 

How Deepfakes Impact Cybersecurity

 

In the world of digital security, deepfakes pose a brand new threat to industries at large. To date, there are 14,678 deepfake videos on the internet. As barriers to the use of AI are lowered, adversaries gain the same access to advanced technological capabilities as regulators. High rates of phishing attacks are targeting financial institutions, service providers, and digital businesses alike. Enterprise reputations are at risk, as deepfakes are fully capable of altering video and audio without being detected. 

This has profound security implications for biometric identity verification processes, which will find it harder to confirm the true presence of a customer. 

With the pervasive use of evolving technology, cybercriminals will find it easier to access sophisticated tools, and nearly anyone will be able to create deepfakes of people and brands. This means higher rates of identity theft, cyber fraud, and smear campaigns against public personalities and reputable brands. 

For facial identification software, this means false positives created by deepfake technology can help cybercriminals impersonate virtually anyone in a database. Cybersecurity experts are rushing to integrate better technological solutions, such as audio and video detection, to mitigate the impact of deepfake crimes. More subtle features of a person’s face will be recorded in order to detect impersonators. 
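Under the hood, many verification systems compare a probe image's embedding (a numeric vector produced by a deep network) against an enrolled template; a deepfake that lands close enough to a stored template crosses the acceptance threshold. The sketch below uses made-up random embeddings and an arbitrary threshold, not any vendor's actual pipeline:

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two embeddings; 1.0 means identical direction."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 128-dimensional face embeddings. Real systems derive
# these from a deep network; here they are random vectors.
rng = np.random.default_rng(1)
enrolled = rng.normal(size=128)                  # template stored at enrollment
genuine = enrolled + 0.1 * rng.normal(size=128)  # same person, new capture
impostor = rng.normal(size=128)                  # unrelated face

THRESHOLD = 0.7  # acceptance threshold, chosen for illustration only

for name, probe in [("genuine", genuine), ("impostor", impostor)]:
    sim = cosine_similarity(enrolled, probe)
    verdict = "accept" if sim >= THRESHOLD else "reject"
    print(name, round(sim, 3), verdict)
```

The danger of a convincing deepfake is precisely that its embedding can sit close to the enrolled template, so the "impostor" vector stops looking random and crosses the threshold like the genuine capture does.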

However, it is impossible to turn a blind eye to the raging speed at which generative adversarial networks are making deepfakes harder to detect. According to experts, the underlying AI technology that supports the proliferation of such impersonation crimes is what will fuel more cyber attacks. 

Blockchain technology might also help in authenticating videos. Again, the success of this solution also depends on validating the source of the material, without which any individual or enterprise is at high risk of being maligned. 
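One way such authentication could work, sketched with Python's standard hashlib: the publisher records a cryptographic fingerprint of the original video at release time, and anyone can later re-hash a copy and compare. The ledger itself is omitted here; the published digest is simply held in a variable:

```python
import hashlib

def fingerprint(video_bytes: bytes) -> str:
    """SHA-256 digest of the raw file; any edit changes the digest."""
    return hashlib.sha256(video_bytes).hexdigest()

# At release, the publisher anchors this digest in a tamper-evident
# ledger (a blockchain entry, in the scheme described above).
original = b"...raw bytes of the original video file..."
published_digest = fingerprint(original)

# Later, a viewer verifies a copy against the published digest.
tampered = original.replace(b"original", b"deepfake")
print(fingerprint(original) == published_digest)   # True: authentic copy
print(fingerprint(tampered) == published_digest)   # False: altered content
```

As the article notes, this only proves a copy matches what was anchored; it says nothing about whether the anchored source was trustworthy in the first place.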

Implications Across Users

 

Gartner warns enterprises about relying on biometric approaches to identity verification, as spoof attacks continue to riddle the digital security landscape. While popular celebrities can be exploited through the misuse of their facial identity in pictures and videos, large corporations are also at high risk of being targeted.

Sensational announcements about the company or industry trends can lead to stock scares and other financial repercussions. Fake news and misinformation have the potential to cause meltdowns in political landscapes. Additionally, doctored videos on social media can cause an uproar among certain demographics, leading to social unrest. 

Identity Verification Technology – A Win-Win Approach

 

With more and more companies using digital onboarding solutions, the threat of deepfakes is real and must be effectively countered. Companies are no longer looking only for identity solutions that make the best use of customer biometrics. Instead, they now have an increasing interest in how the stored information is safeguarded against burgeoning cyber threats. 

The first step in countering digital impersonation crimes is awareness that they are possible. Enterprises and professionals need to be apprised of the rising misuse of digital verification software and the likelihood of personal data being compromised. 

Face swapping technologies must now be matched with face detection software that helps identify fake videos and misleading content. In addition, digital security solutions must be ramped up, especially those involving the use of sensitive client data. 

Biometric authentication and liveness detection solutions

 

Liveness detection, as an added feature of facial recognition, provides an efficient defense against deepfakes as fraudulent attempts to bypass biometric identification with old photos or videos increase. The same technology behind deepfakes can also be employed to counter fraud and spoof attacks, ensuring that personal data is not compromised for cybercrime. 

Differentiating between spoofs and real users becomes easier as additional layers of security are added to the verification process. Users are required to appear in front of a camera and capture a selfie or a live video. 

Shufti Pro performs biometric analysis to validate true customer presence, with markers that check for eyes, hair, age, and colour texture differences. Coupled with micro-expression analysis, 3D depth perception, and human face attribute analysis, this ID verification process ensures maximum protection against digital impersonators. 
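One widely used liveness signal is blink detection via the eye aspect ratio (EAR), which drops sharply when the eye closes across consecutive video frames. This is a generic technique from the liveness-detection literature, not necessarily Shufti Pro's exact method, and the landmark coordinates below are hypothetical:

```python
import numpy as np

def eye_aspect_ratio(eye):
    """Eye aspect ratio from six 2-D landmarks (p1..p6).

    p1 and p4 are the eye corners; p2, p3 outline the upper lid and
    p6, p5 the lower lid. A dip in EAR across consecutive frames
    signals a live blink rather than a static photo held to the camera.
    """
    p1, p2, p3, p4, p5, p6 = (np.asarray(p, dtype=float) for p in eye)
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return vertical / (2.0 * horizontal)

# Hypothetical landmark positions (pixels) for an open and a closed eye.
open_eye   = [(0, 5), (10, 0), (20, 0), (30, 5), (20, 10), (10, 10)]
closed_eye = [(0, 5), (10, 4), (20, 4), (30, 5), (20, 6),  (10, 6)]

BLINK_THRESHOLD = 0.2  # typical cutoff used in the EAR literature
print(eye_aspect_ratio(open_eye) > BLINK_THRESHOLD)    # True: eye open
print(eye_aspect_ratio(closed_eye) > BLINK_THRESHOLD)  # False: blink frame
```

A static photo or a replayed video that never blinks will fail this check, which is why liveness detection pairs well with the biometric matching described above.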
