
Blind Spot Audit

Surface the fraud your IDV already approved.

Runs On Your Cloud

No Data Sharing

No Contract Required


Deepfake Detection

Check where deepfake IDs slipped
through your stack.


Liveness Detection

Find the replay gaps in your passed
liveness checks.


Document Deepfake Detection

Spot synthetic documents hiding in
verified users.


Document Originality Detection

Stop fake documents before they pass.


  • Introducing Blind Spot Audit: Spot AI-generated forgeries with advanced document analysis. Run Now on AWS.
  • Introducing Deepfake Detection: Detect deepfakes with precision your stack has missed. Run Now on AWS.
  • Introducing Liveness Detection: Detect spoofs with technology built for sophisticated fraud. Run Now on AWS.
  • Introducing Document Deepfake Detection: Spot AI-generated forgeries with advanced document analysis. Run Now on AWS.
  • Introducing Document Originality Detection: Verify document authenticity before your next audit. Run Now on AWS.

    New Study by NIST Reveals Biases in Facial Recognition Technology


    The National Institute of Standards and Technology (NIST) recently published a study on how race, age, and sex affect the performance of facial recognition software. The results showed significant biases against people of color and women.

    Patrick Grother, the report’s primary author and a NIST computer scientist, said in a statement:

    “While it is usually incorrect to make statements across algorithms, we found empirical evidence for the existence of demographic differentials in the majority of the face recognition algorithms we studied.”  

    In other words, the algorithms misidentified people of color more often than white people, and women more often than men.

    The study is robust: NIST evaluated 189 software algorithms from 99 developers, “a majority of the industry,” using federal government data sets that contain roughly 18 million images of more than 8 million people.

    The study evaluated how well those algorithms perform in both one-to-one matching (verifying a photo against a single stored image, as in unlocking a phone) and one-to-many matching (searching a database for a match). Some algorithms were more accurate than others, and NIST carefully notes that “different algorithms perform differently.”

    In the one-to-one matching scenario, false-positive rates were higher for Asian and African American faces than for Caucasian faces. In some instances the effect was dramatic, with misidentification rates up to 100 times higher for Asian and African American faces than for their white counterparts.

    The study also showed that the algorithms produced more false positives for women than for men, and more for the very young and the elderly than for middle-aged faces. In the one-to-many scenario, African American women had higher rates of false positives.
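To make the metric concrete, here is a minimal sketch of how a per-group one-to-one false-positive rate can be computed. The function, group names, and scores below are purely illustrative assumptions, not NIST's actual methodology or data: a false positive occurs when a non-matching ("impostor") pair scores above the decision threshold.

```python
from collections import defaultdict

def false_positive_rates(impostor_pairs, threshold):
    """Compute a false-positive rate per demographic group.

    impostor_pairs: iterable of (group, similarity_score) for pairs of
    images that are known NOT to be the same person. A false positive
    occurs when such a pair still scores at or above the threshold.
    """
    totals = defaultdict(int)  # impostor comparisons per group
    fps = defaultdict(int)     # false positives per group
    for group, score in impostor_pairs:
        totals[group] += 1
        if score >= threshold:
            fps[group] += 1
    return {g: fps[g] / totals[g] for g in totals}

# Hypothetical impostor scores for two groups (illustrative only).
pairs = [
    ("group_a", 0.91), ("group_a", 0.40), ("group_a", 0.35), ("group_a", 0.20),
    ("group_b", 0.30), ("group_b", 0.25), ("group_b", 0.10), ("group_b", 0.15),
]
rates = false_positive_rates(pairs, threshold=0.8)
# group_a: 1 of 4 impostor pairs cleared the threshold (0.25); group_b: 0.0
```

Comparing the resulting per-group rates at a fixed threshold is the kind of demographic differential the NIST report quantifies.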

    The study also found that algorithms developed in Asian countries did not show the same drastic false-positive disparity between Asian and Caucasian faces in one-to-one matching. According to NIST, this demonstrates the impact that the diversity of training data, or the lack of it, has on the resulting algorithm.

    “These results are an encouraging sign that more diverse training data may produce more equitable outcomes, should it be possible for developers to use such data,” Grother said.


    Related Posts

      • Meta Blocks 544,000+ Accounts Under Australia’s Social Media Ban
      • Ireland Calls for Compulsory ID Verification on Social Platforms Across the EU
      • France Targets Under-15 Social Media Use With Mandatory Age Verification For 2026
      • Malaysia Aims for 95% Public Service Integration With MyDigital ID By 2030
      • Germany’s eID Under Scrutiny as EU Digital Identity Wallet Deadline Nears
      • Federal Judge Blocks Louisiana Social Media Age Verification Law Ahead of Enforcement
      • Department of Education Says New ID Checks Blocked $1B in Student Aid Fraud Linked to “Ghost Students”

    Take the next steps to better security.

    Contact us: Get in touch with our experts. We'll help you find the perfect solution for your compliance and security needs.

    Request demo: Get free access to our platform and try our products today.