Jumio 2024

Online Identity Study

Global Consumer Research

About the Research


Now in its third year, the annual Jumio Online Identity Consumer Study explores consumer awareness and sentiment around issues involving online identity, fraud risks, and current methods used to protect consumer identity data.

This year’s results highlight significant concerns among consumers about the risks associated with generative AI and deepfakes, including the potential for increased cybercrime and identity fraud. The study demonstrates the pressing need to ensure that users are genuine.

Total Respondents:
8,077 adult consumers

Sectors Studied:

Financial Services, Government, Healthcare, Social Media, Sharing Economy, Travel/Hospitality, Retail/Ecommerce, Telecoms, Mobility Services, and Online Gaming and Gambling

Countries Studied:

Mexico, the U.S., the UK, and Singapore (respondents split evenly across countries)

72% of consumers worry daily about being fooled by a deepfake, and they want their government to do more to regulate AI.

Nearly three-quarters (72%) of consumers worry daily about being fooled by a deepfake into handing over sensitive information or money.

Consumers who worry about deepfakes on a daily basis:


Only 15%* of consumers said they’ve never encountered a deepfake, while 60% have encountered a deepfake within the past year and 22%* are unsure.


A significant majority of consumers call for more governmental regulation of AI to address the issues around deepfakes and generative AI. However, regulatory trust varies globally.

I think my government’s laws around AI don’t go far enough:
I have faith in my government’s ability to regulate AI:

Consumers continue to overestimate their own ability to spot deepfakes.

Even with high anxiety around this increasingly prevalent and ever-evolving technology, consumers continue to overestimate their own ability to spot deepfakes: 60% believe they could detect a deepfake, up from 52% in 2023.

Consumers confident in their ability to spot a deepfake:


Men were more confident than women in their ability to spot a deepfake (66% versus 55%).

Men aged 18-34
were most confident (75%)

Women aged 35-54
were least confident (52%)


Stronger identity verification is needed to protect against costly and prevalent identity theft and fraud.



Fraud is an all-too-familiar issue for many consumers across the globe, with 68%* of respondents reporting that they know or suspect they've been a victim of online fraud or identity theft, or that they know someone who has been affected.


  • One-third (32%*) of consumers who were or suspected they were a victim of online fraud said it caused significant problems and several hours of administrative work to resolve, and 14%* went as far as calling it a traumatic experience.
  • More than 70% of consumers said they'd spend more time on identity verification if those measures improved security in industries including financial services (77%), healthcare (74%), government (72%), retail and ecommerce (72%), social media (71%), the sharing economy (71%), and travel and hospitality (71%).
  • When creating a new online account, global consumers said taking a picture of their ID and a live selfie would be the most accurate form of identity verification (21%*), with creating a secure password coming in a close second (19%*).

Regardless of whether they’ve been a victim of fraud or identity theft, most consumers worry daily about falling victim to data breaches and account takeover attacks.

Consumers who worry daily about online data breaches:
Consumers who worry daily about their account being taken over by a hacker:


*Data points marked with an asterisk are not net figures; all other data points on this page reflect net figures.