Understanding the Limitations of Machine-Based Decisions for Online Identity Verification

Is AI Enough?

Accelerating digital transformation and the growing virtual banking market are driving increasing adoption of artificial intelligence (AI) technologies in the financial services sector to streamline operational workflows and better serve a broader set of customers.

A global study by PwC shows that 52% of the financial services industry is making substantial investments in AI. In Asia, a recent study conducted by The Hong Kong Monetary Authority (HKMA) reveals that almost 90% of the retail banks in Hong Kong are adopting or planning to adopt AI applications for customer-facing services and for optimising internal operational workflows.

While AI can help deliver greater efficiencies and better customer experiences, there are still challenges that financial organizations need to keep in mind when implementing AI and machine learning. Common challenges faced by banks include a lack of credible, high-quality data, a shortage of people with the right skill sets to design and develop AI applications, and limited transparency and accountability for the delivered outcomes.

Regulators are playing a more proactive role in setting governance frameworks to ensure the accountability, explainability and auditability of AI applications used within financial institutions (FIs), mitigating the risk of AI-driven bias in business decision making.

The HKMA, for example, has issued guidance to the banking industry on the development and use of AI applications. Although the guidelines are written for the banking industry in Hong Kong, the principles could serve as a reference framework for monetary authorities and regulators in other countries.

The HKMA guidance lists 12 high-level principles grouped under three key themes:

Governance

Bank boards and senior management should be accountable for all AI-driven decisions and should ensure that a proper governance framework and risk management measures are in place to oversee the use of AI applications within their institutions.

Application design and development

Banks should implement adequate controls and build in audit logs during the design phase of AI applications to track outcomes and ensure a sufficient level of explainability; relying on "black box" excuses is not acceptable. To achieve this, banks should ensure that their developers have the requisite competence and experience, not only to develop the models but also to understand the interplay between their algorithms and regulatory compliance.
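To make the explainability requirement concrete, here is a minimal sketch in Python of how an audit trail might be built in at design time, assuming a hypothetical score_application model and illustrative field names: every AI-driven decision is written to an append-only log together with the model version, a hash of the inputs, the score and the outcome.

```python
# A minimal sketch of decision audit logging, assuming a hypothetical
# score_application() model and illustrative field names; this is not any
# specific vendor's or regulator's prescribed implementation.
import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG_PATH = "ai_decision_audit.jsonl"  # append-only audit trail


def score_application(features: dict) -> float:
    """Stand-in for the bank's real model; returns a fraud-risk score."""
    return 0.12  # placeholder value for illustration


def audited_decision(application_id: str, features: dict, threshold: float = 0.5) -> str:
    score = score_application(features)
    decision = "DENIED" if score >= threshold else "APPROVED"
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "application_id": application_id,
        "model_version": "risk-model-v1.3",   # which model produced the outcome
        "input_hash": hashlib.sha256(          # traceability without storing raw PII
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "score": score,
        "threshold": threshold,
        "decision": decision,
    }
    with open(AUDIT_LOG_PATH, "a") as log:     # one JSON line per decision
        log.write(json.dumps(record) + "\n")
    return decision
```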

Another point to note is the quality and reliability of the data used for AI models. Because the quality of the training data affects the accuracy and performance of AI applications, banks should implement an effective data governance framework to ensure that the data used is of good quality and that no bias is built into AI-driven decisions. Bias can creep into AI models in several ways: many AI models start with public data sets, and many of these data sets are incomplete (the data does not reflect the real world) or have been influenced by human bias. If left undiscovered, bias in these data sets can affect the fairness and accuracy of AI decision-making.
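As a simple illustration of what such a data governance check might look like, the hedged sketch below flags ID types that are too sparsely represented in a training set to train on reliably; the id_type label and the sample-count floor are assumptions for illustration, not figures from the HKMA guidance.

```python
# A hedged sketch of a pre-training representation check; the id_type label and
# the sample-count floor are illustrative assumptions, not regulatory figures.
from collections import Counter

MIN_SAMPLES_PER_CLASS = 500  # illustrative floor


def check_representation(records: list[dict]) -> list[str]:
    """Return the ID types that are too sparse to train on reliably."""
    counts = Counter(r["id_type"] for r in records)
    underrepresented = [label for label, n in counts.items() if n < MIN_SAMPLES_PER_CLASS]
    for label in underrepresented:
        print(f"WARNING: only {counts[label]} samples for '{label}'; "
              f"model performance on this ID type is likely to be unreliable.")
    return underrepresented


# Example: a data set with only 11 Nigerian driver's licenses would be flagged.
records = [{"id_type": "NG_DRIVERS_LICENSE"}] * 11 + [{"id_type": "HK_ID_CARD"}] * 5000
check_representation(records)
```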

Ongoing monitoring and maintenance

AI applications continuously learn from live data and their model behaviour may therefore change after deployment. Periodic reviews and re-validation of the AI applications and any related services implemented by third-party vendors should be conducted to ensure the accuracy and appropriateness of the AI models.
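The sketch below shows one possible shape for such a periodic review, assuming a recent batch of human-verified outcomes is available as (predicted, actual) pairs; the drift tolerance is purely illustrative.

```python
# A minimal sketch of a periodic re-validation check, assuming a recent batch of
# human-verified outcomes is available as (predicted, actual) pairs; the drift
# tolerance is purely illustrative.
def revalidate(pairs: list[tuple[str, str]], baseline_accuracy: float,
               tolerance: float = 0.02) -> bool:
    """Return True if the model still performs within tolerance of its baseline."""
    if not pairs:
        raise ValueError("No labelled outcomes available for this review period")
    accuracy = sum(pred == actual for pred, actual in pairs) / len(pairs)
    if accuracy < baseline_accuracy - tolerance:
        print(f"Accuracy {accuracy:.3f} has drifted below baseline {baseline_accuracy:.3f}; "
              f"escalate for re-validation and a retraining review.")
        return False
    return True


# Example: compare this month's human-verified outcomes against last quarter's baseline.
revalidate([("APPROVED", "APPROVED"), ("DENIED", "APPROVED"), ("APPROVED", "APPROVED")],
           baseline_accuracy=0.95)
```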

Considering the data-intensive nature of AI applications and the exposure of this data to new cybersecurity threats, banks should also ensure compliance with data protection regulations across the jurisdictions in which they operate and ensure that all PII is properly encrypted in transit and at rest (within the data centers).
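As a minimal illustration of encryption at rest, the sketch below uses the open-source cryptography package to encrypt a single PII field before storage; in a real deployment the key would come from a managed key store or HSM rather than being generated in code.

```python
# A minimal sketch of field-level PII encryption at rest using the open-source
# `cryptography` package; key management (KMS/HSM) is deliberately out of scope.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, load the key from a managed key store
cipher = Fernet(key)


def encrypt_pii(value: str) -> bytes:
    return cipher.encrypt(value.encode("utf-8"))


def decrypt_pii(token: bytes) -> str:
    return cipher.decrypt(token).decode("utf-8")


document_number = "A1234567"           # hypothetical ID document number
stored = encrypt_pii(document_number)  # only the ciphertext is written to storage
assert decrypt_pii(stored) == document_number
```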

Using Informed AI for Better eKYC

Financial services firms are using AI to streamline processes in many areas. AI is increasingly being integrated into identity proofing technologies (used during online customer onboarding) for fraud detection, AML and KYC compliance, and risk scoring.

AI has enabled financial services firms to leverage big data and machine learning to automate ID verification tasks and deliver more streamlined digital onboarding experiences. However, given the current state of AI, banks should be cautious about deploying fully automated identity verification solutions when accurately assessing the digital identities of new users is crucial. Failing to verify a user's digital identity correctly from the start can have serious repercussions for the subsequent eKYC processes.

In these use cases, a hybrid approach to online identity verification that leverages informed AI is recommended. AI can undoubtedly drive operational efficiency and perform identity verification with a relatively high level of accuracy, but we see it as complementing humans rather than replacing them entirely. AI is driven by machine learning, which makes decisions based on probabilities, not absolutes. Human agents can provide feedback to the algorithms, based on their knowledge of which outcomes were false positives or false negatives, to help continuously refine the AI models.

Some of the inherent limitations of fully automated (solely AI-driven) solutions arise when environmental “noise” (e.g., blur, dim or poor lighting, excessive glare) makes it difficult to read an ID document. The benefit of adding humans in the loop is that human agents can visually inspect the ID document and contend with these environmental factors to make better decisions, assisted by AI.

The other benefit of humans in the loop is that human agents can specify rejection reasons at a more granular level (e.g., the ID document was too blurry or a finger was obscuring parts of the ID document).
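A minimal sketch of such a hybrid flow is shown below. The confidence threshold, rejection reasons and the model and review-queue stubs are all hypothetical, intended only to illustrate how low-confidence cases can be escalated to human agents and how their verdicts and granular rejection reasons feed back into model training.

```python
# A minimal sketch of a hybrid (human-in-the-loop) verification flow. The model,
# review queue, threshold and rejection reasons are hypothetical stand-ins and
# do not represent any vendor's actual API or decision logic.
import random
from enum import Enum
from typing import Optional


class RejectionReason(Enum):
    DOCUMENT_TOO_BLURRY = "ID document image too blurry to read"
    DOCUMENT_OBSCURED = "Part of the ID document is covered (e.g., by a finger)"
    EXCESSIVE_GLARE = "Glare prevents reading key fields"


REVIEW_THRESHOLD = 0.90        # illustrative cut-off, tuned per deployment
feedback_log: list[dict] = []  # stand-in for a labelled-feedback store


def model_confidence(image_id: str) -> float:
    """Stand-in for the real model; returns a verification confidence score."""
    return random.random()


def human_review(image_id: str) -> tuple[str, Optional[RejectionReason]]:
    """Stand-in for a review queue; agents return a verdict and a granular reason."""
    return "DENIED", RejectionReason.DOCUMENT_TOO_BLURRY


def verify(image_id: str) -> dict:
    score = model_confidence(image_id)
    if score >= REVIEW_THRESHOLD:
        return {"decision": "APPROVED", "reviewed_by": "ai", "confidence": score}
    # Low-confidence cases are escalated; the agent's verdict and granular reason
    # become labelled feedback used to refine the model in the next training cycle.
    decision, reason = human_review(image_id)
    feedback_log.append({"image": image_id, "ai_confidence": score,
                         "human_decision": decision,
                         "reason": reason.value if reason else None})
    return {"decision": decision, "reviewed_by": "human",
            "reason": reason.value if reason else None}


print(verify("doc-001"))
```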

The accuracy and completeness of the data used to train AI algorithms also influence machine-based decisions. Many companies and vendors use off-the-shelf data sets to train their AI algorithms when setting up an eKYC system. The problem with this approach is that the data sets are not real-world production data, and in many cases the IDs have been improperly tagged, which introduces bias into any AI models derived from these data sets. These data sets are also often too small to effectively train the models. For example, if a data set contains only 11 Nigerian driver’s licenses, that is not sufficient to properly train the models for that particular ID type.

Jumio offers eKYC-as-a-Service powered by informed AI

As the leading AI-powered trusted identity as a service provider, Jumio provides bank-grade security within its online identity verification platform to help banks and financial institutions worldwide fulfill stringent compliance requirements for identity verification, KYC/AML and data privacy.

Provide the required expertise to our banking customers

Operating a modern eKYC system requires specific skill sets, including expertise in online identity fraud and ID document manipulation, reference data, user experience workflow design and the technical knowledge to tune AI systems. When a bank's business expands beyond a single market, handling the verification of foreign ID documents exponentially increases the complexity, costs and resources required. Jumio offers a full suite of eKYC services, from identity proofing and fraud checks to AML screening, which has helped greatly reduce our banking customers' workload and need for internal resources.

Ensure accountability and explainability of AI-driven decisions

Jumio offers a hybrid eKYC service model option that combines informed AI with verification experts — humans in the loop — who effectively check the work of the AI algorithms to deliver a definitive verification outcome of “APPROVED VERIFIED” or “DENIED.”

We also provide specific rejection codes explaining why a verification was not approved as well as standard reporting of every verification transaction to support audits and explainability of the AI-driven outcomes.

Ensure data quality and relevancy

Unlike many vendors in our space, Jumio leverages only real-world production data to build its algorithms. Jumio has processed over 225 million verifications spanning more than 3,500 types of ID documents from over 200 countries and territories, which gives us a big leg up in developing smarter algorithms. All tagging is performed by trained, seasoned identity verification specialists. By tagging every identity verification, we train our AI in real time and continuously improve the quality and accuracy of our algorithms.

Ensure data protection compliance

Jumio has been engaged by many of the leading banks and financial institutions as an outsourced partner to fulfill a significant portion of their digital onboarding process. Jumio is audited annually and certified to PCI DSS Level 1 standards to ensure the security and encryption of the PII we collect during our online identity verification process.

Periodic reviews and audits of AI applications

Companies are encouraged to review the performance of their AI models on an ongoing basis to ensure and improve system accuracy. Jumio conducts monthly audits of our verification transactions, with human oversight built into the process, to ensure applications continue to perform as intended, enabling our banking customers to stay compliant with ongoing monitoring requirements.

Contact us to learn how Jumio can help regulated financial services fulfill regulatory guidelines and standards when designing their eKYC processes.
