AI Is the Final Blow for an ID System Whose Time Has Passed

Last month, the world got a preview of a looming catastrophe: the use of artificial intelligence (AI) to bypass antiquated identity and security systems. The news outlet 404 Media reported the discovery of an “underground” service called OnlyFake that created and sold fake IDs for 26 countries through Telegram. One of 404’s reporters then used an OnlyFake ID to bypass the “know your customer” (KYC) process of the crypto exchange OKX.

There’s nothing terribly new there, except that OnlyFake claims it uses AI to create the bogus documents. 404 Media wasn’t able to confirm that claim, but OnlyFake’s deep-discount pricing suggests it may well be true.

Either way, this should be a wake-up call: It’s only a question of when, not if, AI tools will be used at scale to bypass identity controls online.

New AI-Enabled Tools for Online Identity Fraud

The scariest thing about AI-generated fake IDs is how quickly and cheaply they can be produced. The OnlyFake team was reportedly selling AI-generated fake driver’s licenses and passports for $15, claiming they could produce hundreds of IDs simultaneously from Excel data, totaling up to 20,000 fakes per day.

A flood of cheap, convincing fake physical IDs would leave bars, smoke shops and liquor stores inundated with fake-wielding teenagers. But there would be some chance of detection, thanks to anti-fraud features like holograms, UV images, and microtext that are now common on physical ID cards.

But OnlyFake’s products are tailored for use online, making them even more dangerous. When a photo of a physical ID is submitted online, holograms and the other physical anti-fraud measures are rendered useless. OnlyFake even generates fake backdrops to make its images look like photos of IDs snapped with a cell phone.

One tentative method of making online identity verification more secure is live video checks, but new generative video technologies like OpenAI’s Sora are already undermining that approach. Deepfakes are frighteningly effective even in one-on-one situations, as when a finance staffer was tricked out of $25 million by ‘deepfake’ versions of their own colleagues.

With critical services moving online en masse, digital fraud is becoming even more professionalized and widespread than the offline version.

The Numbers Don’t Add Up, But They Don’t Have To

You might wonder how these generated fakes work without real driver’s license or passport numbers. If you submit an AI-generated driver’s license number for verification at a crypto exchange or other financial institution, wouldn’t the identity database immediately flag it as a fake?

Well, not exactly. Police and other state entities can almost always access ID records directly, but those systems don’t give third parties easy access to their databases, partly out of privacy concerns. As a result, many verification systems simply can’t ask the issuing agency whether a driver’s license or ID is valid, which is how 404 Media was able to use an AI-generated card to fool OKX.

A KYC provider might instead rely on third-party data brokers for matches, or on pattern-based alphanumeric verification: judging whether an ID number is valid by whether it fits the pattern of letters and numbers a given issuer uses.

This would make such systems particularly vulnerable to AI fakes since detecting and reproducing patterns is where generative AI shines.
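To make that concrete, here is a minimal sketch in Python of what a purely pattern-based check looks like. The format it enforces (one uppercase letter followed by seven digits) is invented for illustration and does not correspond to any real issuer’s scheme.

```python
# A minimal sketch of pattern-based alphanumeric verification.
# The license format below is a made-up example, not any real issuer's scheme.
import random
import re
import string

LICENSE_PATTERN = re.compile(r"[A-Z]\d{7}")  # hypothetical issuer format


def looks_valid(number: str) -> bool:
    """Format check only: it never asks the issuing agency whether this ID exists."""
    return LICENSE_PATTERN.fullmatch(number) is not None


def generate_fake() -> str:
    """Anything that has learned the pattern can mass-produce passing numbers."""
    return random.choice(string.ascii_uppercase) + "".join(
        random.choices(string.digits, k=7)
    )


fake = generate_fake()
print(fake, looks_valid(fake))  # e.g. "Q4821937 True": passes, though never issued
```

A format rule can catch typos, but it has no way to tell a never-issued number from a real one, which is exactly the gap a generative fake slips through.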

The OnlyFake case is just one example of a growing fraud problem that exploits flaws in our identity systems. The U.S. government has estimated losses of $100 billion to $135 billion from pandemic unemployment insurance fraud, much of it perpetrated with false identities. Even scarier, identity fraud has enabled a rise in fake doctors, whether selling fake treatments online or practicing in American hospitals.

We can do better.

How Do We Fight AI Identity Fraud?

It’s clearly time to develop a new kind of identification credential: a digital ID built for the internet and resistant to AI mimicry. An array of formats and standards is already being adopted for this new kind of digital ID, such as mobile driver’s licenses (mDLs) and digital verifiable credentials.

At the core of these digital credentials are counterparts to the holograms and other measures that let a bartender verify your physical ID. That includes cryptographic signature schemes, similar to what the White House is reportedly considering to distinguish official statements from deepfakes. These cryptographic attestations rely on secret keys drawn from a space of roughly 10 to the 77th power possible values (about 2 to the 256th), making them computationally infeasible for an AI, or anyone else, to guess or forge.
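For a concrete picture of why signatures are so hard to fake, here is a minimal sketch using Ed25519 from Python’s cryptography library. The credential fields and issuer setup are illustrative assumptions, not the layout of any specific mDL or verifiable-credential standard.

```python
# A minimal sketch of an issuer-signed credential using Ed25519.
# Field names and payload shape are illustrative only.
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The issuing agency holds the private key; verifiers only need the public key.
issuer_key = Ed25519PrivateKey.generate()
issuer_public_key = issuer_key.public_key()

# Hypothetical credential payload; real formats carry far more structure.
credential = json.dumps(
    {"document_number": "D1234567", "birth_year": 1990, "issuer": "Example DMV"},
    sort_keys=True,
).encode()
signature = issuer_key.sign(credential)  # created once, at issuance


def verify(payload: bytes, sig: bytes) -> bool:
    """Accept the credential only if the issuer's signature checks out."""
    try:
        issuer_public_key.verify(sig, payload)
        return True
    except InvalidSignature:
        return False


print(verify(credential, signature))           # True: genuine credential
forged = credential.replace(b"1990", b"2005")  # an AI can fake the text...
print(verify(forged, signature))               # False: ...but not the signature
```

An AI can generate a pixel-perfect document image or a plausible-looking payload, but it cannot produce a signature that verifies against the issuer’s public key without the issuer’s secret key.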

However, new approaches are not without new risks. While digital ID systems may promise to prevent fraud and provide convenience, they must be implemented carefully to enshrine privacy and security throughout our infrastructure. When implemented without consideration for the necessary policy and data protection frameworks, they may introduce challenges such as surveillance, unwanted storage of personal information, reduced accessibility or even increased fraud.

Fortunately, many mitigations exist. For example, these digital IDs can be bound to physical devices using security chips known as secure elements, which require the same device to be present whenever the credential is used. That makes digital IDs much harder to steal than copying a file or leaking a key from cloud storage. The technology can also be paired with privacy and accessibility laws to ensure safe and simple usage.
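Device binding is typically enforced with a challenge-response flow: the verifier sends a fresh random value, and only the key held by the bound device can sign it. The sketch below simulates that flow in software; in a real deployment the key would be generated inside the secure element and could never be exported.

```python
# A minimal sketch of device binding via proof of possession.
# The device key is simulated in software here purely for illustration.
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()    # in practice, created inside the secure element
device_public_key = device_key.public_key()  # embedded in the issued credential

# The verifier sends a fresh random challenge so old responses can't be replayed.
challenge = os.urandom(32)

# Only the device that physically holds the key can produce this signature.
response = device_key.sign(challenge)

try:
    device_public_key.verify(response, challenge)
    print("Presented from the bound device")
except InvalidSignature:
    print("Rejected: not signed by the bound device")
```

A stolen copy of the credential file is useless on its own, because it cannot answer the verifier’s challenge without the original device.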

This new kind of ID also makes it easier for users to choose what data they reveal. Instead of handing a liquor store clerk a printed ID that shows everything, imagine being able to prove only that you’re over 21, without sharing your address or even your specific birthday. That flexibility alone would greatly increase privacy and security for individuals and society as a whole.
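One common way to deliver that flexibility is selective disclosure through salted hash commitments, in the spirit of formats like SD-JWT. The sketch below is a simplified, assumption-laden illustration: real credentials also sign the digest list and bind it to the holder’s device.

```python
# A minimal sketch of selective disclosure via salted hash commitments.
# Claim names and structure are illustrative, not any specific standard.
import hashlib
import json
import os


def digest(salt: bytes, name: str, value) -> str:
    """Commitment to one claim: a hash over (salt, claim name, claim value)."""
    return hashlib.sha256(salt + json.dumps([name, value]).encode()).hexdigest()


# Issuance: every claim gets its own random salt and digest. Only the digests
# would go into the signed credential; the salted claims stay with the holder.
claims = {"name": "A. Example", "address": "123 Main St", "age_over_21": True}
salted = {name: (os.urandom(16), value) for name, value in claims.items()}
signed_digests = {digest(salt, name, value) for name, (salt, value) in salted.items()}

# Presentation: the holder reveals only the single claim the clerk needs.
salt, value = salted["age_over_21"]
disclosure = {"name": "age_over_21", "value": value, "salt": salt}

# Verification: recompute the digest and confirm the issuer committed to it,
# learning nothing about the undisclosed claims.
recomputed = digest(disclosure["salt"], disclosure["name"], disclosure["value"])
print(recomputed in signed_digests)  # True: over-21 proven, address never shared
```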

Privacy-preserving technology would also make it safer to verify a driver’s license number directly with the issuer, potentially rendering OnlyFake’s AI-generated fake ID numbers useless.

We’ve already sacrificed too much—in safety and in plain old dollars—by sticking with physical IDs in a digital world. Transitioning to a modernized, privacy-preserving, digital-first system would be a major blow to fraudsters, money launderers and even terrorists worldwide. It’s time.


About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.