Should businesses fear the impact of the fake AI ID?
By Jimmy Roussel, CEO, IDScan.net
Is AI anti-fraud’s greatest adversary, or its most significant ally, in the years to come? For identity verification, business risk mitigation, and customer protection, the answer is both.
Businesses experience a range of fraudulent identities coming through their doors. Some criminals attempt to bypass security systems with obviously fake documentation, caught right away by the human eye. Others are equipped with high-quality fake IDs and increasingly clever tactics, which can only be detected with specialized authentication hardware and software. Cheaper production methods, foreign imports, and darkweb knowledge-sharing between fraudsters have accelerated the availability of high-quality fake identity documents, and therefore the ease with which criminals can execute complex identity scams.
The impact of ID fraud is impossible to assess in its entirety, given the sheer number of business sectors it touches and the fact that it is not always caught or reported. Even taking average business losses of 5% per year as a baseline, the true cost is likely much larger than currently reported, with Javelin research suggesting the figure is upwards of $50 billion per year for US businesses.
Rise of the AI ID
In 2025, most fake IDs look genuine to the human eye. Recognizing the threats they face, leaders in all manner of industries, from retail and financial institutions to car dealerships and casinos, are seeking methods to combat this issue. In the US, organizations are deploying digital and physical ID verification methods to stop fraudsters using fake IDs.
Businesses are recognizing the need to stay ahead of the threat landscape they are intrinsically part of, paying particular attention to three key problem areas:
- As fraud methods evolve, verification solutions must be up to the challenge. Outdated systems can and will be fooled by new methods, harming a business’s reputation and impacting customer safety and spending habits.
- The accessibility of fraudulent identities is rising, with darkweb marketplaces becoming more and more prominent. Our recent exploration of these markets found AI-generated ID images readily available for as little as $5.
- Consumers are fearful of new fraud methodologies, chiefly related to rising confusion around AI. Recent IDScan.net research from 2024 highlighted that 78% of consumers pointed to the misuse of AI as their core fear around identity protection, while 55% believe current technology isn’t enough to protect our identities. Businesses cannot sit idle while AI damages consumer trust, as impacts to bottom lines will swiftly follow.
The reality of using AI to generate fake IDs is rather basic – at least compared to what most businesses envision when they think of AI and identity fraudsters joining forces. Darkweb suppliers rely on PDF417 and ID image generators, using varying degrees of automation to match data inputs onto a contextual background. Easy-to-use tools such as Thispersondoesnotexist make it simple for even low-skilled fraudsters to combine a quality fake ID image with a synthetic identity.
However, businesses must acknowledge consumer fears and adapt their security processes not for where AI IDs are right now, but for how they will rapidly evolve in the future. By demonstrating that existing solutions are up to the challenge, businesses can put customers at ease and protect their bottom lines.
With this in mind, IDScan.net took to the dark web and purchased 200 ID images, putting them to the test against the latest in identity verification solutions.
So how effective are AI IDs at committing fraud?
Catch rates for non-AI generated IDs processed through IDScan.net’s proprietary and third-party checks average 95%. In our study, we caught 99.6% of AI-generated fake IDs.
Our analysis revealed that the AI being used to create these fraudulent IDs is not yet able to compete with the sophistication of our identity verification systems. Put simply, AI IDs struggle with the finer details that systems specifically look for, including the differing templates and data syntax across states. The sheer variety of state requirements weighs on AI-ID efforts: each state’s ID has its own system for encoding personal data into the barcode, and even the slightest discrepancy is enough for ID verification systems to identify a fake. When the data is in the incorrect order or format, it is a clear indication that the ID is fraudulent.
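To make that concrete, here is a minimal Python sketch of the kind of barcode syntax check described above. The AAMVA element IDs used (DAQ for the license number, DCS and DAC for names, DBB for date of birth, DBA for expiration) are real, but the per-state required fields and license-number patterns shown are simplified placeholders for illustration, not IDScan.net’s actual rules.

```python
import re

# Hypothetical jurisdiction-specific expectations: which barcode fields must be
# present and what format the license number (DAQ) should follow. These rules
# are illustrative placeholders only.
STATE_RULES = {
    "AZ": {"required": {"DAQ", "DCS", "DAC", "DBB", "DBA"}, "daq_pattern": r"^[A-Z]\d{8}$"},
    "NY": {"required": {"DAQ", "DCS", "DAC", "DBB", "DBA"}, "daq_pattern": r"^\d{9}$"},
}

def parse_elements(barcode_text: str) -> dict:
    """Split raw PDF417 payload lines into {element_id: value} pairs (simplified)."""
    elements = {}
    for line in barcode_text.splitlines():
        line = line.strip()
        if len(line) >= 3 and line[:3].isalpha():
            elements[line[:3].upper()] = line[3:]
    return elements

def barcode_discrepancies(barcode_text: str, claimed_state: str) -> list[str]:
    """Return a list of syntax discrepancies; an empty list means none were found."""
    rules = STATE_RULES.get(claimed_state)
    if rules is None:
        return [f"no rules loaded for {claimed_state}"]
    elements = parse_elements(barcode_text)
    problems = [f"missing field {f}" for f in rules["required"] if f not in elements]
    daq = elements.get("DAQ", "")
    if daq and not re.match(rules["daq_pattern"], daq):
        problems.append(f"license number format does not match {claimed_state} expectations")
    # US AAMVA barcodes encode dates as MMDDCCYY; a malformed value is another red flag.
    dob = elements.get("DBB", "")
    if dob and not re.match(r"^(0[1-9]|1[0-2])(0[1-9]|[12]\d|3[01])(19|20)\d{2}$", dob):
        problems.append("date of birth is not in MMDDCCYY format")
    return problems
```

A genuine barcode passes every rule for its claimed state; a single missing field or mis-formatted value is enough to flag the document for closer inspection.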
Additionally, AI-generated fake ID images often come with specific digital behaviors that signal to verification systems that a document may not be genuine, causing it to be flagged as suspicious. These signals can trigger a deeper review of the ID image and face match, unearthing fake IDs through a multi-layered approach to security.
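A minimal sketch of that escalation logic might look like the following. The check shown is a hypothetical stand-in; a production pipeline would plug in far richer signals (capture metadata, image forensics, face match) at the deep-review stage.

```python
from typing import Callable

# Illustrative sketch of a layered review flow, not IDScan.net's pipeline.
# Cheap checks run on every document; any flag escalates the document to
# deeper (and more expensive) checks such as tamper analysis and face match.

Check = Callable[[dict], list[str]]  # inspects document data, returns flag messages

def layered_review(document: dict,
                   cheap_checks: list[Check],
                   deep_checks: list[Check]) -> dict:
    flags = [msg for check in cheap_checks for msg in check(document)]
    escalated = bool(flags)
    if escalated:
        flags += [msg for check in deep_checks for msg in check(document)]
    return {"genuine": not flags, "escalated": escalated, "flags": flags}

# Example of a cheap first-pass signal: the surname printed on the card (read
# via OCR) should agree with the surname encoded in the PDF417 barcode.
def printed_name_matches_barcode(document: dict) -> list[str]:
    if document.get("ocr_last_name", "").upper() != document.get("barcode_last_name", "").upper():
        return ["printed surname does not match barcode surname"]
    return []

result = layered_review(
    {"ocr_last_name": "SMITH", "barcode_last_name": "SMYTHE"},
    cheap_checks=[printed_name_matches_barcode],
    deep_checks=[],  # tamper detection and face match would plug in here
)
print(result)  # flagged and escalated, so it would proceed to a deeper review
```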
However, as AI continues to evolve, it’s reasonable to expect that the next generation of AI IDs will be more effective than their predecessors.
Where are identity solutions winning?
Across our study, we found that 24% of AI IDs showed evidence of photo tampering. While not always immediately obvious to the human eye, verification systems identified even the smallest discrepancies. This is encouraging for businesses that have implemented a document tampering solution, but it also shows that barcode and OCR validation alone may not be enough to catch tampering. Here, we must stress the importance of not relying on a single system to quash the diversified threat of identity fraud.
Our tests also found a few other key areas that sound the alarm on an AI ID, including blurring across the document and the complete absence, or incorrect placement, of security features such as watermarks, which appear on genuine IDs in specific places and sizes according to each state’s template.
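As one concrete illustration of the blurring signal, a common technique is to measure edge response with the variance of the Laplacian: a uniformly soft, AI-generated or upscaled image tends to score low. The OpenCV snippet below is a simplified, standalone example; the threshold is a made-up placeholder, and real systems tune such values per capture device and combine this signal with template-specific security-feature checks rather than relying on it alone.

```python
import cv2

def blur_score(image_path: str) -> float:
    """Variance of the Laplacian: lower values indicate a blurrier image."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise ValueError(f"could not read image: {image_path}")
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def looks_suspiciously_soft(image_path: str, threshold: float = 100.0) -> bool:
    # Placeholder threshold; in practice this is calibrated against known-good captures.
    return blur_score(image_path) < threshold
```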
Lastly, AI IDs generally lack a sophisticated understanding of barcode data formatting. Every jurisdiction and document type has different expected fields. As things stand, AI is failing to properly replicate these factors in the majority of cases.
State by State
The diversification of identity documents throughout the US creates an ever-changing cycle of new IDs, introducing risk for businesses that may lack the knowledge or software to assess documents at a nationwide level.
Our study revealed that the quality of fake IDs, analyzed through various physical and software-based fraud prevention tools, varied state by state. While states such as New York, Texas, and Arizona see the most frequent use of fake IDs, according to IDScan.net’s 2023 and 2024 Annual Fake ID Reports, they were also the easiest to catch. We caught 100% of the fakes from these high-volume states.
The reason for this may come back to quality. In states where fake ID markets are more saturated, and fakes are therefore more common, fraudsters producing AI IDs may operate on smaller margins, turning out IDs more quickly and ultimately putting fewer resources behind a truly robust fake.
Along with the data syntax, AI fake ID generators struggle with the differing state design templates. Because IDs are designed at the state level, data placement and design elements differ greatly, are updated at different times, and must appear in exactly the correct location and size. If an ID claiming to be from Arizona has background design imagery from New Mexico’s ID, it can be instantly flagged as fraudulent.
AI IDs in the future
From our study, the immediate horizon for businesses concerned about the rise of fake AI IDs seems bright. On one hand, current systems are good enough to catch the large majority of AI-generated fakes; on the other, organizations already exploring mitigation strategies for this rising threat are one step ahead, and will stay diligent as the battle between fraudsters and the solutions built to stop them continues.
However, we must temper that optimism: AI IDs will improve quickly, and we must not be complacent. To mitigate the risk of ever-improving AI fraud, businesses must have AI-ready methods of their own, by way of best-in-class identity verification solutions.
Only time will tell how sophisticated AI IDs will become, but if we get ahead of the challenge now, we are better placed to mitigate their widespread risks in the future.