The infinite Loop #12
Financial Services Has a New Fraud Problem: AI That Looks Too Real

Srijan Nagar
Co-founder
·
Oct 15, 2025
I spent some time last week staring at two restaurant receipts.
Both were for the same amount. Both had the same restaurant logo, the same date, and the same timestamp. But one was fake. It took me about four minutes to spot which one. It shouldn't have taken more than one: I work in fintech; I see fraud every day.
My colleague created the fake one in ninety seconds using a free tool she found online. She changed one line item, adjusted the price, and added a service charge. And the job was done. The scary part was not how easy it was to create. The scary part was how hard it was to catch.
The quiet crisis
UK businesses confirmed 421,000 fraud cases in 2024. That's a 13% increase from the year before, the highest number on record. When regulators asked what changed, the answer kept pointing to the same thing: generative AI.
But the restaurant receipts are just the visible edge of something deeper. They're the canary in the coal mine: the thing you can see and hold and compare side-by-side. The rest of it is buried in data streams, verification systems, and underwriting processes.
The reality is that fraud has changed character. It used to be about breaking patterns, and now it's about perfecting them.
When fraud stops looking like fraud
Traditional fraud detection works by spotting anomalies. Transactions from an unusual location. A purchase amount that doesn't fit the profile. Behaviour that breaks from historical norms.
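To make that concrete, here is a minimal sketch of the anomaly-driven approach. The three-sigma threshold and the toy history are illustrative assumptions of mine, not any real system's configuration:

```python
from statistics import mean, stdev

def anomaly_score(txn_amount, history):
    """How far a transaction sits from this customer's historical norm,
    measured as a z-score against past transaction amounts."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return 0.0
    return abs(txn_amount - mu) / sigma

def is_suspicious(txn_amount, history, threshold=3.0):
    """Classic rule: flag anything more than `threshold` standard
    deviations away from the customer's usual spending."""
    return anomaly_score(txn_amount, history) > threshold

history = [42.0, 38.5, 51.0, 45.2, 40.1, 47.8]
print(is_suspicious(44.0, history))    # in-pattern purchase  -> False
print(is_suspicious(2500.0, history))  # wildly out of pattern -> True
```

The logic only fires when behaviour deviates, which is exactly why it goes quiet against fraud engineered to stay inside the envelope.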
That approach worked when fraud meant breaking the pattern.
AI fraud is different. It studies what normal looks like, then replicates it.
A lending company I know of started seeing loan applications that passed every verification check. The income documents looked correct. Employment histories were verified. Bank statements showed steady deposits and responsible spending. Credit scores were reasonable. The applications had everything their underwriting system wanted to see, and that was precisely the problem.
A closer look revealed sophisticated fabrications. Bank statements that were generated, not real. Income documents matched employer templates perfectly because they were created by tools trained on thousands of real templates. References existed but were part of coordinated networks.
None of it triggered their fraud detection systems because nothing looked wrong. The applications weren't outliers. They were far too perfect.
The new fraud playbook
Here's what makes this different: traditional fraud was opportunistic. Someone gets your card details, makes charges until you notice. Someone steals an identity, opens accounts, and moves fast.
AI fraud is patient. It builds profiles slowly. Creates thin credit files, ages them properly, and establishes payment history. Generates bank statements that show exactly the income-to-debt ratio lenders want. Fabricates employment records that match industry standards.
Consider also the case of document fraud. You used to be able to spot fake bank statements because the formatting was wrong, the fonts didn't match, and the math had errors. Now you're looking at PDFs generated by tools that have analysed thousands of real statements. They know exactly how each bank formats its data. They include the correct headers, the right transaction codes, and proper date formatting. They even add realistic irregularities, so it doesn't look too clean.
The tells aren't in the documents anymore. They're in the patterns across documents. The way the spending aligns a little too perfectly with what credit models reward. The absence of financial chaos that real people create.
What verification systems miss
Most fraud detection operates at the document level. Does this statement have the right format? Does this income figure match the employer database? Does this address check out?
Those questions still matter, but they're not enough. Because if the answer to all of them is "yes" but the person doesn't exist, your verification passed and your risk assessment failed.
The shift happening now is from verifying documents to verifying coherence. Not "does this bank statement look real?" but "was this financial life really lived, or just created on paper?"
Real people have messy finances. Irregular spending reflects actual life. Random ATM withdrawals. Forgotten subscriptions. Mistakes that were later corrected. Habits changed.
Synthetic profiles are too optimised. They display financial behaviour designed to pass scoring models, not live a life. The spending curves are too smooth. The income deposits are too regular. Nothing unexpected appears.
The difference is subtle. You can't write rules for it. You need systems that understand what authentic financial behaviour feels like, not just what it looks like.
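You can't write rules for it, but you can measure the symptom. Here is one way to act on that intuition, purely a sketch of my own and not anyone's production method: quantify how regular a deposit stream is, and treat implausible smoothness itself as the signal.

```python
from statistics import mean, pstdev

def coefficient_of_variation(values):
    """Relative dispersion: stdev divided by mean. Real income and spending
    are noisy (high CV); generated deposits tend to be eerily uniform (low CV)."""
    mu = mean(values)
    return pstdev(values) / mu if mu else 0.0

def too_smooth(deposits, min_cv=0.05):
    """Flag a deposit stream whose variation is lower than any plausible
    real-life pattern. The 0.05 cutoff is illustrative, not calibrated."""
    return coefficient_of_variation(deposits) < min_cv

real_person = [3100.0, 2870.5, 3412.9, 2995.0, 3303.4]  # messy, lifelike
synthetic   = [3000.0, 3000.0, 3001.0, 2999.0, 3000.0]  # suspiciously uniform
print(too_smooth(real_person), too_smooth(synthetic))   # -> False True
```

A single metric like this is trivially gamed once known, which is why the real answer is models over many such signals rather than any one rule.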
The detection gap
Here's the problem for financial services: the tools to generate convincing fraud are getting better faster than the tools to detect it.
A fake receipt took ninety seconds to create and four minutes to identify. That ratio is getting worse. Soon it'll be thirty seconds to create and impossible to identify without specialised tools.
The same applies across lending, underwriting, KYC processes, and transaction monitoring. The fraud is being generated by systems trained on millions of real examples. They know what passes. They know what verification looks for. They optimise for it.
Meanwhile, most detection systems are still running rule-based logic built for a previous generation of fraud. If this field doesn't match, flag it. If this ratio seems wrong, escalate it. If this pattern breaks norms, investigate it.
Those rules catch obvious fraud. They miss sophisticated fraud because sophisticated fraud doesn't break rules. It plays by them perfectly.
The arms race
There's a window right now where financial services companies can adapt. AI fraud is getting better, but detection systems still catch some of it. Manual review still works for edge cases.
That window is closing. Not in years. Months.
The companies moving now are building detection systems that think differently: not rule-based verification but pattern-based authenticity checking. That means using technology similar to what fraudsters use, but inverted. Machine learning trained not on what fraud looks like, but on what authenticity looks like. Systems that can spot when optimisation has replaced humanity.
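In miniature, that inversion looks something like this. The two features and the threshold below are toy assumptions of mine; a real system would use a proper one-class learner over far richer behavioural signals, but the shape of the idea, fit only on genuine profiles and score novelty, is the same:

```python
import math

def profile_features(amounts):
    """Two toy authenticity signals: relative variation (real life is noisy)
    and the share of suspiciously round amounts (generators love round numbers)."""
    mu = sum(amounts) / len(amounts)
    sd = math.sqrt(sum((a - mu) ** 2 for a in amounts) / len(amounts))
    cv = sd / mu if mu else 0.0
    round_share = sum(a % 100 == 0 for a in amounts) / len(amounts)
    return (cv, round_share)

class AuthenticityModel:
    """One-class novelty check: learn only from profiles known to be genuine,
    then score new profiles by distance to the nearest genuine one. No fraud
    examples needed, which is the inversion described above."""
    def __init__(self, genuine_profiles, threshold=0.25):
        self.reference = [profile_features(p) for p in genuine_profiles]
        self.threshold = threshold  # illustrative, not calibrated

    def looks_synthetic(self, amounts):
        f = profile_features(amounts)
        nearest = min(math.dist(f, r) for r in self.reference)
        return nearest > self.threshold

genuine = [
    [63.2, 112.9, 18.4, 240.0, 77.3],
    [12.5, 310.0, 88.1, 45.9, 150.7],
]
model = AuthenticityModel(genuine)
print(model.looks_synthetic([500.0, 500.0, 500.0, 500.0, 500.0]))  # too clean -> True
print(model.looks_synthetic([59.9, 130.5, 21.7, 215.0, 83.4]))     # lifelike  -> False
```

The design choice matters: because the model never sees fraud, it doesn't go stale when fraudsters change tactics; anything that drifts away from lived-in behaviour scores as novel.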
We've been thinking about this for a while at FinBox, particularly around bank statement analysis. The question isn't just "can we verify this statement is from a real bank," but "can we verify this represents a real person's financial life?" Those are different questions requiring different approaches.
What happens next?
The receipt problem will be solved. Companies will stop using reimbursements, switch to corporate cards, and build better expense controls. But the larger issue remains. AI can now generate financial artefacts that pass traditional verification. That capability exists permanently. The tools are available. The knowledge is spreading.
Which means every financial service that relies on document verification, every lending process that checks income statements, every KYC system that validates identity documents, all of them need to assume those documents might be synthetic.
The question isn't whether this will affect your business. It's whether you'll adapt before it does.
I think about those four minutes I spent comparing receipts, and how close I came to getting it wrong. If I hadn't been actively looking for fraud, I would have missed it.
That's the future most verification systems are heading toward. The fraud will be good enough that you can't spot it just by looking. You'll need systems that can see what you can't.
The good news: those systems exist now. The bad news: most companies aren't adopting them fast enough.
The clock is ticking.