Psst… We received a tip…

Banks Keep Getting Catfished

Know your customer (KYC), for banks, is a lot like dating.
You meet a stranger, check their socials, and quietly hope you’re not the one paying for their chaos later.

This bank shows up to the “date” five minutes early.
New app, smooth signup, fraud team watching dashboards in the back. Everything feels under control.

Then the customer appears.
Perfect smile in the selfie, 10/10 lighting, no “shot this in the car at 11 p.m.” vibe. The face passes KYC. The ID scan looks clean. The account opens and money starts to move.

At first, it looks like every other new user.
Some card spend. A few marketplace payouts. Nothing wild. Then the pattern flips: chargebacks stack up, refunds start flying, fake sellers cash out and vanish. By the time the fraud team really digs in, the trail is cold and the loss is booked.

That “customer” never existed.
The face came from a generative model that doesn’t take bad photos. It’s a mask built for passing quick checks.

This is the reality for banks and big marketplaces right now.
The perpetrators have AI-made faces while the bank has human-sized limits and tools that were trained on neat, staged images instead of the crooked smiles, half-blinks, greasy screens, and imperfect angles real people bring to a selfie.

The gap between those two worlds is expensive.
In the AI vs AI face war, the advantage goes to whoever actually invests in that messy, real-world expression data and wires it into onboarding. Everyone else is just hoping their next “perfect” customer isn’t another very pricey date.

Here’s the inside scoop

This week’s patent comes out of Shandong Artificial Intelligence Institute and Qilu University of Technology in China. It’s trying to solve the exact problem fraud teams quietly worry about. When a selfie looks perfect, how do you tell if there’s a real person behind it or a fraudster spinning faces in the cloud? The answer lines up neatly with our bank-on-a-bad-date story. 

Instead of hunting for tiny pixel glitches, the inventors train a system to understand how human expressions actually behave, then use that “expression sense” to spot impostors.

How the system works

First, they teach the model to read feelings, not fakes.
They start with a big emotion dataset of real faces and train a model called FERtrans to recognize expressions like happy, sad, angry, surprised, disgusted, and neutral.

On paper it sounds basic, but the important part is what happens inside the model. It learns tiny patterns, like how eyes, mouth, cheeks, and brows usually move together when a real person smiles or frowns. That becomes its gut instinct for “this looks like a human face.”
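
For the curious, here is a rough sketch of what an expression model in the spirit of FERtrans could look like. The patent write-up doesn't spell out the architecture here, so the patch-embedding-plus-transformer design, the sizes, and the six emotion labels below are our own stand-ins for illustration. The part that matters for the rest of the story is the features() method: the internal fingerprint the system reuses later.

```python
# A minimal, hypothetical sketch of a FERtrans-style expression classifier.
# Architecture, dimensions, and label set are assumptions for illustration only.
import torch
import torch.nn as nn

EMOTIONS = ["happy", "sad", "angry", "surprised", "disgusted", "neutral"]

class TinyFERTrans(nn.Module):
    def __init__(self, img_size=224, patch=16, dim=256, heads=4, layers=4,
                 num_classes=len(EMOTIONS)):
        super().__init__()
        # split the image into patches and embed each one
        self.patchify = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        n_patches = (img_size // patch) ** 2
        self.pos = nn.Parameter(torch.zeros(1, n_patches, dim))
        enc = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, num_layers=layers)
        self.head = nn.Linear(dim, num_classes)

    def features(self, x):
        # (B, 3, H, W) -> (B, dim): the "expression fingerprint" reused later
        p = self.patchify(x).flatten(2).transpose(1, 2) + self.pos
        return self.encoder(p).mean(dim=1)

    def forward(self, x):
        return self.head(self.features(x))

if __name__ == "__main__":
    model = TinyFERTrans()
    selfies = torch.randn(2, 3, 224, 224)        # two dummy "selfies"
    logits = model(selfies)                      # emotion scores, shape (2, 6)
    # training would be ordinary cross-entropy against emotion labels
    loss = nn.functional.cross_entropy(logits, torch.tensor([0, 3]))
    print(logits.shape, loss.item())
```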

Then they turn that expression brain into a deepfake detector.
They build a new dataset called AIR-Face. Tens of thousands of AI-generated faces labeled fake, plus tens of thousands of real faces labeled real. Every face in this set, human or synthetic, gets pushed through the same expression model. At this stage, the system doesn’t care if someone looks happy or sad. It just grabs the internal feature vector for each face, a kind of fingerprint for how that face appears to the model.

Now comes the clever bit.

Instead of stacking yet another giant classifier on top, they store all those feature vectors from AIR-Face in a library. For any new selfie, they run it through FERtrans, grab its feature vector, and ask the simple question, “which crowd does this look more like, the real faces or the fake ones?” They use cosine distance, which is basically just math for “how similar is this pattern to the patterns I already know?” If your new face lands near the cluster of fakes, it gets flagged. If it clusters with real faces, it gets a pass.
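
To make that lookup concrete, here is a tiny sketch of the "which crowd?" step. The feature vectors below are random stand-ins for the embeddings the expression model would produce (the features() call in the sketch above), and the "average similarity to the k closest neighbours" rule is our own simplification; the patent text here only tells us it compares cosine similarity against the stored real and fake fingerprints.

```python
# Hypothetical nearest-crowd check with cosine similarity.
# Library vectors are random stand-ins for real/fake face fingerprints.
import numpy as np

rng = np.random.default_rng(0)

def normalize(v):
    # unit-length vectors so a dot product equals cosine similarity
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Pretend library: 10,000 real-face fingerprints and 10,000 fake-face ones
real_lib = normalize(rng.normal(size=(10_000, 256)))
fake_lib = normalize(rng.normal(size=(10_000, 256)))

def classify(query_vec, k=5):
    """Flag a new face by asking which crowd its fingerprint sits closest to."""
    q = normalize(query_vec)
    real_score = np.sort(real_lib @ q)[-k:].mean()   # k most similar real faces
    fake_score = np.sort(fake_lib @ q)[-k:].mean()   # k most similar fake faces
    return "real" if real_score >= fake_score else "fake"

new_selfie_vec = rng.normal(size=256)   # embedding of an incoming selfie
print(classify(new_selfie_vec))
```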

The bigger picture

The fraud fight now is AI vs AI. One side spins up faces that never existed; the other has milliseconds to decide if that "person" gets to move money or open an account. Deepfake-driven scams alone are already estimated at about $12 billion in 2023, on track for $40 billion by 2027 if current trends hold. (Medium)

Face checks are just one front. The real moat is data across everything: how money moves, how devices show up, how accounts log in, how merchants ship and refund, even how people type and tap. Any one signal is weak. Collected and labeled over years, they’re what keep your fraud curve from blowing up.

Publishing the future

Fraud-as-a-service (FaaS) now powered by AI

Fraud is shifting toward a subscription business model. On the dark web, "fraud-as-a-service" (FaaS) shops now sell plug-and-play toolkits, such as deepfake selfie generators, synthetic ID builders, and prebuilt scam scripts that anyone with $20–$50 can buy. (Sumsub) Reports from identity-verification providers and banks say these FaaS platforms have industrialized cybercrime, driving triple-digit surges in AI-powered attacks across payments, crypto, and social media. (FinTech Magazine) Deepfakes already make up a significant proportion of biometric fraud attempts, and digital document forgeries have exploded as AI tools make fake IDs cheap to produce at scale. (Entrust)

If fraud is becoming a service, defense has to respond in kind. Over the next few years, the serious players will stop thinking in terms of "a fraud model" and start thinking in terms of "a fraud data platform." That means combining faces with everything else: device fingerprints, IP history, payment flows, login behavior, merchant patterns, even how users tap and scroll.
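
To show the shape of that idea, here is a deliberately naive toy. Every signal name, weight, and threshold below is invented; the point is simply that many weak signals fused into one risk score beat a single face check carrying all the weight.

```python
# Hypothetical signal fusion for a "fraud data platform". All values are made up.
signals = {
    "face_deepfake_score": 0.82,  # output of a detector like the patent's
    "device_reputation":   0.30,  # shared device-fingerprint intel
    "payment_velocity":    0.65,  # unusual payout / refund pattern
    "typing_cadence":      0.20,  # behavioural biometrics
}
weights = {"face_deepfake_score": 0.4, "device_reputation": 0.2,
           "payment_velocity": 0.3, "typing_cadence": 0.1}

risk = sum(weights[name] * score for name, score in signals.items())
print(f"risk={risk:.2f} ->", "manual review" if risk > 0.5 else "approve")
```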

How the institutes are responding

You can already see this future in how banks are teaming up. In the UK, major banks like HSBC, NatWest, Santander and Monzo are now sharing fraud intel with tech giants such as Amazon, Google and Meta through Stop Scams UK, after pilots showed they could get ahead of scams by pooling data and signals. (Computer Weekly)

Regulators and crime agencies are nudging the same way, arguing that when fraud accounts for over 40% of all reported crime, nobody can solve it in their own silo. (Computer Weekly)

The attackers rent tools, and the defenders will rent and share data. If you’re running a product with money flowing through it, decide whether you want to be downstream of fraud-as-a-service, or upstream with your own “data-as-a-service” defenses, and start plugging your signals into bigger shared graphs instead of guarding them like secrets (within regulations, of course!).

The patent press travels far and wide…

Extra! Extra! Read All About It!

Deepfake and AI-media detection startups are raising serious rounds, and identity verification as a whole is on track to become a tens-of-billions business by 2030. (Grand View Research)

Reality Defender sells deepfake and AI-media detection into banks, fintechs, and big enterprises. It recently expanded its Series A to $33 million to scale tools that scan images, audio, and video for synthetic content. (PR Newswire)

Doppel is riding the “AI-powered social engineering” wave. It raised a $70 million Series C in November 2025, bringing total funding to about $124 million and valuing the company at over $600 million as it builds defenses against voice, video, and chat-based scams. (PR Newswire)

On the voice side, Pindrop works with major banks and call centers to spot phone fraud and voice deepfakes. In 2024 it secured $100 million in new financing to grow its fraud and deepfake detection tech as deepfake call attempts surged more than 1,300%. (PR Newswire)

Italy’s IdentifAI raised €5 million in July 2025 to expand its deepfake-detection and content-authenticity platform across Europe and into the U.S. (Vestbee)

All of this aligns with a booming identity verification market. Analysts peg it at about $9.9 billion in 2022, heading toward roughly $30+ billion by 2030, driven by KYC, remote onboarding, and fraud compliance. (Grand View Research)

So when you look at a patent about catching AI-made faces, you’re not just seeing a niche trick. You’re looking at the R&D layer feeding into a fast-growing stack of products, including deepfake detection APIs, fraud-intel platforms, and IDV services that all sell the same thing in different wrappers… a cleaner answer to “Is this person actually real?”

The paper boy always delivers

This patent is a peek at how trust gets rebuilt when faces can be forged on demand. One model studies the tiny quirks of real human expression, a simple cosine-distance comparison sorts real from synthetic, and the whole pipeline flags impostors before they slip through the safeguards.

As more of our lives run through screens, the question “is this person real?” becomes a background check many products must answer. Then, verification turns into infrastructure. Fraud turns into software. And the line between identity and behavior gets thinner every year.

Read the sources:

US 2024/0378921 A1, Facial Expression-Based Detection Method for Deepfake by Generative AI · Published Nov 14, 2024 · Assignees: Shandong Artificial Intelligence Institute & Qilu University of Technology · Status Pending

CN116469151B, A generative AI face detection method based on facial expressions · Published Feb 02, 2024 · Assignees: Shandong Artificial Intelligence Institute & Qilu University of Technology · Status Active

A special message from the team

Psst…

Thank you for reading and a special thanks to Icehouse Ventures for featuring us in their newsletter!

Shoutout to our new subscribers who came from the feature, we’re glad you’re here :)

Know someone who loves to stay on top of the latest tech trends? Share this newsletter so they can be in on the hottest insights.

Got feedback? We are still hot off the press ourselves, so we love your feedback. All readers are welcome (we read every reply)!
