A new study reveals that the human brain's remarkable talent for face recognition is no match for today's most advanced AI face generators. Even people with exceptional face-memory skills, the so-called "super-recognisers" who make up just 1–2% of the population, perform only marginally better than everyone else at telling real photographs from synthetic ones. And almost everyone, regardless of skill level, is far too sure of their own accuracy.

The research, led by Dr James Dunn at UNSW Sydney and colleagues at the Australian National University, was published in the British Journal of Psychology. It tested 125 participants online, including 36 carefully screened super-recognisers and 89 control volunteers with typical or above-average face-recognition ability. Participants viewed a curated set of faces (obvious glitches and artefacts had already been removed) and decided whether each was a real photograph or an AI-generated image.
Results were sobering. The control group averaged just 50.7% accuracy, barely above chance. Super-recognisers did better, reaching 57.3%, but the edge was modest (Cohen's d = 0.55). What stood out was the overconfidence: every group rated its own performance far higher than its actual results justified.
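For readers unfamiliar with the effect size reported above, Cohen's d is simply the difference between two group means divided by their pooled standard deviation. A minimal sketch follows; the group means and sizes come from the study, but the standard deviations (12.0) are hypothetical placeholders, since the article does not report them:

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d between two groups, using the pooled standard deviation."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (mean2 - mean1) / math.sqrt(pooled_var)

# Controls: 50.7% accuracy, n = 89; super-recognisers: 57.3%, n = 36.
# The SD of 12.0 for both groups is an assumed value for illustration only.
d = cohens_d(50.7, 12.0, 89, 57.3, 12.0, 36)
print(round(d, 2))  # 0.55 with these assumed SDs
```

With equal assumed SDs the pooled SD is just 12.0, so d = 6.6 / 12.0 = 0.55, matching the effect size the study reports.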

The study also uncovered why modern AI faces are so deceptive. Using deep neural networks trained on human face identity, the researchers showed that synthetic faces are "hyper-average": they cluster more tightly at the centre of "face space" than real faces do. They look mathematically perfect, almost too ideal to be true. Super-recognisers appear to pick up on this subtle statistical cue and correctly label the hyper-average faces as artificial more often than controls do. This is the first clear mechanistic link between evolved face-processing expertise and the new challenge of detecting AI-generated identities.
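The hyper-average cue can be illustrated with a toy sketch: if faces are points in an embedding space, synthetic faces sit unusually close to the mean of that space, so distance-to-mean becomes a detection signal. The embeddings below are random stand-ins, not outputs of the study's networks; only the clustering pattern is taken from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "face space" (64 dimensions): real faces are spread widely around
# the mean, while synthetic faces cluster tightly near it.
real_faces = rng.normal(loc=0.0, scale=1.0, size=(200, 64))
synthetic_faces = rng.normal(loc=0.0, scale=0.4, size=(200, 64))

all_faces = np.vstack([real_faces, synthetic_faces])
mean_face = all_faces.mean(axis=0)

def distance_to_mean(faces):
    """Euclidean distance of each face embedding from the average face."""
    return np.linalg.norm(faces - mean_face, axis=1)

# A crude detector: flag a face as synthetic if it is unusually close
# to the average face (threshold at the median distance).
threshold = np.median(distance_to_mean(all_faces))
flagged_real = (distance_to_mean(real_faces) < threshold).mean()
flagged_synth = (distance_to_mean(synthetic_faces) < threshold).mean()
print(f"real faces flagged as synthetic: {flagged_real:.0%}")
print(f"synthetic faces flagged as synthetic: {flagged_synth:.0%}")
```

In this deliberately clean toy data the two distributions barely overlap, so the distance rule separates them almost perfectly; real AI detection is far harder, but the sketch shows why a "too close to average" signal is informative at all.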
"Up until now, people have been confident of their ability to spot a fake face," said Dr Dunn. "But the faces created by the most advanced face-generation systems aren't so easily detectable anymore." Co-author Dr Amy Dawel added that synthetic faces can feel "too good to be true."
The implications are immediate and unsettling. Overconfidence leaves ordinary users and even security professionals vulnerable to AI-powered scams, fabricated social-media profiles, fake dating accounts, and identity fraud on professional networks. As generative AI becomes cheaper and more realistic, the gap between our intuitive trust in faces and reality is widening.
Think you can do better?
Six of the 12 faces shown in the original studyโs demonstration test are AI-generated. Can you spot the fakes? (The study team provides answers and a full interactive demo of the test here: bit.ly/AIFacetest.)
The research highlights an urgent need for public awareness and smarter tools. Training can help: brief sessions highlighting rendering artefacts have lifted detection rates in other studies. But the deeper solution may lie in understanding that our ancient face-recognition system was never designed for hyper-realistic synthetic faces. Until then, a healthy dose of scepticism may be the best defence.
References
Nightingale, S. J., Wade, K. A., & Watson, D. G. (2022). AI-synthesized faces are indistinguishable from real faces. Proceedings of the National Academy of Sciences, 119(12), e2120481119.
Chow, J. K., McGugin, R. W., & Gauthier, I. (2026). Domain-general object recognition predicts human ability to tell real from AI-generated faces. Journal of Experimental Psychology: General.
Dunn, J. D., et al. (2022). Face-information sampling in super-recognisers. Psychological Science, 34(12), 1390–1403. (Foundational work establishing individual differences in face-space representation that later informed AI-detection mechanisms.)



