BAI (Believable AI Imagery): The Verification Problem of Low-Salience Synthetic Images

This conceptual short paper introduces Believable AI Imagery (BAI) as an operational concept for fully synthetic images that are visually ordinary, context-compatible, and unlikely to be escalated for verification. Existing framings, including deepfakes, cheapfakes, false context, manipulated content, provenance, and detector accuracy, remain essential, but they do not fully capture mundane documentary, workplace, administrative, or evidentiary-looking images that may be accepted before detection is ever considered. BAI names a verification problem rather than a new generation technique: a fully synthetic image with no underlying photographic record may still be accepted as an ordinary record, reference photograph, or supporting document. A preliminary observation using more than 100 fully AI-generated, low-salience images illustrates how ordinary verification interactions can produce mixed and unstable outcomes: some images are classified as AI-generated, while others are treated as likely real photographs even after the possibility of AI generation is explicitly raised. The core issue is therefore not only detector accuracy but suspicion, triage, and verification economics: whether an image will be selected for review at all before it is accepted as an ordinary record.