How Credify works
At a high level, Credify turns an image or video into a single “AI probability” score plus a clear REAL/FAKE label. Under the hood, three pieces work together: two detection models and a Content Credentials check, whose outputs are fused into one verdict.
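For orientation, here is a minimal sketch of what that output looks like; the `CredifyResult` name and its fields are illustrative placeholders, not the app's actual types.

```python
from dataclasses import dataclass

@dataclass
class CredifyResult:
    """Illustrative shape of one analysis result (names are assumptions)."""
    ai_probability: float  # 0.0 = almost certainly real, 1.0 = almost certainly AI-generated
    label: str             # "FAKE" if ai_probability > 0.5, otherwise "REAL"
```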
You upload media
The app accepts JPEG/PNG/WebP images and short video clips. Files are processed entirely inside this Space; no external API is called. For videos, the app samples a handful of frames spread across the clip instead of analyzing every frame.
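As a rough sketch of the frame-sampling step, the snippet below picks evenly spaced frames with OpenCV. The `sample_frames` name, the default of 8 frames, and the choice of OpenCV are assumptions for illustration; the Space may sample differently.

```python
import cv2  # OpenCV is an assumption; the app may use another video backend
import numpy as np

def sample_frames(video_path: str, num_frames: int = 8) -> list[np.ndarray]:
    """Sample a fixed number of frames spread evenly across a clip."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    if total <= 0:
        cap.release()
        return []
    # Pick evenly spaced frame indices instead of decoding every frame.
    indices = np.linspace(0, total - 1, num=min(num_frames, total), dtype=int)
    frames = []
    for idx in indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        ok, frame = cap.read()
        if ok:
            frames.append(frame)
    cap.release()
    return frames
```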
Two models score the content
Model A looks for low-level texture and frequency patterns that often show up in generated images. Model B looks at the semantics of the scene: does the content depict something that could plausibly exist in the real world?
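Conceptually, both detectors are interchangeable scorers that map an image to a probability. The sketch below shows that interface, plus a toy high-frequency-energy statistic as an example of the kind of low-level frequency cue Model A might rely on; none of this is the actual models, and every name here is a placeholder.

```python
from typing import Protocol
import numpy as np

class Detector(Protocol):
    """Anything that maps an image array to an AI probability in [0, 1]."""
    def score(self, image: np.ndarray) -> float: ...

def score_with_both_models(image: np.ndarray,
                           model_a: Detector,
                           model_b: Detector) -> tuple[float, float]:
    artifact_score = model_a.score(image)   # low-level texture/frequency cues
    semantic_score = model_b.score(image)   # scene-level plausibility
    return artifact_score, semantic_score

def high_frequency_energy(image_gray: np.ndarray) -> float:
    """Toy example of a frequency-domain cue: the share of spectral energy
    outside the lowest frequencies. Purely illustrative, not Model A."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image_gray))) ** 2
    h, w = spectrum.shape
    ch, cw = h // 2, w // 2
    low = spectrum[ch - h // 8:ch + h // 8, cw - w // 8:cw + w // 8].sum()
    total = spectrum.sum()
    return float(1.0 - low / total) if total > 0 else 0.0
```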
Content Credentials are checked
If the file carries C2PA Content Credentials, Credify reads the manifest. When the manifest says the asset was generated by an AI system and the signature is valid, Credify treats that as a very strong signal and pushes the AI score up accordingly.
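The provenance check can be pictured like this. The sketch assumes the manifest has already been parsed and its signature verified by a C2PA SDK (not shown), and it looks for the IPTC `trainedAlgorithmicMedia` digital source type that AI generators commonly record in a `c2pa.actions` assertion; the dictionary layout and the 0.95 floor are simplifications, not Credify's actual rule.

```python
# Digital source type commonly used by C2PA-signing AI generators (IPTC term).
AI_SOURCE_TYPE = "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"

def manifest_indicates_ai(manifest: dict, signature_valid: bool) -> bool:
    """Return True when a valid manifest declares the asset was AI-generated."""
    if not signature_valid:
        return False  # an unverifiable manifest is not trusted as a signal
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") != "c2pa.actions":
            continue
        for action in assertion.get("data", {}).get("actions", []):
            if action.get("digitalSourceType") == AI_SOURCE_TYPE:
                return True
    return False

def apply_provenance_boost(ai_probability: float, is_ai_declared: bool) -> float:
    # Treat a signed AI-generation credential as a very strong signal.
    # The exact boost (a 0.95 floor) is an assumption for illustration.
    return max(ai_probability, 0.95) if is_ai_declared else ai_probability
```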
Scores are fused into a verdict
The two model scores are blended into a single AI probability, which the Content Credentials check can raise further. If the final probability is above 0.5, the preview labels the asset FAKE; otherwise it is labeled REAL.
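A minimal sketch of the fusion step, assuming an even 50/50 blend of the two model scores; the actual weights are not documented here, but the 0.5 decision threshold matches the description above.

```python
def fuse_scores(artifact_score: float,
                semantic_score: float,
                weight_a: float = 0.5) -> float:
    """Blend the two model scores into one AI probability (weights assumed)."""
    return weight_a * artifact_score + (1.0 - weight_a) * semantic_score

def verdict(ai_probability: float) -> str:
    # Decision rule from the description: above 0.5 -> FAKE, otherwise REAL.
    return "FAKE" if ai_probability > 0.5 else "REAL"

# Example: fuse_scores(0.8, 0.4) == 0.6, so verdict(0.6) returns "FAKE".
```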
Using this in practice
In a production setting, Credify’s output would be one input into a larger decision engine — combined with policy rules, user history, and human review.
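To illustrate, a downstream decision engine might route content like this; the thresholds, the trusted-user rule, and the action names are hypothetical and not part of Credify.

```python
def route_content(ai_probability: float,
                  user_is_trusted: bool,
                  review_threshold: float = 0.5,
                  auto_block_threshold: float = 0.9) -> str:
    """Hypothetical policy layer combining Credify's score with other signals."""
    if ai_probability >= auto_block_threshold and not user_is_trusted:
        return "block"         # high-confidence AI content from an untrusted account
    if ai_probability >= review_threshold:
        return "human_review"  # uncertain cases go to a reviewer
    return "allow"             # treated as real under this policy
```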