🔍 Deepfake Detection Space
This space collects a set of state-of-the-art deepfake detection methods, free and unlimited to use.
Training & Performance
All methods have been trained on the DeepShield dataset, using images generated with Stable Diffusion XL and StyleGAN2. You can expect performance comparable to the results reported in Dell'Anna et al. (2025).
Understanding the Results
- Prediction: Whether the model classifies the image as Real or Fake.
- Confidence: How confident the model is in its prediction.
- Elapsed Time: The time the model took to make the prediction (excluding preprocessing and model setup).
Understanding the Results produced by "ALL"
- Runs all available detectors (R50_TF, R50_nodown, CLIP-D, P2G, NPR) sequentially on the input image.
- Produces a Weighted Majority Vote verdict (Real/Fake). Each model's vote is weighted by a fixed importance score (the scores sum to 1), assigned from the detectors' performance ranking, and scaled by its confidence score. Only confident predictions (confidence > 0.6) are counted.
- You can find the specific weights used for each model in the "⚖️ Weight Details" menu below.
- Also generates a Confidence Plot visualizing each model's score and a textual Explanation of the consensus.
- In the plot, Green bars indicate a Real prediction, while Red bars indicate a Fake prediction.
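The weighted vote described above can be sketched in a few lines. This is an illustrative sketch, not the Space's actual implementation: it assumes each detector returns a (label, confidence) pair, uses the fixed weights from the "⚖️ Weight Details" menu, and ignores votes at or below the 0.6 confidence threshold. The tie-break rule is a hypothetical choice.

```python
# Illustrative sketch of the "ALL" weighted majority vote (not the Space's
# actual code). Each confident detector adds weight * confidence to the
# label it predicts; the label with the higher total wins.

# Fixed importance weights from the "Weight Details" table (they sum to 1).
WEIGHTS = {"CLIP-D": 0.30, "R50_TF": 0.25, "R50_nodown": 0.20,
           "P2G": 0.15, "NPR": 0.10}
CONFIDENCE_THRESHOLD = 0.6

def weighted_majority_vote(predictions):
    """predictions: dict mapping detector name -> (label, confidence),
    where label is "Real" or "Fake" and confidence is in [0, 1]."""
    scores = {"Real": 0.0, "Fake": 0.0}
    for name, (label, conf) in predictions.items():
        if conf > CONFIDENCE_THRESHOLD:          # only confident votes count
            scores[label] += WEIGHTS[name] * conf
    if scores["Real"] == scores["Fake"]:
        return "Fake", scores                    # hypothetical tie-break
    return max(scores, key=scores.get), scores

# Example: three confident "Fake" votes outweigh one confident "Real" vote;
# NPR's prediction falls below the threshold and is ignored.
verdict, scores = weighted_majority_vote({
    "CLIP-D": ("Fake", 0.9),
    "R50_TF": ("Real", 0.8),
    "R50_nodown": ("Fake", 0.7),
    "P2G": ("Fake", 0.65),
    "NPR": ("Real", 0.5),
})
# verdict == "Fake"
```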
Note
⚠️ Due to file size limitations, model weights are downloaded automatically on first use. This may take a few moments.
⚠️ To provide a free service, all models run on CPU. The detection process may take a few seconds, depending on the image size and the selected detector.
Choose which deepfake detection model to use
Detector Weights
The weights are assigned according to the detectors' ranking in the results of TrueFake: A Real World Case Dataset of Last Generation Fake Images also Shared on Social Networks, namely CLIP-D > R50_TF > R50_nodown > P2G > NPR, and are chosen so that their sum equals 1.
| Detector | Weight |
|---|---|
| CLIP-D | 0.30 |
| R50_TF | 0.25 |
| R50_nodown | 0.20 |
| P2G | 0.15 |
| NPR | 0.10 |
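The table above amounts to a small configuration. As a sketch, the two stated properties (weights strictly decrease along the ranking, and sum to 1) can be checked directly; the dictionary name below is illustrative:

```python
# Detector weights from the table above, in ranking order
# (CLIP-D > R50_TF > R50_nodown > P2G > NPR).
DETECTOR_WEIGHTS = {
    "CLIP-D": 0.30,
    "R50_TF": 0.25,
    "R50_nodown": 0.20,
    "P2G": 0.15,
    "NPR": 0.10,
}

values = list(DETECTOR_WEIGHTS.values())
# Weights strictly decrease along the ranking ...
strictly_ranked = all(a > b for a, b in zip(values, values[1:]))
# ... and sum to 1.
total = sum(values)
```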
ALL
- Description: Runs all available detectors (R50_TF, R50_nodown, CLIP-D, P2G, NPR) sequentially on the input image.
- Results: Produces a Weighted Majority Vote verdict (Real/Fake), counting only confident predictions (confidence > 0.6). Also generates a Confidence Plot visualizing each model's score and a textual Explanation of the consensus.
R50_TF
- Description: A ResNet-50 architecture modified to exclude downsampling in the first layer. It uses "learned prototypes" in the classification head for robust detection.
- Paper: TrueFake: A Real World Case Dataset of Last Generation Fake Images also Shared on Social Networks
- Code: GitHub Repository
R50_nodown
- Description: A ResNet-50 model without downsampling operations in the first layer, designed to preserve high-frequency artifacts common in synthetic images.
- Paper: On the detection of synthetic images generated by diffusion models
- Code: GitHub Repository
CLIP-D
- Description: A lightweight detection strategy based on CLIP features. It exhibits surprising generalization ability using only a handful of example images.
- Paper: Raising the Bar of AI-generated Image Detection with CLIP
- Code: GitHub Repository
P2G (Prompt2Guard)
- Description: Uses Vision-Language Models (VLMs) with conditioned prompt-optimization for continual deepfake detection. It leverages read-only prompts for efficiency.
- Paper: Conditioned Prompt-Optimization for Continual Deepfake Detection
- Code: GitHub Repository
NPR
- Description: Focuses on Neighboring Pixel Relationships (NPR) to capture generalized structural artifacts stemming from up-sampling operations in generative networks.
- Paper: Rethinking the Up-Sampling Operations in CNN-based Generative Network for Generalizable Deepfake Detection
- Code: GitHub Repository
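The intuition behind NPR can be made concrete with a toy example: nearest-neighbor up-sampling replicates each source pixel into a constant block, so differences between neighboring pixels inside such blocks vanish, which a detector can exploit. This is a simplified sketch, not the paper's exact formulation, and the helper names are illustrative:

```python
# Toy illustration of neighboring-pixel relationships (NPR intuition only,
# NOT the paper's exact method): up-sampling in generative networks leaves
# strong local correlations, visible in pixel-vs-neighbor differences.

def neighbor_differences(img, patch=2):
    """For each non-overlapping patch x patch block of a 2-D image (a list
    of rows), subtract the block's top-left pixel from every pixel in it."""
    h = len(img) - len(img) % patch
    w = len(img[0]) - len(img[0]) % patch
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            ref = img[i - i % patch][j - j % patch]  # block's top-left pixel
            out[i][j] = img[i][j] - ref
    return out

def upsample_nearest(img, factor=2):
    """Nearest-neighbor up-sampling: replicate each pixel factor x factor."""
    return [[v for v in row for _ in range(factor)]
            for row in img for _ in range(factor)]

# An up-sampled image has constant 2x2 blocks, so all differences vanish;
# an arbitrary "natural" image does not.
low = [[0.1, 0.7], [0.4, 0.9]]
up_diffs = neighbor_differences(upsample_nearest(low))
natural = [[0.1, 0.2, 0.3, 0.4],
           [0.5, 0.6, 0.7, 0.8],
           [0.9, 0.1, 0.2, 0.3],
           [0.4, 0.5, 0.6, 0.7]]
natural_diffs = neighbor_differences(natural)
```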
References
- Dell'Anna, S., Montibeller, A., & Boato, G. (2025). TrueFake: A Real World Case Dataset of Last Generation Fake Images also Shared on Social Networks. arXiv preprint arXiv:2504.20658.
- Corvi, R., et al. (2023). On the detection of synthetic images generated by diffusion models. ICASSP.
- Cozzolino, D., et al. (2023). Raising the Bar of AI-generated Image Detection with CLIP. CVPRW.
- Laiti, F., et al. (2024). Conditioned Prompt-Optimization for Continual Deepfake Detection. arXiv preprint arXiv:2407.21554.
- Tan, C., et al. (2024). Rethinking the Up-Sampling Operations in CNN-based Generative Network for Generalizable Deepfake Detection. CVPR.