# Deepfake Detector V12 - RAM Optimized (2-Hour Runtime)

## 🎯 Production-Grade Fine-Tuned Ensemble (16K Samples, 2 Epochs)

Built on V11, with RAM-safe training targeting a 2-hour runtime.

This is **V12 RAM Optimized**, a fine-tuned version of the V11 ensemble detector. It draws on 30 real-image datasets, relies on minimal synthetic generation, and applies 2 epochs of high-quality fine-tuning, optimized for RAM safety and a roughly 2-hour runtime.
## 📊 Performance

**V12 ensemble performance (held-out test set, never seen during training):**

- Test Accuracy: 97.94%
- Test Precision: 0.9957
- Test Recall: 0.9486
- Test F1 Score: 0.9715
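As a sanity check, the reported F1 score is consistent with the reported precision and recall. A small illustrative snippet (not part of the model's code):

```python
# Illustrative check (not from the repo): the reported F1 follows
# from the reported precision and recall.
def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

score = f1(0.9957, 0.9486)  # ~0.9716, consistent with the card's 0.9715
```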
**Individual models (validation accuracy, carried over from V11):**

- Model 1: 95.95%
- Model 2: 97.40%
- Model 3: 96.25%

All 3/3 models loaded successfully from V11.
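The three models are combined as an ensemble. A minimal soft-voting sketch, assuming each model outputs a probability that the image is fake (the function, threshold, and scores are illustrative, not the repo's actual API):

```python
# Minimal soft-voting sketch (illustrative; not the repo's actual API).
# Each of the three models is assumed to output P(fake) for an image.
def ensemble_predict(fake_probs, threshold=0.5):
    """Average the per-model P(fake) scores and apply a decision threshold."""
    mean_p = sum(fake_probs) / len(fake_probs)
    label = "fake" if mean_p >= threshold else "real"
    return label, mean_p

# Example with made-up scores from the three models:
label, p = ensemble_predict([0.91, 0.88, 0.95])  # label == "fake"
```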
## ⚡ RAM Optimizations

**Training configuration:**

- Epochs: 2 (high-quality fine-tuning)
- Batch size: 32 (RAM-safe)
- Target samples: 16K (reduced for RAM)
- Pin memory: enabled
- Workers: 2 (parallel data loading)
- Device: GPU (CUDA) or CPU
- Expected RAM: ~5-6 GB during training
- Expected training time: ~1.5 hours
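The configuration above can be summarized as a plain dictionary; the key names below are hypothetical, not the repo's actual config schema:

```python
# Illustrative summary of the V12 training configuration described above;
# key names are hypothetical, not the repo's actual schema.
TRAIN_CONFIG = {
    "epochs": 2,
    "batch_size": 32,         # RAM-safe
    "target_samples": 16_000,
    "pin_memory": True,       # faster host-to-GPU transfer
    "num_workers": 2,         # parallel data loading
}

# With 16K samples and batch size 32, each epoch takes 500 optimizer steps.
steps_per_epoch = TRAIN_CONFIG["target_samples"] // TRAIN_CONFIG["batch_size"]
```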
**RAM safety strategy:**

- Reduced samples: 16K vs. 30K in V11 (~47% less data)
- Smaller batches: 32 vs. 64 (50% less memory per batch)
- Same dataset diversity: all 50 datasets still used
- Per-dataset targets unchanged
- Expected to stay well under 12 GB RAM
## 📦 Dataset Strategy

**Real images (30 datasets, unchanged from V11):**

- Core: beans, cats_vs_dogs, tiny-imagenet, flowers, oxford-pets
- Classification: cifar10, mnist, fashion_mnist, caltech101, food101
- Specialized: stanford_dogs, gtsrb, eurosat, aircraft, sun397
- Medical/scientific: patch_camelyon, NIH chest X-rays
- Target: ~8K real images, topped up with minimal synthetic data (<1.5K) only if needed

**Fake images (20 datasets, unchanged from V11):**

- GAN-generated: AFHQ, pokemon, wikiart, metfaces, celeba
- Style transfer: winter2summer, horse2zebra, watercolor2photo
- Diffusion-based: pokemon-gpt4-captions, few-shot-universe
- Target: ~8K fake images, topped up with minimal synthetic data (<1.5K) only if needed
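A minimal sketch of what per-dataset sampling toward the ~8K class targets could look like, assuming a fixed per-dataset quota (the quota, dataset names, and counts are illustrative; the repo's actual quotas are not published here):

```python
# Illustrative per-dataset capping toward a class target; the actual
# per-dataset quotas used by the repo are assumptions here.
def collect(available: dict, per_dataset_quota: int, class_target: int):
    """Take up to per_dataset_quota images from each dataset,
    stopping once class_target is reached."""
    taken, total = {}, 0
    for name, n in available.items():
        take = min(n, per_dataset_quota, class_target - total)
        if take <= 0:
            break
        taken[name] = take
        total += take
    return taken, total

# Example with made-up availability for a few real-image datasets:
taken, total = collect({"beans": 1000, "cifar10": 50000, "mnist": 60000},
                       per_dataset_quota=300, class_target=8000)
```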
## 🎯 Key Features

- **2 epochs**: high-quality fine-tuning from the V11 base
- **RAM-safe**: 16K samples, batch size 32
- **Same datasets**: all 50 datasets still used (30 real + 20 fake)
- **Minimal synthetic**: generated only if real collection falls below 70% of target
- **GPU-accelerated**: runs on both GPU (CUDA) and CPU
- **Fine-tuned from V11**: transfer learning from the proven V11 architecture
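The minimal-synthetic rule can be sketched as follows, assuming the <1.5K cap from the dataset section; the function name and cap handling are assumptions, not the repo's code:

```python
# Illustrative sketch of the minimal-synthetic rule: generate synthetic
# samples only when real collection falls below 70% of the target,
# capped at ~1.5K (cap value taken from the dataset section above).
def synthetic_needed(collected: int, target: int,
                     threshold: float = 0.70, cap: int = 1_500) -> int:
    if collected >= threshold * target:
        return 0                        # enough real data; no synthetic
    return min(target - collected, cap)

# Examples:
assert synthetic_needed(7000, 8000) == 0     # 87.5% of target: none needed
assert synthetic_needed(5000, 8000) == 1500  # below 70%: top up, capped
```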
## 💾 Training Details

- Actual training time: 23.0 minutes (~0.4 h)
- Epochs per model: 2
- Batch size: 16 (RAM-optimized)
- Target samples: 10,000
- Models loaded from V11: 3/3
- Real datasets: 31 (unchanged)
- Fake datasets: 20 (unchanged)
- Synthetic data used: minimal (only when needed)
## 🛡️ Anti-Memorization

**Strict 80/10/10 split:**

- Training: 80% (10,470 samples)
- Validation: 10% (1,308 samples)
- Test: 10% (1,310 samples), never seen during training
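The reported split sizes are consistent with a strict 80/10/10 split of the 13,088 total samples they sum to. A small illustrative check, not the repo's code:

```python
# Illustrative check: a strict 80/10/10 split of 13,088 samples
# reproduces the counts reported above (10,470 / 1,308 / 1,310).
def split_sizes(n_total: int, train_frac: float = 0.80, val_frac: float = 0.10):
    n_train = int(n_total * train_frac)
    n_val = int(n_total * val_frac)
    n_test = n_total - n_train - n_val  # remainder goes to the test set
    return n_train, n_val, n_test

sizes = split_sizes(10_470 + 1_308 + 1_310)  # (10470, 1308, 1310)
```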
## 📄 License

MIT License

---

**Model version:** V12 RAM Optimized (16K dataset, 2 epochs)
**Base model:** ash12321/deepfake-detector-v11
**Release date:** 2025-11-06
**Training time:** ~1.5 hours
**Status:** Production ready ✅ (RAM-safe, high-quality fine-tuning)