[MrDeepfakes sharing written by TMBDF from 10/06/2023]
DFL 2.0 Model Settings and Performance Sharing
In this thread you can share and look up the performance of models at specific settings on various hardware configurations.
The original spreadsheet has sadly been lost because it was nuked by Zoho Sheets (thanks a lot...). Below is the most I was able to recover from other spreadsheets I found online (which most likely took the data from my spreadsheet anyway):
Adabelief Enabled

GPU | VRAM (GB) | CPU | RAM (GB) | Architecture | Resolution | AE Dims | E Dims | D Dims | D Mask Dims | Batch Size | Iteration Time (ms) | GPU Optimizer
---|---|---|---|---|---|---|---|---|---|---|---|---
GTX 1060 | 6 | i5-4690K | 16 | LIAE-UD | 320 | 264 | 72 | 72 | 24 | 5 | 6500 | FALSE |
GTX 1080 Ti | 11 | i7-4770K | 16 | DF-UD | 320 | 320 | 72 | 72 | 16 | 8 | 700 | TRUE |
GTX Titan X | 12 | i7-2600K | 24 | LIAE-UDT | 224 | 256 | 64 | 64 | 16 | 6 | 1035 | TRUE |
RTX 2080 | 8 | i7-8700K | 32 | LIAE-UD | 256 | 256 | 64 | 64 | 22 | 8 | 500 | FALSE |
RTX 3050 | 8 | i5-4670 | 8 | DF-UD | 320 | 256 | 64 | 64 | 22 | 4 | 680 | TRUE |
RTX 3060 | 12 | i5-8400 | 32 | DF-UD | 320 | 320 | 96 | 96 | 32 | 7 | 1350 | TRUE |
RTX 3060 | 12 | R5-5600X | 32 | DF-UD | 384 | 256 | 64 | 64 | 22 | 8 | 1140 | TRUE |
RTX 3060 | 12 | R5-3600 | 32 | DF-UD | 384 | 256 | 64 | 64 | 22 | 8 | 1130 | TRUE |
RTX 3060 | 12 | i5-8400 | 32 | DF-UD | 320 | 360 | 90 | 90 | 22 | 7 | 1350 | TRUE |
RTX 3060 | 12 | i5-8400 | 32 | DF-UD | 320 | 288 | 80 | 80 | 22 | 9 | 1350 | TRUE |
RTX 3060 | 12 | i5-8400 | 32 | DF-UDT | 320 | 320 | 88 | 88 | 22 | 7 | 1350 | TRUE |
RTX 3060 | 12 | i5-8400 | 32 | DF-UDT | 256 | 300 | 80 | 64 | 22 | 17 | 1350 | TRUE |
RTX 3060 | 12 | i5-8400 | 32 | DF-UDT | 288 | 300 | 80 | 80 | 22 | 10 | 1350 | TRUE |
RTX 3060 | 12 | i5-8400 | 32 | DF-UDT | 320 | 360 | 90 | 90 | 22 | 5 | 1350 | TRUE |
RTX 3060 | 12 | i5-8400 | 32 | LIAE-UDT | 320 | 256 | 72 | 72 | 32 | 9 | 1350 | TRUE |
RTX 3060 | 12 | i5-8400 | 32 | LIAE-UD | 320 | 288 | 72 | 72 | 22 | 7 | 1350 | TRUE |
RTX 3060 | 12 | i5-8400 | 32 | LIAE-UD | 256 | 256 | 64 | 64 | 22 | 18 | 1350 | TRUE |
RTX 3060 | 12 | i5-8400 | 32 | LIAE-UD | 256 | 256 | 80 | 80 | 22 | 13 | 1350 | TRUE |
RTX 3060 | 12 | i5-8400 | 32 | LIAE-UD | 320 | 256 | 64 | 64 | 22 | 10 | 1350 | TRUE |
RTX 3060 | 12 | i5-8400 | 32 | LIAE-UDT | 320 | 320 | 88 | 88 | 22 | 7 | 1350 | TRUE |
RTX 3090 | 24 | R9-3900X | 32 | DF-UD | 384 | 512 | 112 | 112 | 16 | 8 | 1013 | TRUE |
RTX 3090 | 24 | R9-3900X | 32 | DF-UD | 320 | 512 | 112 | 112 | 16 | 16 | 1074 | TRUE
RTX 3090 | 24 | i7-5820K | 48 | DF-UD | 416 | 416 | 104 | 104 | 26 | 8 | 1170 | TRUE |
RTX 3090 | 24 | R9-3900X | 32 | LIAE-UDT | 224 | 512 | 64 | 64 | 16 | 48 | 998 | TRUE |
RTX 3090 | 24 | R9-3900X | 32 | LIAE-UDT | 288 | 352 | 128 | 128 | 16 | 16 | 1320 | TRUE |
Tesla V100 | 16 | Colab | 25 | DF-UD | 384 | 352 | 88 | 88 | 16 | 8 | 1000 | TRUE |
Tesla V100 | 16 | Colab | 25 | DF-UD | 320 | 416 | 104 | 104 | 16 | 8 | 850 | TRUE |
Tesla V100 | 16 | Colab | 25 | DF-UD | 384 | 320 | 80 | 80 | 22 | 8 | 900 | TRUE |
Tesla V100 | 16 | Colab | 25 | DF-UD | 448 | 256 | 64 | 64 | 22 | 8 | 950 | TRUE |
Below are settings with no AdaBelief, mostly older results with base archis; consider these "legacy" and mostly irrelevant, and do not train with AdaBelief disabled. Keep in mind that without the -D flag models are much heavier to train (reaching higher resolutions is harder). You could get better quality with something like DF-UT or LIAE-UT over the -UD/-UDT variants, but it will use a lot more VRAM, and most people use the -UD/-UDT variants these days.
Adabelief Disabled

GPU | VRAM (GB) | CPU | RAM (GB) | Architecture | Resolution | AE Dims | E Dims | D Dims | D Mask Dims | Batch Size | Iteration Time (ms) | GPU Optimizer
---|---|---|---|---|---|---|---|---|---|---|---|---
GTX 750 Ti | 2 | i5-4690K | 32 | LIAE | 112 | 256 | 64 | 64 | 22 | 4 | 1450 | FALSE |
GTX 970 | 4 | i7-2600K | 12 | DF | 96 | 256 | 64 | 64 | 22 | 4 | 700 | TRUE |
GTX 1050 Ti | 4 | i5-3570K | 12 | DF | 128 | 192 | 48 | 48 | 36 | 2 | 645 | TRUE |
GTX 1050 Ti | 4 | i5-3570K | 12 | DF | 128 | 192 | 48 | 48 | 36 | 2 | 645 | FALSE |
GTX 1050 Ti | 4 | E5-1620 | 16 | LIAE | 128 | 128 | 80 | 48 | 16 | 4 | 520 | TRUE |
GTX 1060 | 6 | i5-4670K | 16 | DF | 192 | 256 | 64 | 64 | 22 | 7 | 1200 | FALSE |
GTX 1060 | 6 | i5-4590 | 16 | DF | 192 | 256 | 64 | 64 | 22 | 6 | 1400 | FALSE |
GTX 1060 | 6 | i5-4670K | 16 | DF | 128 | 450 | 64 | 64 | 22 | 10 | 750 | TRUE |
GTX 1060 | 6 | i7-6700HQ | 16 | DF-UD | 256 | 320 | 96 | 96 | 22 | 4 | 1800 | TRUE |
GTX 1650 | 4 | i5-9300H | 16 | DF | 128 | 256 | 64 | 64 | 22 | 6 | 780 | TRUE |
GTX 1650 | 4 | i5-9300H | 16 | DF | 128 | 256 | 64 | 64 | 22 | 8 | 960 | FALSE |
GTX 1660 Ti | 6 | i7-9700 | 16 | DF | 192 | 256 | 64 | 48 | 22 | 8 | 676 | TRUE |
GTX 1070 | 8 | i7-7700HQ | 16 | LIAE-UD | 224 | 288 | 96 | 96 | 16 | 4 | 800 | TRUE |
GTX 1070 Ti | 8 | R5-3600 | 16 | DF | 192 | 256 | 64 | 64 | 22 | 10 | 1030 | FALSE |
GTX 1080 | 8 | i7-8700K | 32 | DF | 192 | 256 | 64 | 64 | 22 | 8 | 780 | TRUE |
GTX 1080 Ti | 11 | W3680 | 12 | DF-UD | 288 | 256 | 80 | 80 | 20 | 12 | 1205 | TRUE |
GTX 1080 Ti | 11 | W3680 | 12 | DF-UD | 288 | 256 | 80 | 80 | 20 | 4 | 484 | TRUE |
GTX 1080 Ti | 11 | W3680 | 12 | DF-UD | 288 | 256 | 80 | 80 | 20 | 6 | 719 | TRUE |
GTX 1080 Ti | 11 | W3680 | 12 | DF-UD | 288 | 256 | 80 | 80 | 20 | 8 | 850 | TRUE |
GTX 1080 Ti | 11 | W3680 | 12 | DF-UD | 288 | 256 | 80 | 80 | 20 | 8 | 862 | TRUE |
GTX 1080 Ti | 11 | i5-4590 | 16 | LIAE | 192 | 256 | 64 | 64 | 22 | 8 | 670 | TRUE |
GTX 1080 Ti | 11 | i5-4590 | 16 | LIAE | 192 | 256 | 64 | 64 | 22 | 12 | 900 | TRUE |
Quadro M2200 | 4 | E3-1535M v6 | 32 | DF | 128 | 512 | 64 | 48 | 16 | 4 | 921 | TRUE |
RTX 2060 | 6 | i5-2500K | 8 | DF | 128 | 256 | 64 | 64 | 22 | 14 | 600 | TRUE |
RTX 2060 | 6 | i5-2500K | 8 | DF | 160 | 256 | 64 | 64 | 22 | 6 | 2500 | FALSE |
RTX 2060 | 6 | R5-2600 | 16 | LIAE | 256 | 256 | 64 | 64 | 22 | 2 | 1700 | FALSE |
RTX 2060 S | 8 | R5-3500 | 16 | DF-UD | 256 | 256 | 64 | 64 | 22 | 14 | 800 | TRUE |
RTX 2070 | 8 | R7-3800X | 16 | DF | 192 | 256 | 64 | 64 | 32 | 8 | 1100 | FALSE |
RTX 2070 | 8 | i7-8700 | 16 | DF | 144 | 256 | 64 | 64 | 22 | 8 | 400 | TRUE |
RTX 2070 S | 8 | R5-3600 | 16 | DF | 192 | 256 | 64 | 64 | 22 | 5 | 600 | FALSE |
RTX 2080 | 8 | i7-8700 | 16 | DF | 224 | 512 | 80 | 80 | 22 | 2 | 406 | TRUE |
RTX 2080 | 8 | i7-8700 | 16 | DF | 192 | 512 | 64 | 64 | 22 | 7 | 570 | TRUE |
RTX 2080 | 8 | i7-8700 | 16 | DF | 192 | 512 | 80 | 80 | 26 | 3 | 570 | TRUE |
RTX 2080 | 8 | i7-8700 | 16 | DF | 224 | 512 | 64 | 64 | 22 | 5 | 580 | TRUE |
RTX 2080 | 8 | R7-3800X | 16 | DF-UD | 320 | 256 | 64 | 64 | 22 | 5 | 478 | TRUE |
RTX 2080 Ti | 11 | i7-9700K | 16 | DF-UD | 256 | 256 | 64 | 64 | 22 | 20 | 800 | TRUE |
RTX 2080 Ti | 11 | i9-9900K | 32 | LIAE-U | 256 | 256 | 64 | 64 | 22 | 6 | 700 | TRUE |
RTX 2080 Ti x2 | 22 | R7-2700 | 32 | LIAE | 192 | 256 | 64 | 64 | 22 | 20 | 1230 | TRUE |
RTX 3090 | 24 | i7-9700K | 16 | DF-UD | 256 | 256 | 64 | 64 | 22 | 16 | 581 | TRUE |
RTX 3090 | 24 | R9-3900X | 32 | LIAE-UD | 384 | 384 | 116 | 116 | 16 | 6 | 942 | TRUE |
Tesla P100 | 16 | Colab | 16 | DF | 192 | 768 | 80 | 80 | 22 | 8 | 1000 | TRUE |
Tesla P100 | 16 | Colab | 16 | DF | 192 | 256 | 64 | 64 | 22 | 18 | 1200 | TRUE |
Tesla P100 | 16 | Colab | 16 | DF | 192 | 256 | 64 | 64 | 22 | 12 | 800 | TRUE |
Tesla P100 | 16 | Colab | 16 | DF-UD | 256 | 320 | 96 | 96 | 22 | 4 | 460 | TRUE |
Titan RTX | 24 | TR-3970X | 128 | DF | 400 | 256 | 64 | 64 | 22 | 6 | 1350 | TRUE
Titan RTX | 24 | E5-1650 | 64 | DF | 256 | 256 | 64 | 64 | 22 | 16 | 4100 | TRUE |
Titan RTX | 24 | E5-1650 | 64 | DF | 224 | 256 | 64 | 64 | 22 | 20 | 4600 | TRUE |
Titan RTX x2 | 48 | TR-3970X | 128 | DF | 512 | 256 | 64 | 64 | 22 | 6 | 1700 | TRUE
Titan RTX x2 | 48 | TR-3970X | 128 | DF | 512 | 256 | 64 | 64 | 22 | 8 | 2100 | FALSE
Titan RTX x2 | 48 | TR-3970X | 128 | DF | 400 | 256 | 64 | 64 | 22 | 12 | 2200 | TRUE
Please use my testing method to measure the performance of your configuration: you need to run the model twice, once in a low-load and then in a high-load scenario.
Also make sure you are testing with the latest version of DFL and use the original builds; do not test on forks like the MVE one, only iperov's version (unless you disable all of the fork's additional features).
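Iteration times come straight from DFL's console readout. If you redirect the console output to a file, a minimal sketch like the one below can average them for you. The log line format assumed by the regex and the helper name are my own assumptions, not anything defined by DFL, so adjust the pattern to whatever your build actually prints:

```python
import re
import statistics

# ASSUMPTION: log lines look like "[12:34:56][#012345][1350ms][0.4521][0.3876]".
# Adjust this regex if your DFL build prints a different format.
ITER_RE = re.compile(r"\[#\d+\]\[(\d+)ms\]")

def average_iteration_time(log_path: str, skip_first: int = 20) -> float:
    """Mean iteration time in ms, skipping warm-up iterations."""
    with open(log_path, encoding="utf-8", errors="ignore") as f:
        times = [int(m.group(1)) for line in f if (m := ITER_RE.search(line))]
    if len(times) <= skip_first:
        raise ValueError("not enough logged iterations to measure")
    return statistics.mean(times[skip_first:])

if __name__ == "__main__":
    print(f"{average_iteration_time('training.log'):.0f} ms/iter")
```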
Low model load (you must test with these values):
RW (random warp): enabled
UY (uniform yaw): disabled
EMP (eyes and mouth priority): disabled
LRD (learning rate dropout): disabled
GPU Optimizer on GPU: TRUE
GAN: disabled
Face Style Power: 0 (disabled)
Background Style Power: 0 (disabled)
TrueFace: 0 (disabled; DF archis only)
Color Transfer: RCT
Clipgrad: FALSE
High model load (you must test with these values):
RW (random warp): disabled
UY (uniform yaw): enabled
EMP (eyes and mouth priority): enabled
LRD (learning rate dropout): enabled (on GPU)
GPU Optimizer on GPU: TRUE
GAN Power: 0.1
GAN Dims: default value (16)
GAN Patch Size: 1/8 of model resolution (see the sketch after this list)
Face Style Power: 0.1
Background Style Power: 0 (disabled)
Color Transfer: RCT
Clipgrad: FALSE
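The GAN patch size for the high-load run is plain arithmetic: the model resolution divided by 8. A minimal sketch, with a hypothetical helper name of my own (not a DFL function):

```python
def high_load_gan_patch_size(resolution: int) -> int:
    """GAN patch size for the high-load test: 1/8 of the model resolution.

    Hypothetical helper for this thread's testing method, not part of DFL.
    e.g. a 384 model tests with patch size 48, a 256 model with 32.
    """
    return resolution // 8
```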
If you want to share additional results using different parameters for GAN Power, GAN Dims, GAN Patch Size, Face Style Power, Background Style Power, TrueFace, Color Transfer, etc., you can do so, but they must be submitted alongside standard testing method results, in a separate table using the second template, so the two can be compared.
Template with example values:

GPU | VRAM (GB) | CPU | RAM (GB) | OS | Page File Size (GB) | Model | Architecture | Resolution | Batch Size (High Load) | Batch Size (Low Load) | Iteration Time (High Load, ms) | Iteration Time (Low Load, ms) | VRAM Usage Before Training (GB) | VRAM Usage During Training (GB) | AE Dims | E Dims | D Dims | D Mask Dims | Inter Dims | Adabelief | GAN Dims | GAN Patch Size
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
RTX 3090 | 24 | i9-13900K | 64 | Windows 11 | 256 | SAEHD | LIAE-UDT | 384 | 6 | 12 | 1000 | 500 | 1 | 23.6 | 320 | 64 | 64 | 22 | - | YES | 16 | 48
Secondary template with example values for sharing non-standard settings:

Batch Size (typical lowest) | Iteration Time (Highest Load, ms) | VRAM Usage During Training (GB) | GPU Optimizer on GPU | Adabelief | LRD | GAN Dims | GAN Patch Size | GAN Power | Face Style Power | Background Style Power | Color Transfer | TrueFace | Clipgrad
---|---|---|---|---|---|---|---|---|---|---|---|---|---
4 | 1400 | 23.6 | FALSE | NO | On CPU | 24 | 96 | 0.2 | 0.001 | 0.0001 | LCT | 0.01 | TRUE
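Not required, but if you script your runs anyway, a small sketch like the one below can assemble the pipe-delimited row for this secondary template. This is my own convenience code, not part of DFL; the column names simply mirror the template header above:

```python
# Builds one pipe-delimited forum-table row from a dict of your settings.
SECONDARY_COLUMNS = [
    "Batch Size (typical lowest)", "Iteration Time (Highest Load, ms)",
    "VRAM Usage During Training (GB)", "GPU Optimizer on GPU", "Adabelief",
    "LRD", "GAN Dims", "GAN Patch Size", "GAN Power", "Face Style Power",
    "Background Style Power", "Color Transfer", "TrueFace", "Clipgrad",
]

def format_row(settings: dict) -> str:
    """Return one table row; missing fields show as '-'."""
    return " | ".join(str(settings.get(col, "-")) for col in SECONDARY_COLUMNS)

# Example values from the template row above.
example = {
    "Batch Size (typical lowest)": 4,
    "Iteration Time (Highest Load, ms)": 1400,
    "VRAM Usage During Training (GB)": 23.6,
    "GPU Optimizer on GPU": "FALSE",
    "Adabelief": "NO",
    "LRD": "On CPU",
    "GAN Dims": 24,
    "GAN Patch Size": 96,
    "GAN Power": 0.2,
    "Face Style Power": 0.001,
    "Background Style Power": 0.0001,
    "Color Transfer": "LCT",
    "TrueFace": 0.01,
    "Clipgrad": "TRUE",
}
print(format_row(example))
```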