feffy380
6b3148fd3f
Fix min-snr-gamma for v-prediction and ZSNR.
This fixes min-snr for vpred+zsnr by dividing directly by SNR+1.
The old implementation did it in two steps, (min-snr/snr) * (snr/(snr+1)), which causes a division by zero when combined with --zero_terminal_snr.
2023-11-07 23:02:25 +01:00
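The fix above can be sketched as follows (a minimal PyTorch sketch; the function name is illustrative, not the repository's exact API). Computing min(SNR, gamma) / (SNR + 1) in one step stays finite at SNR = 0, where the old two-step form divided by zero:

```python
import torch

def min_snr_weight_vpred(snr: torch.Tensor, gamma: float) -> torch.Tensor:
    # For v-prediction the effective loss weight is min(SNR, gamma) / (SNR + 1).
    # Dividing directly by SNR + 1 avoids the intermediate min(SNR, gamma) / SNR
    # term, which blows up at the terminal timestep under --zero_terminal_snr
    # where SNR == 0.
    return torch.minimum(snr, torch.full_like(snr, gamma)) / (snr + 1)

snr = torch.tensor([0.0, 1.0, 25.0])  # includes the zero-SNR terminal step
w = min_snr_weight_vpred(snr, gamma=5.0)
```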
青龍聖者@bdsqlsz
202f2c3292
Debiased Estimation loss (#889)
* update for bnb 0.41.1
* fixed generate_controlnet_subsets_config for training
* Revert "update for bnb 0.41.1"
This reverts commit 70bd3612d8.
* add debiased_estimation_loss
* add train_network
* Revert "add train_network"
This reverts commit 6539363c5c.
* Update train_network.py
2023-10-23 22:59:14 +09:00
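The debiased estimation loss added here can be sketched as a 1/sqrt(SNR) loss weight (a minimal sketch; the clamp value is a hypothetical safeguard, not the repository's exact constant):

```python
import torch

def debiased_estimation_weight(snr: torch.Tensor, max_weight: float = 1000.0) -> torch.Tensor:
    # Debiased estimation reweights the per-timestep loss by 1 / sqrt(SNR),
    # so low-noise (high-SNR) timesteps contribute less. The clamp keeps the
    # weight finite as SNR approaches zero.
    return torch.clamp(1.0 / torch.sqrt(snr), max=max_weight)

snr = torch.tensor([0.25, 1.0, 100.0])
w = debiased_estimation_weight(snr)
```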
Kohya S
0636399c8c
add v-pred-like loss for noise pred
2023-07-31 08:23:28 +09:00
Kohya S
6d2d8dfd2f
add zero_terminal_snr option
2023-07-18 23:17:23 +09:00
Kohya S
92e50133f8
Merge branch 'original-u-net' into dev
2023-06-17 21:57:08 +09:00
Kohya S
045cd38b6e
fix clip_skip not working in weighted captions and sample generation
2023-06-08 22:02:46 +09:00
Kohya S
5bec05e045
move max_norm to lora to avoid crashing in lycoris
2023-06-03 12:42:32 +09:00
Kohya S
ec2efe52e4
scale v-pred loss like noise pred
2023-06-03 10:52:22 +09:00
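Scaling the v-prediction loss like a noise-prediction loss amounts to dividing the per-sample loss by (SNR + 1), since the v-objective is implicitly weighted by SNR + 1 relative to the epsilon objective. A minimal sketch (function name illustrative):

```python
import torch

def scale_v_pred_loss_like_noise_pred(loss: torch.Tensor, snr: torch.Tensor) -> torch.Tensor:
    # Divide the v-prediction loss by (SNR + 1) so its magnitude matches an
    # epsilon (noise) prediction objective across timesteps.
    return loss / (snr + 1)

loss = torch.tensor([1.0, 2.0])
snr = torch.tensor([0.0, 1.0])
scaled = scale_v_pred_loss_like_noise_pred(loss, snr)
```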
ddPn08
1f1cae6c5a
make the device of snr_weight the same as loss
2023-06-01 20:39:28 +09:00
Kohya S
f4c9276336
add scaling to max norm
2023-06-01 19:46:17 +09:00
AI-Casanova
9c7237157d
Dropout and Max Norm Regularization for LoRA training (#545)
* Instantiate max_norm
* minor
* Move to end of step
* argparse
* metadata
* phrasing
* Sqrt ratio and logging
* fix logging
* Dropout test
* Dropout Args
* Dropout changed to affect LoRA only
---------
Co-authored-by: Kohya S <52813779+kohya-ss@users.noreply.github.com>
2023-06-01 14:58:38 +09:00
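The "Sqrt ratio" step in this PR refers to how max-norm regularization rescales a LoRA pair: when the merged update exceeds the norm cap, both factor matrices are shrunk by the square root of the ratio, so their product shrinks by exactly that ratio. A minimal sketch (function and parameter names are illustrative, not the repository's exact API):

```python
import torch

def apply_max_norm(down: torch.Tensor, up: torch.Tensor, scale: float, max_norm: float):
    # Max-norm regularization for one LoRA module: if the norm of the merged
    # update (up @ down * scale) exceeds max_norm, scale both factors in place
    # by sqrt(ratio) so the merged norm is clipped to max_norm.
    with torch.no_grad():
        updown = up @ down * scale
        norm = updown.norm()
        ratio = max_norm / norm.clamp(min=max_norm)  # ratio <= 1
        sqrt_ratio = ratio.sqrt()
        down *= sqrt_ratio
        up *= sqrt_ratio
    return norm.item(), ratio.item()

down = torch.ones(2, 4)  # LoRA down-projection
up = torch.ones(3, 2)    # LoRA up-projection
norm_before, ratio = apply_max_norm(down, up, scale=1.0, max_norm=1.0)
```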
Kohya S
714846e1e1
revert perlin_noise
2023-05-15 23:12:11 +09:00
hkinghuang
bca6a44974
Perlin noise
2023-05-15 11:16:08 +08:00
hkinghuang
5f1d07d62f
init
2023-05-12 21:38:07 +08:00
青龍聖者@bdsqlsz
8d562ecf48
fix pynoise code bug (#489)
* fix pynoise
* Update custom_train_functions.py for default
* Update custom_train_functions.py for note
* Update custom_train_functions.py for default
* Revert "Update custom_train_functions.py for default"
This reverts commit ca79915d73.
* Update custom_train_functions.py for default
* Revert "Update custom_train_functions.py for default"
This reverts commit 483577e137.
* default value change
2023-05-11 21:48:51 +09:00
Kohya S
09c719c926
add adaptive noise scale
2023-05-07 18:09:08 +09:00
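Adaptive noise scale builds on the noise-offset idea: the per-sample offset grows with the absolute mean of the latents, so images far from mid-gray receive a larger shift. A minimal sketch of the idea (names and the exact formula are illustrative):

```python
import torch

def apply_noise_offset(latents, noise, noise_offset, adaptive_noise_scale=None):
    # Noise offset adds a per-sample, per-channel constant to the noise.
    # With adaptive noise scale, the offset is increased in proportion to the
    # absolute mean of the latents (a sketch of the idea, not the exact code).
    if adaptive_noise_scale is not None:
        latent_mean = torch.abs(latents.mean(dim=(2, 3), keepdim=True))
        noise_offset = noise_offset + adaptive_noise_scale * latent_mean
    return noise + noise_offset * torch.randn(
        (latents.shape[0], latents.shape[1], 1, 1), device=latents.device
    )

latents = torch.zeros(2, 4, 8, 8)
noise = torch.zeros_like(latents)
out = apply_noise_offset(latents, noise, noise_offset=0.1, adaptive_noise_scale=0.05)
```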
Pam
b18d099291
Multi-Resolution Noise
2023-05-02 09:42:17 +05:00
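Multi-resolution (pyramid) noise sums noise drawn at progressively coarser resolutions, each octave upsampled to the target size and damped geometrically, then renormalizes to roughly unit variance. A minimal sketch (iteration count and discount are illustrative defaults, not the repository's exact values):

```python
import torch
import torch.nn.functional as F

def pyramid_noise_like(noise: torch.Tensor, iterations: int = 6, discount: float = 0.4) -> torch.Tensor:
    # Add low-frequency noise octaves: at step i, draw noise at 1/2**i
    # resolution, upsample to full size, and weight it by discount**i.
    b, c, h, w = noise.shape
    for i in range(1, iterations):
        r = 2 ** i
        if h // r == 0 or w // r == 0:
            break
        low = torch.randn(b, c, h // r, w // r, device=noise.device)
        noise = noise + discount ** i * F.interpolate(low, size=(h, w), mode="bilinear")
    return noise / noise.std()  # renormalize to ~unit variance

noise = pyramid_noise_like(torch.randn(1, 4, 64, 64))
```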
Kohya S
6a5f87d874
disable weighted captions in TI/XTI training
2023-04-08 21:45:57 +09:00
Kohya S
a876f2d3fb
format by black
2023-04-08 21:36:35 +09:00
AI-Casanova
dbab72153f
Clean up custom_train_functions.py
Removed commented out lines from earlier bugfix.
2023-04-08 00:44:56 -05:00
AI-Casanova
1892c82a60
Reinstantiate weighted captions after a necessary revert to Main
2023-04-02 19:43:34 +00:00
Kohya S
43a08b4061
add ja comment
2023-03-27 20:47:27 +09:00
AI-Casanova
4c06bfad60
Fix for TypeError from bf16 precision; thanks to mgz-dev
2023-03-26 00:01:29 +00:00
AI-Casanova
518a18aeff
(ACTUAL) Min-SNR Weighting Strategy: Fixed SNR calculation to author's implementation
2023-03-23 12:34:49 +00:00
AI-Casanova
a3c7d711e4
Min-SNR Weighting Strategy: Fixed SNR calculation to author's implementation
2023-03-23 05:43:46 +00:00
AI-Casanova
64c923230e
Min-SNR Weighting Strategy: Refactored and added to all trainers
2023-03-22 01:27:29 +00:00
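The Min-SNR weighting strategy introduced here (Hang et al., "Efficient Diffusion Training via Min-SNR Weighting Strategy") caps the per-timestep loss weight at min(SNR, gamma) / SNR for epsilon prediction, so easy low-noise timesteps stop dominating training. A minimal sketch (function name illustrative):

```python
import torch

def min_snr_weight(snr: torch.Tensor, gamma: float) -> torch.Tensor:
    # Min-SNR-gamma for epsilon prediction: weight = min(SNR, gamma) / SNR.
    # Timesteps with SNR <= gamma get weight 1; higher-SNR timesteps are
    # down-weighted by gamma / SNR.
    return torch.minimum(snr, torch.full_like(snr, gamma)) / snr

snr = torch.tensor([1.0, 5.0, 25.0])
w = min_snr_weight(snr, gamma=5.0)
```

The later vpred+zsnr fix at the top of this log replaces the denominator with SNR + 1 for v-prediction, precisely because this epsilon form divides by SNR.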