Kohya S
0676f1a86f
Merge pull request #1009 from liubo0902/main
...
speed up latents NaN replacement
2023-12-21 21:37:16 +09:00
liubo0902
8c7d05afd2
speed up latents NaN replacement
2023-12-20 09:35:17 +08:00
Kohya S
912dca8f65
fix duplicated sample generation at every epoch, ref #907
2023-12-07 22:13:38 +09:00
Isotr0py
db84530074
Fix gradient synchronization for multi-GPU training ( #989 )
...
* delete DDP wrapper
* fix train_db vae and train_network
* fix train_db vae and train_network unwrap
* network grad sync
---------
Co-authored-by: Kohya S <52813779+kohya-ss@users.noreply.github.com>
2023-12-07 22:01:42 +09:00
Kohya S
383b4a2c3e
Merge pull request #907 from shirayu/add_option_sample_at_first
...
Add option --sample_at_first
2023-12-03 21:00:32 +09:00
feffy380
6b3148fd3f
Fix min-snr-gamma for v-prediction and ZSNR.
...
This fixes min-snr for v-prediction + zero terminal SNR by dividing directly by SNR+1.
The old implementation did it in two steps, (min-snr/snr) * (snr/(snr+1)), which causes division by zero when combined with --zero_terminal_snr.
2023-11-07 23:02:25 +01:00
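The fix described above can be sketched as a one-step weighting; `min_snr_weight` is a hypothetical helper name used here for illustration, not necessarily the repository's exact function:

```python
def min_snr_weight(snr: float, gamma: float = 5.0) -> float:
    """Min-SNR loss weight for v-prediction, computed in a single step.

    The older two-step form, (min(snr, gamma) / snr) * (snr / (snr + 1)),
    divides by zero at snr == 0, which occurs at the terminal timestep
    under --zero_terminal_snr. Dividing directly by snr + 1 avoids that.
    """
    return min(snr, gamma) / (snr + 1.0)

# At the zero-SNR terminal timestep the weight is simply 0.0, not NaN.
print(min_snr_weight(0.0))   # 0.0
print(min_snr_weight(25.0))  # min(25, 5) / 26
```

Mathematically the two forms are identical wherever snr > 0, so only the degenerate terminal timestep changes behavior.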
Yuta Hayashibe
2c731418ad
Added sample_images() for --sample_at_first
2023-10-29 22:08:42 +09:00
Kohya S
9d6a5a0c79
Merge pull request #899 from shirayu/use_moving_average
...
Show moving average loss in the progress bar
2023-10-29 14:37:58 +09:00
Kohaku-Blueleaf
1cefb2a753
Better implementation for text encoder (te) autocast ( #895 )
...
* Better implementation for te
* Fix some misunderstanding
* same as unet, add explicit conversion
* Better cache TE and TE lr
* Fix with list
* Add timeout settings
* Fix arg style
2023-10-28 15:49:59 +09:00
Yuta Hayashibe
0d21925bdf
Use @property
2023-10-27 18:14:27 +09:00
Yuta Hayashibe
efef5c8ead
Show "avr_loss" instead of "loss" because it is a moving average
2023-10-27 17:59:58 +09:00
Yuta Hayashibe
3d2bb1a8f1
Add LossRecorder and use moving average in all places
2023-10-27 17:49:49 +09:00
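A minimal sketch of such a recorder (names and details are illustrative, not necessarily the repository's exact implementation): it keeps one loss slot per optimizer step, replaces the previous epoch's value for the same step as training continues, and exposes the average that the progress bar shows as avr_loss.

```python
class LossRecorder:
    """Tracks per-step losses and exposes their moving average.

    Sketch only: one slot per step, so from the second epoch on,
    each new loss overwrites the value recorded for the same step
    of the previous epoch, keeping a rolling window of recent steps.
    """

    def __init__(self) -> None:
        self.loss_list: list[float] = []
        self.loss_total: float = 0.0

    def add(self, *, epoch: int, step: int, loss: float) -> None:
        if epoch == 0 or len(self.loss_list) <= step:
            self.loss_list.append(loss)              # first pass: grow the list
        else:
            self.loss_total -= self.loss_list[step]  # drop previous epoch's value
            self.loss_list[step] = loss
        self.loss_total += loss

    @property
    def moving_average(self) -> float:
        return self.loss_total / len(self.loss_list)


rec = LossRecorder()
for step, loss in enumerate([1.0, 2.0, 3.0]):
    rec.add(epoch=0, step=step, loss=loss)
print(rec.moving_average)          # 2.0
rec.add(epoch=1, step=0, loss=4.0) # replaces the 1.0 from epoch 0
print(rec.moving_average)          # 3.0
```

Keeping a running total makes both `add` and the average O(1) per step, which matters when updating a progress bar every iteration.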
青龍聖者@bdsqlsz
202f2c3292
Debias Estimation loss ( #889 )
...
* update for bnb 0.41.1
* fixed generate_controlnet_subsets_config for training
* Revert "update for bnb 0.41.1"
This reverts commit 70bd3612d8.
* add debiased_estimation_loss
* add train_network
* Revert "add train_network"
This reverts commit 6539363c5c.
* Update train_network.py
2023-10-23 22:59:14 +09:00
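For reference, debiased estimation reweights the per-timestep loss by 1/sqrt(SNR), down-weighting low-noise timesteps. A minimal sketch follows; the function name and the clamp value are assumptions for illustration, not taken from the commit:

```python
import math

def debiased_estimation_weight(snr: float, max_snr: float = 1000.0) -> float:
    """Per-timestep loss weight for debiased estimation (sketch).

    Weights the loss by 1/sqrt(SNR); the clamp (an assumed value here)
    guards against the extremely large SNR of the least-noisy timesteps.
    """
    return 1.0 / math.sqrt(min(snr, max_snr))

print(debiased_estimation_weight(4.0))  # 0.5
```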
Kohya S
025368f51c
dropout may now work in LyCORIS #859
2023-10-09 14:06:58 +09:00
Yuta Hayashibe
27f9b6ffeb
update typos checker to v1.16.15 and fix typos
2023-10-01 21:51:24 +09:00
Kohya S
4cc919607a
fix placing of requires_grad_ of U-Net
2023-10-01 16:41:48 +09:00
Kohya S
81419f7f32
Fix U-Net-only LoRA training to work for SD1/2
2023-10-01 16:37:23 +09:00
Kohya S
d39f1a3427
Merge pull request #808 from rockerBOO/metadata
...
Add ip_noise_gamma metadata
2023-09-24 14:35:18 +09:00
Disty0
b64389c8a9
Intel ARC support with IPEX
2023-09-19 18:05:05 +03:00
rockerBOO
80aca1ccc7
Add ip_noise_gamma metadata
2023-09-05 15:20:15 -04:00
Kohya S
c142dadb46
support sai model spec
2023-08-06 21:50:05 +09:00
Kohya S
0636399c8c
add option for adding v-pred-like loss to noise prediction
2023-07-31 08:23:28 +09:00
Kohya S
8ba02ac829
fix text-encoder-only network to work with bf16
2023-07-22 09:56:36 +09:00
Kohya S
73a08c0be0
Merge pull request #630 from ddPn08/sdxl
...
make tracker init_kwargs configurable
2023-07-20 22:05:55 +09:00
Kohya S
acf16c063a
make it work with PyTorch 1.12
2023-07-20 21:41:16 +09:00
Kohya S
225e871819
enable full bf16 training in train_network
2023-07-19 08:41:42 +09:00
Kohya S
7875ca8fb5
Merge pull request #645 from Ttl/prepare_order
...
Cast weights to correct precision before transferring them to GPU
2023-07-19 08:33:32 +09:00
Kohya S
6d2d8dfd2f
add zero_terminal_snr option
2023-07-18 23:17:23 +09:00
Henrik Forstén
cdffd19f61
Cast weights to correct precision before transferring them to GPU
2023-07-13 12:45:28 +03:00
ddPn08
b841dd78fe
make tracker init_kwargs configurable
2023-07-11 10:21:45 +09:00
Kohya S
0416f26a76
support multi gpu in caching text encoder outputs
2023-07-09 16:02:56 +09:00
Kohya S
ea182461d3
add min/max_timestep
2023-07-03 20:44:42 +09:00
Kohya S
d395bc0647
fix max_token_length not working for SDXL
2023-06-29 13:02:19 +09:00
Kohya S
9ebebb22db
fix typos
2023-06-26 20:43:34 +09:00
Kohya S
2c461e4ad3
Add no_half_vae for SDXL training, add nan check
2023-06-26 20:38:09 +09:00
Kohya S
747af145ed
add sdxl fine-tuning and LoRA
2023-06-26 08:07:24 +09:00
Kohya S
5114e8daf1
fix training scripts other than ControlNet not working
2023-06-22 08:46:53 +09:00
Kohya S
92e50133f8
Merge branch 'original-u-net' into dev
2023-06-17 21:57:08 +09:00
Kohya S
19dfa24abb
Merge branch 'main' into original-u-net
2023-06-16 20:59:34 +09:00
青龍聖者@bdsqlsz
e97d67a681
Support for Prodigy (a DAdapt variant) for DyLoRA ( #585 )
...
* Update train_util.py for DAdaptLion
* Update train_README-zh.md for dadaptlion
* Update train_README-ja.md for DAdaptLion
* add DAdapt V3
* Alignment
* Update train_util.py for experimental
* Update train_util.py V3
* Update train_README-zh.md
* Update train_README-ja.md
* Update train_util.py fix
* Update train_util.py
* support Prodigy
* add lower
2023-06-15 21:12:53 +09:00
Kohya S
9806b00f74
add arbitrary dataset feature to each script
2023-06-15 20:39:39 +09:00
Kohya S
9aee793078
support arbitrary dataset for train_network.py
2023-06-14 12:49:12 +09:00
ykume
9e1683cf2b
support sdpa
2023-06-11 21:26:15 +09:00
ykume
0315611b11
remove workaround for accelerate 0.15, fix XTI
2023-06-11 18:32:14 +09:00
Kohya S
bb91a10b5f
fix to work with LyCORIS<0.1.6
2023-06-06 21:59:57 +09:00
u-haru
5907bbd9de
Add loss display
2023-06-03 21:20:26 +09:00
Kohya S
5bec05e045
move max_norm to LoRA to avoid crashing in LyCORIS
2023-06-03 12:42:32 +09:00
Kohya S
ec2efe52e4
scale v-pred loss like noise pred
2023-06-03 10:52:22 +09:00
ddPn08
c8d209d36c
update diffusers to 1.16 | train_network
2023-06-01 20:39:26 +09:00
Kohya S
f8e8df5a04
fix crash in gen script, rename option to network_dropout
2023-06-01 20:07:04 +09:00