Yuta Hayashibe
3d2bb1a8f1
Add LossRecorder and use moving average in all places
2023-10-27 17:49:49 +09:00
青龍聖者@bdsqlsz
202f2c3292
Debiased Estimation loss (#889)
...
* update for bnb 0.41.1
* fixed generate_controlnet_subsets_config for training
* Revert "update for bnb 0.41.1"
This reverts commit 70bd3612d8.
* add debiased_estimation_loss
* add train_network
* Revert "add train_network"
This reverts commit 6539363c5c.
* Update train_network.py
2023-10-23 22:59:14 +09:00
Kohya S
025368f51c
dropout may work in LyCORIS (#859)
2023-10-09 14:06:58 +09:00
Yuta Hayashibe
27f9b6ffeb
update typos to v1.16.15 and fix typos
2023-10-01 21:51:24 +09:00
Kohya S
4cc919607a
fix placement of requires_grad_ for U-Net
2023-10-01 16:41:48 +09:00
Kohya S
81419f7f32
Fix U-Net-only LoRA training for SD1/2
2023-10-01 16:37:23 +09:00
Kohya S
d39f1a3427
Merge pull request #808 from rockerBOO/metadata
...
Add ip_noise_gamma metadata
2023-09-24 14:35:18 +09:00
Disty0
b64389c8a9
Intel ARC support with IPEX
2023-09-19 18:05:05 +03:00
rockerBOO
80aca1ccc7
Add ip_noise_gamma metadata
2023-09-05 15:20:15 -04:00
Kohya S
c142dadb46
support sai model spec
2023-08-06 21:50:05 +09:00
Kohya S
0636399c8c
add v-pred-like loss for noise pred
2023-07-31 08:23:28 +09:00
Kohya S
8ba02ac829
fix text-encoder-only network training to work with bf16
2023-07-22 09:56:36 +09:00
Kohya S
73a08c0be0
Merge pull request #630 from ddPn08/sdxl
...
make tracker init_kwargs configurable
2023-07-20 22:05:55 +09:00
Kohya S
acf16c063a
make it work with PyTorch 1.12
2023-07-20 21:41:16 +09:00
Kohya S
225e871819
enable full bf16 training in train_network
2023-07-19 08:41:42 +09:00
Kohya S
7875ca8fb5
Merge pull request #645 from Ttl/prepare_order
...
Cast weights to correct precision before transferring them to GPU
2023-07-19 08:33:32 +09:00
Kohya S
6d2d8dfd2f
add zero_terminal_snr option
2023-07-18 23:17:23 +09:00
Henrik Forstén
cdffd19f61
Cast weights to correct precision before transferring them to GPU
2023-07-13 12:45:28 +03:00
ddPn08
b841dd78fe
make tracker init_kwargs configurable
2023-07-11 10:21:45 +09:00
Kohya S
0416f26a76
support multi gpu in caching text encoder outputs
2023-07-09 16:02:56 +09:00
Kohya S
ea182461d3
add min/max_timestep
2023-07-03 20:44:42 +09:00
Kohya S
d395bc0647
fix max_token_length not working for SDXL
2023-06-29 13:02:19 +09:00
Kohya S
9ebebb22db
fix typos
2023-06-26 20:43:34 +09:00
Kohya S
2c461e4ad3
Add no_half_vae for SDXL training, add nan check
2023-06-26 20:38:09 +09:00
Kohya S
747af145ed
add sdxl fine-tuning and LoRA
2023-06-26 08:07:24 +09:00
Kohya S
5114e8daf1
fix training scripts other than ControlNet not working
2023-06-22 08:46:53 +09:00
Kohya S
92e50133f8
Merge branch 'original-u-net' into dev
2023-06-17 21:57:08 +09:00
Kohya S
19dfa24abb
Merge branch 'main' into original-u-net
2023-06-16 20:59:34 +09:00
青龍聖者@bdsqlsz
e97d67a681
Support for Prodigy (DAdapt variant for DyLoRA) (#585)
...
* Update train_util.py for DAdaptLion
* Update train_README-zh.md for dadaptlion
* Update train_README-ja.md for DAdaptLion
* add DAdapt V3
* Alignment
* Update train_util.py for experimental
* Update train_util.py V3
* Update train_README-zh.md
* Update train_README-ja.md
* Update train_util.py fix
* Update train_util.py
* support Prodigy
* add lower
2023-06-15 21:12:53 +09:00
Kohya S
9806b00f74
add arbitrary dataset feature to each script
2023-06-15 20:39:39 +09:00
Kohya S
9aee793078
support arbitrary dataset for train_network.py
2023-06-14 12:49:12 +09:00
ykume
9e1683cf2b
support sdpa
2023-06-11 21:26:15 +09:00
ykume
0315611b11
remove workaround for accelerate 0.15, fix XTI
2023-06-11 18:32:14 +09:00
Kohya S
bb91a10b5f
fix to work with LyCORIS<0.1.6
2023-06-06 21:59:57 +09:00
u-haru
5907bbd9de
Add loss display
2023-06-03 21:20:26 +09:00
Kohya S
5bec05e045
move max_norm to lora to avoid crashing in lycoris
2023-06-03 12:42:32 +09:00
Kohya S
ec2efe52e4
scale v-pred loss like noise pred
2023-06-03 10:52:22 +09:00
ddPn08
c8d209d36c
update diffusers to 1.16 | train_network
2023-06-01 20:39:26 +09:00
Kohya S
f8e8df5a04
fix crash in gen script, change option to network_dropout
2023-06-01 20:07:04 +09:00
Kohya S
f4c9276336
add scaling to max norm
2023-06-01 19:46:17 +09:00
Kohya S
a5c38e5d5b
fix crash when max_norm is disabled
2023-06-01 19:32:22 +09:00
AI-Casanova
9c7237157d
Dropout and Max Norm Regularization for LoRA training (#545)
...
* Instantiate max_norm
* minor
* Move to end of step
* argparse
* metadata
* phrasing
* Sqrt ratio and logging
* fix logging
* Dropout test
* Dropout Args
* Dropout changed to affect LoRA only
---------
Co-authored-by: Kohya S <52813779+kohya-ss@users.noreply.github.com>
2023-06-01 14:58:38 +09:00
TingTingin
5931948adb
Adjusted English grammar in logs to be clearer (#554)
...
* Update train_network.py
* Update train_network.py
* Update train_network.py
* Update train_network.py
* Update train_network.py
* Update train_network.py
2023-06-01 12:31:33 +09:00
Kohya S
6fbd526931
show multiplier for base weights to console
2023-05-31 20:23:19 +09:00
Kohya S
c437dce056
change option name for merging network weights
2023-05-30 23:19:29 +09:00
Kohya S
fc00691898
enable multiple module weights
2023-05-30 23:10:41 +09:00
u-haru
dd8e17cb37
Add differential learning feature
2023-05-27 05:15:02 +09:00
Kohya S
3699a90645
add adaptive noise scale to metadata
2023-05-15 23:18:16 +09:00
Kohya S
7889a52f95
add callback for step start
2023-05-11 22:00:41 +09:00
Kohya S
2767a0f9f2
common block lr args processing in create
2023-05-11 21:47:59 +09:00