Kohya S
3f7235c36f
add lora controlnet train/gen temporarily
2023-08-17 10:08:02 +09:00
Kohya S
983698dd1b
add lora controlnet temporarily
2023-08-15 18:23:22 +09:00
Kohya S
d713e4c757
add lora_fa experimentally
2023-08-13 13:30:34 +09:00
Kohya S
b83ce0c352
modify import #368
2023-08-08 21:09:08 +09:00
Kohya S
92f41f1051
update sdxl ver in lora metadata from v0-9 to v1-0
2023-08-06 22:06:48 +09:00
Kohya S
c142dadb46
support sai model spec
2023-08-06 21:50:05 +09:00
Kohya S
db80c5a2e7
format by black
2023-08-03 20:14:04 +09:00
Kohya S
2b969e9c42
support sdxl
2023-07-24 22:20:21 +09:00
Kohya S
e83ee217d3
format by black
2023-07-24 21:28:37 +09:00
Kohya S
c1d5c24bc7
fix LoRA with text encoder can't be merged, closes #660
2023-07-23 15:01:41 +09:00
Kohya S
7e20c6d1a1
add convenience function to merge LoRA
2023-07-17 10:30:57 +09:00
Kohya S
1d4672d747
fix typos
2023-07-17 09:05:50 +09:00
Kohya S
39e62b948e
add lora for Diffusers
2023-07-16 19:57:21 +09:00
Kohya S
66c03be45f
Fix invalid TE key names for SD1/2 LoRA
2023-07-08 09:56:38 +09:00
Kohya S
a751dc25d6
use CLIPTextModelWithProjection
2023-06-27 20:48:06 +09:00
Kohya S
747af145ed
add sdxl fine-tuning and LoRA
2023-06-26 08:07:24 +09:00
Kohya S
92e50133f8
Merge branch 'original-u-net' into dev
2023-06-17 21:57:08 +09:00
Kohya S
bb91a10b5f
fix to work with LyCORIS<0.1.6
2023-06-06 21:59:57 +09:00
Kohya S
5bec05e045
move max_norm to lora to avoid crashing in lycoris
2023-06-03 12:42:32 +09:00
Kohya S
0f0158ddaa
scale in rank dropout, check training in dropout
2023-06-02 07:29:59 +09:00
Kohya S
dde7807b00
add rank dropout/module dropout
2023-06-01 22:21:36 +09:00
ddPn08
e743ee5d5c
update diffusers to 1.16 | dylora
2023-06-01 20:39:30 +09:00
ddPn08
c8d209d36c
update diffusers to 1.16 | train_network
2023-06-01 20:39:26 +09:00
Kohya S
f8e8df5a04
fix crash gen script, change to network_dropout
2023-06-01 20:07:04 +09:00
AI-Casanova
9c7237157d
Dropout and Max Norm Regularization for LoRA training ( #545 )
...
* Instantiate max_norm
* minor
* Move to end of step
* argparse
* metadata
* phrasing
* Sqrt ratio and logging
* fix logging
* Dropout test
* Dropout Args
* Dropout changed to affect LoRA only
---------
Co-authored-by: Kohya S <52813779+kohya-ss@users.noreply.github.com >
2023-06-01 14:58:38 +09:00
琴动我心
f7a1868fc2
fix: support old LoRA without alpha, which raised "TypeError: argument of type 'int' is not iterable"
2023-05-22 17:15:51 +08:00
Kohya S
2767a0f9f2
common block lr args processing in create
2023-05-11 21:47:59 +09:00
Kohya S
af08c56ce0
remove unnecessary newline
2023-05-11 21:20:18 +09:00
Kohya S
968bbd2f47
Merge pull request #480 from yanhuifair/main
...
fix print "saving" and "epoch" in newline
2023-05-11 21:05:37 +09:00
Kohya S
fdbdb4748a
pre calc LoRA in generating
2023-05-07 09:57:54 +09:00
Fair
b08154dc36
fix print "saving" and "epoch" in newline
2023-05-07 02:51:01 +08:00
Isotr0py
e1143caf38
Fix DDP issues and Support DDP for all training scripts ( #448 )
...
* Fix DDP bugs
* Fix DDP bugs for finetune and db
* refactor model loader
* fix DDP network
* try to fix DDP network in train unet only
* remove unused DDP import
* refactor DDP transform
* refactor DDP transform
* fix sample images bugs
* change DDP transform location
* add autocast to train_db
* support DDP in XTI
* Clear DDP import
2023-05-03 10:37:47 +09:00
Kohya S
314a364f61
restore sd_model arg for backward compat
2023-04-19 21:11:12 +09:00
Kohya S
f770cd96c6
Merge pull request #392 from A2va/fix
...
Lora interrogator fixes
2023-04-19 20:28:27 +09:00
A2va
87163cff8b
Fix missing pretrained_model_name_or_path
2023-04-17 09:16:07 +02:00
Kohya S
92332eb96e
fix load_state_dict failed in dylora
2023-04-14 22:13:26 +09:00
Kohya S
2de9a51591
fix typos
2023-04-13 21:18:18 +09:00
Kohya S
9ff32fd4c0
fix parameters not being frozen
2023-04-13 21:14:20 +09:00
Kohya S
68e0767404
add comment about scaling
2023-04-12 23:40:10 +09:00
Kohya S
e09966024c
delete unnecessary lines
2023-04-12 23:16:47 +09:00
Kohya S
893c2fc08a
add DyLoRA (experimental)
2023-04-12 23:14:09 +09:00
A2va
683680e5c8
Fixes
2023-04-09 21:52:02 +02:00
Kohya S
5c020bed49
Add attention couple + regional LoRA
2023-04-06 08:11:54 +09:00
Kohya S
83c7e03d05
Fix network_weights not working in train_network
2023-04-03 22:45:28 +09:00
Kohya S
6134619998
Add block dim(rank) feature
2023-04-03 21:19:49 +09:00
Kohya S
3beddf341e
Support LR graphs for each block, base lr
2023-04-03 08:43:11 +09:00
Kohya S
c639cb7d5d
support older type hint
2023-04-02 16:18:04 +09:00
Kohya S
97e65bf93f
change 'stratify' to 'block', add en message
2023-04-02 16:10:09 +09:00
u-haru
19340d82e6
Combine params when per-layer learning rates are not used
2023-04-02 12:57:55 +09:00
u-haru
058e442072
Change number of layers (based on hako-mikan/sd-webui-lora-block-weight)
2023-04-02 04:02:34 +09:00