Commit Graph

95 Commits

Author SHA1 Message Date
Symbiomatrix
9d678a6f41 Update resize_lora.py 2023-08-16 00:08:09 +03:00
Kohya S
bb91a10b5f fix to work with LyCORIS<0.1.6 2023-06-06 21:59:57 +09:00
Kohya S
5bec05e045 move max_norm to lora to avoid crashing in lycoris 2023-06-03 12:42:32 +09:00
Kohya S
0f0158ddaa scale in rank dropout, check training in dropout 2023-06-02 07:29:59 +09:00
Kohya S
dde7807b00 add rank dropout/module dropout 2023-06-01 22:21:36 +09:00
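Rank dropout, added in the commit above, zeroes individual rank channels of the LoRA down-projection output during training and rescales the survivors (the scaling arrives in the follow-up "scale in rank dropout" commit). A minimal numpy sketch of the idea; `rank_dropout` is a hypothetical helper, not the repository's PyTorch implementation:

```python
import numpy as np

def rank_dropout(lx, p, rng):
    """Zero each rank channel with probability p, rescale survivors.

    lx: (batch, rank) output of the LoRA down projection.
    Uses inverted-dropout scaling so expectation is unchanged.
    """
    mask = rng.random(lx.shape[1]) > p    # keep-mask over rank dims
    return lx * mask / (1.0 - p)

out = rank_dropout(np.ones((2, 8)), 0.5, np.random.default_rng(0))
```

Module dropout (also added here) instead skips the whole LoRA module for a step with some probability.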
Kohya S
f8e8df5a04 fix crash in gen script, change to network_dropout 2023-06-01 20:07:04 +09:00
AI-Casanova
9c7237157d Dropout and Max Norm Regularization for LoRA training (#545)
* Instantiate max_norm

* minor

* Move to end of step

* argparse

* metadata

* phrasing

* Sqrt ratio and logging

* fix logging

* Dropout test

* Dropout Args

* Dropout changed to affect LoRA only

---------

Co-authored-by: Kohya S <52813779+kohya-ss@users.noreply.github.com>
2023-06-01 14:58:38 +09:00
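The max norm regularization from this PR clips the combined LoRA delta after each optimizer step; per the "Sqrt ratio" commit, both factors are scaled by the square root of the ratio so their product shrinks by the full ratio. A hedged numpy sketch of the math only; `apply_max_norm` and its signature are assumptions, not the PR's PyTorch code:

```python
import numpy as np

def apply_max_norm(down, up, alpha, rank, max_norm):
    """Scale a LoRA pair in place so ||scale * up @ down|| <= max_norm.

    Scaling both factors by sqrt(ratio) shrinks the product by ratio.
    Returns the pre-clip norm for logging.
    """
    scale = alpha / rank
    norm = np.linalg.norm(up @ down) * scale
    if norm > max_norm:
        sqrt_ratio = (max_norm / norm) ** 0.5
        down *= sqrt_ratio
        up *= sqrt_ratio
    return down, up, norm

rng = np.random.default_rng(0)
down = rng.standard_normal((4, 16))   # (rank, in_features)
up = rng.standard_normal((16, 4))     # (out_features, rank)
down, up, pre_norm = apply_max_norm(down, up, alpha=4.0, rank=4, max_norm=1.0)
new_norm = np.linalg.norm(up @ down) * (4.0 / 4)
```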
琴动我心
f7a1868fc2 fix: support old LoRA without alpha, which raised "TypeError: argument of type 'int' is not iterable" 2023-05-22 17:15:51 +08:00
Kohya S
2767a0f9f2 common block lr args processing in create 2023-05-11 21:47:59 +09:00
Kohya S
af08c56ce0 remove unnecessary newline 2023-05-11 21:20:18 +09:00
Kohya S
968bbd2f47 Merge pull request #480 from yanhuifair/main
fix: print "saving" and "epoch" on a new line
2023-05-11 21:05:37 +09:00
Kohya S
fdbdb4748a pre-calculate LoRA in generation 2023-05-07 09:57:54 +09:00
Fair
b08154dc36 fix: print "saving" and "epoch" on a new line 2023-05-07 02:51:01 +08:00
Isotr0py
e1143caf38 Fix DDP issues and Support DDP for all training scripts (#448)
* Fix DDP bugs

* Fix DDP bugs for finetune and db

* refactor model loader

* fix DDP network

* try to fix DDP network in train unet only

* remove unused DDP import

* refactor DDP transform

* refactor DDP transform

* fix sample images bugs

* change DDP transform location

* add autocast to train_db

* support DDP in XTI

* Clear DDP import
2023-05-03 10:37:47 +09:00
Kohya S
314a364f61 restore sd_model arg for backward compat 2023-04-19 21:11:12 +09:00
Kohya S
f770cd96c6 Merge pull request #392 from A2va/fix
Lora interrogator fixes
2023-04-19 20:28:27 +09:00
A2va
87163cff8b Fix missing pretrained_model_name_or_path 2023-04-17 09:16:07 +02:00
Kohya S
92332eb96e fix load_state_dict failure in dylora 2023-04-14 22:13:26 +09:00
Kohya S
2de9a51591 fix typos 2023-04-13 21:18:18 +09:00
Kohya S
9ff32fd4c0 fix parameters not being frozen 2023-04-13 21:14:20 +09:00
Kohya S
68e0767404 add comment about scaling 2023-04-12 23:40:10 +09:00
Kohya S
e09966024c delete unnecessary lines 2023-04-12 23:16:47 +09:00
Kohya S
893c2fc08a add DyLoRA (experimental) 2023-04-12 23:14:09 +09:00
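DyLoRA (added above as experimental) trains a LoRA so it can later be truncated to any rank: each step, a random active rank is sampled and only that many rank channels participate in the forward pass. A minimal numpy sketch of the sampling idea under that assumption; `dylora_forward` is illustrative, not the repository's module:

```python
import numpy as np

def dylora_forward(x, down, up, rng):
    """DyLoRA-style forward: use a randomly sampled active rank b <= rank.

    down: (rank, in_features), up: (out_features, rank).
    Only the first b rank channels contribute this step.
    """
    rank = down.shape[0]
    b = int(rng.integers(1, rank + 1))   # sampled rank in [1, rank]
    return (x @ down[:b].T) @ up[:, :b].T

rng = np.random.default_rng(0)
x = np.ones((2, 16))
down = rng.standard_normal((4, 16))
up = rng.standard_normal((8, 4))
y = dylora_forward(x, down, up, rng)
```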
A2va
683680e5c8 Fixes 2023-04-09 21:52:02 +02:00
Kohya S
5c020bed49 Add attention couple + regional LoRA 2023-04-06 08:11:54 +09:00
Kohya S
83c7e03d05 Fix network_weights not working in train_network 2023-04-03 22:45:28 +09:00
Kohya S
6134619998 Add block dim(rank) feature 2023-04-03 21:19:49 +09:00
Kohya S
3beddf341e Support LR graphs for each block, base lr 2023-04-03 08:43:11 +09:00
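The block-wise LR work in this run of commits assigns different learning rates to different UNet blocks by splitting parameters into optimizer param groups. A hedged pure-Python sketch of the grouping step; `build_block_param_groups` and the block tags are hypothetical, and the repository matches blocks by index rather than by substring:

```python
def build_block_param_groups(named_params, block_lrs, base_lr):
    """Group (name, param) pairs into per-LR optimizer param groups.

    Names are assumed to embed a block tag such as
    'lora_unet_down_blocks_0_...'; unmatched params get base_lr.
    """
    groups = {}
    for name, param in named_params:
        lr = base_lr
        for tag, tag_lr in block_lrs.items():
            if tag in name:
                lr = tag_lr
                break
        groups.setdefault(lr, []).append(param)
    return [{"params": ps, "lr": lr} for lr, ps in groups.items()]

groups = build_block_param_groups(
    [("lora_unet_down_blocks_0_attn", "p0"),
     ("lora_unet_mid_block_attn", "p1"),
     ("lora_te_text_model", "p2")],
    {"down_blocks_0": 1e-4, "mid_block": 5e-5},
    base_lr=1e-3,
)
```

The resulting list is the shape a PyTorch optimizer accepts as its first argument.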
Kohya S
c639cb7d5d support older type hint 2023-04-02 16:18:04 +09:00
Kohya S
97e65bf93f change 'stratify' to 'block', add English message 2023-04-02 16:10:09 +09:00
u-haru
19340d82e6 Consolidate params when block-wise learning rates are not used 2023-04-02 12:57:55 +09:00
u-haru
058e442072 Change the number of layers (based on hako-mikan/sd-webui-lora-block-weight) 2023-04-02 04:02:34 +09:00
u-haru
786971d443 Merge branch 'dev' into feature/stratified_lr 2023-04-01 15:08:41 +09:00
Kohya S
770a56193e fix conv2d 3x3 not being merged 2023-04-01 09:17:37 +09:00
Kohya S
1cd07770a4 format by black 2023-04-01 09:13:47 +09:00
u-haru
3032a47af4 Change cosine to reversed sine 2023-03-31 01:42:57 +09:00
u-haru
1b75dbd4f2 Add _lr to argument names 2023-03-31 01:40:29 +09:00
u-haru
dade23a414 Rename to stratified_zero_threshold 2023-03-31 01:14:03 +09:00
u-haru
4dacc52bde implement stratified_lr 2023-03-31 00:39:35 +09:00
Kohya S
2d6faa9860 support LoRA merge in advance 2023-03-30 21:34:36 +09:00
Kohya S
bf3674c1db format by black 2023-03-29 21:23:27 +09:00
mgz-dev
c9b157b536 update resize_lora.py (fix out of bounds and index)
Fix error where index may go out of bounds when using certain dynamic parameters.

Fix index and rank issue (previously some parts of the code incorrectly used the 0-based Python index rather than the rank, which is off by one).
2023-03-25 19:56:14 -05:00
Kohya S
193674e16c fix to support dynamic rank/alpha 2023-03-21 21:59:51 +09:00
Robert Smieja
eb66e5ebac Extract parser setup to helper function
- Allows users who `import` the scripts to examine the parser programmatically
2023-03-20 00:06:47 -04:00
Kohya S
0b38e663fd remove unnecessary device change 2023-03-11 08:04:28 +09:00
Kohya S
8b25929765 fix device error 2023-03-11 08:03:02 +09:00
Kohya S
b177460807 restore comment 2023-03-10 22:02:17 +09:00
Kohya S
75d1883da6 fix LoRA rank being limited to target dim 2023-03-10 21:12:15 +09:00
Kohya S
4ad8e75291 fix to work with dim>320 2023-03-10 21:10:22 +09:00
Kohya S
1932c31c66 Merge pull request #243 from mgz-dev/dynamic-dim-lora-resize
Enable resizing of lora dim based on sv ratios
2023-03-10 12:59:39 +09:00
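The dynamic resize merged here re-factors each LoRA pair via SVD and keeps only singular values above a fraction of the largest one; the later mgz-dev fix stresses that the cutoff is a rank (a count), not a 0-based index. A hedged numpy sketch of the sv_ratio criterion; `resize_lora_by_sv_ratio` is illustrative and omits the script's alpha handling and other dynamic modes:

```python
import numpy as np

def resize_lora_by_sv_ratio(down, up, sv_ratio):
    """Re-factor up @ down at a smaller rank chosen from singular values.

    Keeps singular values s_i >= s_0 / sv_ratio; the new rank is the
    *count* of kept values (a rank, not a 0-based index).
    """
    U, S, Vh = np.linalg.svd(up @ down, full_matrices=False)
    new_rank = max(1, int(np.sum(S >= S[0] / sv_ratio)))
    sqrt_s = np.sqrt(S[:new_rank])
    new_up = U[:, :new_rank] * sqrt_s            # (out, new_rank)
    new_down = sqrt_s[:, None] * Vh[:new_rank]   # (new_rank, in)
    return new_down, new_up, new_rank

# toy pair whose product has singular values 10, 1, 0.01
up = np.eye(3)
down = np.diag([10.0, 1.0, 0.01])
new_down, new_up, new_rank = resize_lora_by_sv_ratio(down, up, sv_ratio=50.0)
```

Splitting sqrt(s) between both factors keeps the two matrices at comparable scale, mirroring how LoRA pairs are usually stored.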