Kohya S
ec2efe52e4
scale v-pred loss like noise pred
2023-06-03 10:52:22 +09:00
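Note: a minimal sketch of the scaling this commit refers to, assuming `all_snr` is a precomputed per-timestep SNR tensor and `loss` is the per-sample MSE; names are illustrative, not necessarily the repo's exact API.

```python
import torch

def scale_v_pred_loss_like_noise_pred(loss, timesteps, all_snr):
    # A v-prediction objective implicitly weights timesteps by (SNR + 1),
    # while epsilon prediction weights them by SNR; multiplying by
    # SNR / (SNR + 1) makes the v-pred loss follow the noise-pred weighting.
    snr_t = all_snr[timesteps]              # per-sample SNR, shape (batch,)
    snr_t = torch.clamp(snr_t, max=1000.0)  # avoid extreme weights at low noise
    return loss * (snr_t / (snr_t + 1))
```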
ddPn08
c8d209d36c
update diffusers to 0.16 | train_network
2023-06-01 20:39:26 +09:00
Kohya S
f8e8df5a04
fix crash in gen script, change to network_dropout
2023-06-01 20:07:04 +09:00
Kohya S
f4c9276336
add scaling to max norm
2023-06-01 19:46:17 +09:00
Kohya S
a5c38e5d5b
fix crash when max_norm is disabled
2023-06-01 19:32:22 +09:00
AI-Casanova
9c7237157d
Dropout and Max Norm Regularization for LoRA training (#545)
...
* Instantiate max_norm
* minor
* Move to end of step
* argparse
* metadata
* phrasing
* Sqrt ratio and logging
* fix logging
* Dropout test
* Dropout Args
* Dropout changed to affect LoRA only
---------
Co-authored-by: Kohya S <52813779+kohya-ss@users.noreply.github.com>
2023-06-01 14:58:38 +09:00
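Note: a hedged sketch of the max-norm half of the PR above: after each optimizer step, measure the norm of every merged LoRA delta and shrink both factor matrices by the square root of the correction ratio (matching the "Sqrt ratio and logging" bullet). Module and attribute names are illustrative.

```python
import torch

@torch.no_grad()
def apply_max_norm(lora_modules, max_norm_value):
    """Clamp each LoRA update's norm after the optimizer step (illustrative sketch)."""
    downscaled = 0
    for module in lora_modules:
        up = module.lora_up.weight.flatten(1)
        down = module.lora_down.weight.flatten(1)
        norm = (up @ down).norm() * module.scale    # norm of the merged delta-W
        if norm > max_norm_value:
            ratio = (max_norm_value / norm).sqrt()  # split the shrink across both factors
            module.lora_up.weight.mul_(ratio)
            module.lora_down.weight.mul_(ratio)
            downscaled += 1
    return downscaled  # useful for the logging mentioned in the PR
```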
TingTingin
5931948adb
Adjusted English grammar in logs to be clearer (#554)
...
* Update train_network.py
* Update train_network.py
* Update train_network.py
* Update train_network.py
* Update train_network.py
* Update train_network.py
2023-06-01 12:31:33 +09:00
Kohya S
6fbd526931
show multiplier for base weights in the console
2023-05-31 20:23:19 +09:00
Kohya S
c437dce056
change option name for merging network weights
2023-05-30 23:19:29 +09:00
Kohya S
fc00691898
enable multiple module weights
2023-05-30 23:10:41 +09:00
u-haru
dd8e17cb37
Add differential learning feature
2023-05-27 05:15:02 +09:00
Kohya S
3699a90645
add adaptive noise scale to metadata
2023-05-15 23:18:16 +09:00
Kohya S
7889a52f95
add callback for step start
2023-05-11 22:00:41 +09:00
Kohya S
2767a0f9f2
common block lr args processing in create
2023-05-11 21:47:59 +09:00
Kohya S
dfc56e9227
Merge branch 'main' into dev
2023-05-11 21:12:33 +09:00
Kohya S
968bbd2f47
Merge pull request #480 from yanhuifair/main
...
fix printing "saving" and "epoch" on a new line
2023-05-11 21:05:37 +09:00
AI-Casanova
678fe003e3
Merge branch 'kohya-ss:main' into size-from-weights
2023-05-09 08:30:18 -05:00
Kohya S
09c719c926
add adaptive noise scale
2023-05-07 18:09:08 +09:00
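Note: one plausible reading of "adaptive noise scale", sketched under the assumption that it grows or shrinks the noise offset with the mean of each latent; argument names are assumptions, not the repo's exact signature.

```python
import torch

def apply_noise_offset(latents, noise, noise_offset, adaptive_noise_scale=None):
    if adaptive_noise_scale is not None:
        # scale the offset with the per-sample, per-channel latent mean
        latent_mean = torch.abs(latents.mean(dim=(2, 3), keepdim=True))
        noise_offset = noise_offset + adaptive_noise_scale * latent_mean
    b, c = latents.shape[:2]
    return noise + noise_offset * torch.randn(b, c, 1, 1, device=latents.device)
```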
AI-Casanova
76a2b14cdb
Instantiate size_from_weights
2023-05-06 20:06:02 +00:00
Fair
b08154dc36
fix printing "saving" and "epoch" on a new line
2023-05-07 02:51:01 +08:00
Kohya S
2127907dd3
refactor selection and logging for DAdaptation
2023-05-06 18:14:16 +09:00
青龍聖者@bdsqlsz
164a1978de
Support for more D-Adaptation optimizers (#455)
...
* Update train_util.py to add DAdaptAdan and DAdaptSGD
* Update train_util.py for DAdaptAdam
* Update train_network.py for dadapt
* Update train_README-ja.md for DAdapt
* Update train_util.py for DAdapt
* Update train_network.py for DAdaptAdaGrad
* Update train_db.py for DAdapt
* Update fine_tune.py for DAdapt
* Update train_textual_inversion.py for DAdapt
* Update train_textual_inversion_XTI.py for DAdapt
2023-05-06 17:30:09 +09:00
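Note: the D-Adaptation optimizers added in the PR above estimate the step size themselves, so the configured learning rate is conventionally left at 1.0. A minimal sketch; the model and hyperparameters are illustrative.

```python
import torch
import dadaptation

model = torch.nn.Linear(16, 16)  # stand-in for the LoRA network's parameters
# D-Adaptation tunes the effective step size itself, so lr stays at 1.0
optimizer = dadaptation.DAdaptAdam(
    model.parameters(), lr=1.0, weight_decay=0.01, decouple=True
)
```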
ykume
69579668bb
Merge branch 'dev' of https://github.com/kohya-ss/sd-scripts into dev
2023-05-03 11:17:43 +09:00
Kohya S
2e688b7cd3
Merge pull request #471 from pamparamm/multires-noise
...
Multi-Resolution Noise
2023-05-03 11:17:21 +09:00
ykume
2fcbfec178
make transform_DDP more intuitive
2023-05-03 11:07:29 +09:00
Isotr0py
e1143caf38
Fix DDP issues and support DDP for all training scripts (#448)
...
* Fix DDP bugs
* Fix DDP bugs for finetune and db
* refactor model loader
* fix DDP network
* try to fix DDP network in train unet only
* remove unused DDP import
* refactor DDP transform
* refactor DDP transform
* fix sample images bugs
* change DDP transform location
* add autocast to train_db
* support DDP in XTI
* Clear DDP import
2023-05-03 10:37:47 +09:00
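Note: a minimal sketch of the DDP "transform" pattern the commits above refer to: unwrap DistributedDataParallel containers so the rest of the script can keep calling methods on the raw modules. The function name is illustrative.

```python
from torch.nn.parallel import DistributedDataParallel as DDP

def transform_if_ddp(*models):
    """Return the underlying module when a model has been wrapped in DDP."""
    unwrapped = tuple(m.module if isinstance(m, DDP) else m for m in models)
    return unwrapped if len(unwrapped) > 1 else unwrapped[0]
```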
Pam
b18d099291
Multi-Resolution Noise
2023-05-02 09:42:17 +05:00
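Note: a sketch of the technique behind this commit, following the well-known "pyramid noise" formulation (parameter names and defaults are assumptions): stack progressively lower-resolution Gaussian noise, discount each level, then renormalize.

```python
import torch
import torch.nn.functional as F

def multires_noise_like(latents, iterations=6, discount=0.8):
    b, c, h, w = latents.shape
    noise = torch.randn_like(latents)
    for i in range(1, iterations):
        r = 2 ** i                      # halve the resolution at each level
        if h // r == 0 or w // r == 0:
            break
        low = torch.randn(b, c, h // r, w // r, device=latents.device)
        noise = noise + F.interpolate(low, size=(h, w), mode="bilinear") * discount ** i
    return noise / noise.std()          # keep overall variance near 1
```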
Kohya S
74008ce487
add save_every_n_steps option
2023-04-24 23:22:24 +09:00
Plat
27ffd9fe3d
feat: support wandb logging
2023-04-20 01:41:12 +09:00
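Note: a minimal sketch of wandb logging driven through Accelerate, the tracker mechanism these scripts use; the project name and logged values are dummies.

```python
from accelerate import Accelerator

accelerator = Accelerator(log_with="wandb")
accelerator.init_trackers("lora-training", config={"learning_rate": 1e-4})
for step in range(3):
    accelerator.log({"loss": 0.1 * step}, step=step)  # dummy values
accelerator.end_training()
```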
Kohya S
893c2fc08a
add DyLoRA (experimental)
2023-04-12 23:14:09 +09:00
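Note: a hedged sketch of the DyLoRA idea, not the repo's implementation: train with a randomly truncated rank each step so that every sub-rank of the factor matrices remains usable on its own.

```python
import random
import torch

class DyLoRALinear(torch.nn.Module):
    """Sketch: sample a rank from {unit, 2*unit, ..., max_rank} per training step."""
    def __init__(self, in_dim, out_dim, max_rank=16, unit=4):
        super().__init__()
        self.down = torch.nn.Parameter(torch.randn(max_rank, in_dim) * 0.01)
        self.up = torch.nn.Parameter(torch.zeros(out_dim, max_rank))
        self.max_rank, self.unit = max_rank, unit

    def forward(self, x):
        r = (random.randrange(self.unit, self.max_rank + 1, self.unit)
             if self.training else self.max_rank)
        # use only the first r rows/columns of the factors
        return x @ self.down[:r].t() @ self.up[:, :r].t()
```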
Kohya S
2e9f7b5f91
cache latents to disk in dreambooth method
2023-04-12 23:10:39 +09:00
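Note: a minimal sketch of latent caching, assuming a diffusers `AutoencoderKL` as the VAE, an image tensor in [-1, 1], and the SD v1 scaling factor 0.18215; the file layout is illustrative.

```python
import numpy as np
import torch

@torch.no_grad()
def cache_latent_to_disk(vae, image, npz_path):
    # encode once and store the latent, so later epochs skip the VAE pass entirely
    latent = vae.encode(image.unsqueeze(0)).latent_dist.sample() * 0.18215
    np.savez(npz_path, latents=latent.squeeze(0).float().cpu().numpy())

def load_cached_latent(npz_path):
    return torch.from_numpy(np.load(npz_path)["latents"])
```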
AI-Casanova
0d54609435
Merge branch 'kohya-ss:main' into weighted_captions
2023-04-07 14:55:40 -05:00
AI-Casanova
7527436549
Merge branch 'kohya-ss:main' into weighted_captions
2023-04-05 17:07:15 -05:00
Kohya S
541539a144
change method name, make repo private by default, etc.
2023-04-05 23:16:49 +09:00
Kohya S
74220bb52c
Merge pull request #348 from ddPn08/dev
...
Added a function to upload to Hugging Face and resume from Hugging Face.
2023-04-05 21:47:36 +09:00
Kohya S
76bac2c1c5
add backward compatibility
2023-04-04 08:27:11 +09:00
Kohya S
0fcdda7175
Merge pull request #373 from rockerBOO/meta-min_snr_gamma
...
Add min_snr_gamma to metadata
2023-04-04 07:57:50 +09:00
Kohya S
e4eb3e63e6
improve compatibility
2023-04-04 07:48:48 +09:00
rockerBOO
626d4b433a
Add min_snr_gamma to metadata
2023-04-03 12:38:20 -04:00
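Note: for context on the value being recorded in metadata, Min-SNR-gamma (arXiv:2303.09556) clamps the per-timestep loss weight. A sketch for epsilon prediction, with `snr_t` assumed precomputed per sample:

```python
import torch

def apply_min_snr_weight(loss, snr_t, gamma):
    # weight each sample by min(SNR, gamma) / SNR, capping the influence
    # of high-SNR timesteps at gamma
    return loss * (torch.clamp(snr_t, max=gamma) / snr_t)
```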
Kohya S
83c7e03d05
Fix network_weights not working in train_network
2023-04-03 22:45:28 +09:00
Kohya S
3beddf341e
Support LR graphs for each block, base LR
2023-04-03 08:43:11 +09:00
AI-Casanova
1892c82a60
Reinstantiate weighted captions after a necessary revert to Main
2023-04-02 19:43:34 +00:00
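Note: a simplified reading of A1111-style caption weighting, which this feature builds on; the real parser also handles nesting and escapes, so treat this as a sketch.

```python
import re

WEIGHTED = re.compile(r"\((.+?):([\d.]+)\)")

def parse_weighted_caption(caption):
    """Split '(token:1.2)' spans into (text, weight) chunks; plain text gets 1.0."""
    chunks, pos = [], 0
    for m in WEIGHTED.finditer(caption):
        if m.start() > pos:
            chunks.append((caption[pos:m.start()], 1.0))
        chunks.append((m.group(1), float(m.group(2))))
        pos = m.end()
    if pos < len(caption):
        chunks.append((caption[pos:], 1.0))
    return chunks

print(parse_weighted_caption("a photo of (cat:1.2) on grass"))
```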
ddPn08
16ba1cec69
make async uploading optional
2023-04-02 17:45:26 +09:00
ddPn08
b5c7937f8d
don't run when not needed
2023-04-02 17:39:21 +09:00
ddPn08
b5ff4e816f
resume from huggingface repository
2023-04-02 17:39:21 +09:00
ddPn08
a7d302e196
write a random seed to metadata
2023-04-02 17:39:20 +09:00
ddPn08
d42431d73a
Added feature to upload to huggingface
2023-04-02 17:39:10 +09:00
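Note: a minimal sketch of uploading training output with huggingface_hub; the repo id and folder path are illustrative, and `private=True` mirrors the private-by-default change noted above.

```python
from huggingface_hub import HfApi

api = HfApi()
api.create_repo("your-name/sd-lora", repo_type="model", private=True, exist_ok=True)
api.upload_folder(repo_id="your-name/sd-lora", folder_path="./output", repo_type="model")
```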
u-haru
41ecccb2a9
Merge branch 'kohya-ss:main' into feature/stratified_lr
2023-03-31 12:47:56 +09:00
Kohya S
8cecc676cf
Fix device issue in load_file, reduce VRAM usage
2023-03-31 09:05:51 +09:00
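Note: the safetensors loader takes a device argument, which is likely what the fix above touches; a one-line sketch with an illustrative file name.

```python
from safetensors.torch import load_file

# passing the device explicitly loads tensors where they are needed,
# avoiding an extra copy and the memory spike the commit refers to
state_dict = load_file("network_weights.safetensors", device="cpu")
```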
u-haru
4dacc52bde
implement stratified_lr
2023-03-31 00:39:35 +09:00
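Note: a hedged sketch of the stratified (per-block) LR idea: build one optimizer param group per U-Net block so each block can train at its own rate. `block_index_of` is a hypothetical helper, not a function in the repo.

```python
def build_block_param_groups(lora_modules, block_lrs, default_lr):
    """One optimizer param group per U-Net block (illustrative sketch).

    block_index_of is a hypothetical name-to-block mapping.
    """
    groups = []
    for module in lora_modules:
        idx = block_index_of(module.lora_name)
        groups.append({"params": module.parameters(),
                       "lr": block_lrs.get(idx, default_lr)})
    return groups
```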