AI-Casanova
678fe003e3
Merge branch 'kohya-ss:main' into size-from-weights
2023-05-09 08:30:18 -05:00
Kohya S
09c719c926
add adaptive noise scale
2023-05-07 18:09:08 +09:00
AI-Casanova
76a2b14cdb
Instantiate size_from_weights
2023-05-06 20:06:02 +00:00
Fair
b08154dc36
fix: print "saving" and "epoch" on a new line
2023-05-07 02:51:01 +08:00
Kohya S
2127907dd3
refactor selection and logging for DAdaptation
2023-05-06 18:14:16 +09:00
青龍聖者@bdsqlsz
164a1978de
Support for more Dadaptation ( #455 )
...
* Update train_util.py to add DAdaptAdan and DAdaptSGD
* Update train_util.py for DAdaptAdam
* Update train_network.py for dadapt
* Update train_README-ja.md for DAdapt
* Update train_util.py for DAdapt
* Update train_network.py for DAdaptAdaGrad
* Update train_db.py for DAdapt
* Update fine_tune.py for DAdapt
* Update train_textual_inversion.py for DAdapt
* Update train_textual_inversion_XTI.py for DAdapt
2023-05-06 17:30:09 +09:00
ykume
69579668bb
Merge branch 'dev' of https://github.com/kohya-ss/sd-scripts into dev
2023-05-03 11:17:43 +09:00
Kohya S
2e688b7cd3
Merge pull request #471 from pamparamm/multires-noise
...
Multi-Resolution Noise
2023-05-03 11:17:21 +09:00
ykume
2fcbfec178
make transform_DDP more intuitive
2023-05-03 11:07:29 +09:00
Isotr0py
e1143caf38
Fix DDP issues and Support DDP for all training scripts ( #448 )
...
* Fix DDP bugs
* Fix DDP bugs for finetune and db
* refactor model loader
* fix DDP network
* try to fix DDP network in train unet only
* remove unused DDP import
* refactor DDP transform
* refactor DDP transform
* fix sample images bugs
* change DDP transform location
* add autocast to train_db
* support DDP in XTI
* Clear DDP import
2023-05-03 10:37:47 +09:00
Pam
b18d099291
Multi-Resolution Noise
2023-05-02 09:42:17 +05:00
Kohya S
74008ce487
add save_every_n_steps option
2023-04-24 23:22:24 +09:00
Plat
27ffd9fe3d
feat: support wandb logging
2023-04-20 01:41:12 +09:00
Kohya S
893c2fc08a
add DyLoRA (experimental)
2023-04-12 23:14:09 +09:00
Kohya S
2e9f7b5f91
cache latents to disk in dreambooth method
2023-04-12 23:10:39 +09:00
AI-Casanova
0d54609435
Merge branch 'kohya-ss:main' into weighted_captions
2023-04-07 14:55:40 -05:00
AI-Casanova
7527436549
Merge branch 'kohya-ss:main' into weighted_captions
2023-04-05 17:07:15 -05:00
Kohya S
541539a144
change method name, make repo private by default, etc.
2023-04-05 23:16:49 +09:00
Kohya S
74220bb52c
Merge pull request #348 from ddPn08/dev
...
Added a function to upload to Huggingface and resume from Huggingface.
2023-04-05 21:47:36 +09:00
Kohya S
76bac2c1c5
add backward compatibility
2023-04-04 08:27:11 +09:00
Kohya S
0fcdda7175
Merge pull request #373 from rockerBOO/meta-min_snr_gamma
...
Add min_snr_gamma to metadata
2023-04-04 07:57:50 +09:00
Kohya S
e4eb3e63e6
improve compatibility
2023-04-04 07:48:48 +09:00
rockerBOO
626d4b433a
Add min_snr_gamma to metadata
2023-04-03 12:38:20 -04:00
Kohya S
83c7e03d05
Fix network_weights not working in train_network
2023-04-03 22:45:28 +09:00
Kohya S
3beddf341e
Support LR graphs for each block, base lr
2023-04-03 08:43:11 +09:00
AI-Casanova
1892c82a60
Reinstantiate weighted captions after a necessary revert to Main
2023-04-02 19:43:34 +00:00
ddPn08
16ba1cec69
change async uploading to optional
2023-04-02 17:45:26 +09:00
ddPn08
b5c7937f8d
don't run when not needed
2023-04-02 17:39:21 +09:00
ddPn08
b5ff4e816f
resume from huggingface repository
2023-04-02 17:39:21 +09:00
ddPn08
a7d302e196
write a random seed to metadata
2023-04-02 17:39:20 +09:00
ddPn08
d42431d73a
Added feature to upload to huggingface
2023-04-02 17:39:10 +09:00
u-haru
41ecccb2a9
Merge branch 'kohya-ss:main' into feature/stratified_lr
2023-03-31 12:47:56 +09:00
Kohya S
8cecc676cf
Fix device issue in load_file, reduce vram usage
2023-03-31 09:05:51 +09:00
u-haru
4dacc52bde
implement stratified_lr
2023-03-31 00:39:35 +09:00
Kohya S
31069e1dc5
add comments about device for clarity
2023-03-30 21:44:40 +09:00
Kohya S
6c28dfb417
Merge pull request #332 from guaneec/ddp-lowram
...
Reduce peak RAM usage
2023-03-30 21:37:37 +09:00
Kohya S
4f70e5dca6
fix to work with num_workers=0
2023-03-28 19:42:47 +09:00
guaneec
3cdae0cbd2
Reduce peak RAM usage
2023-03-27 14:34:17 +08:00
Kohya S
6732df93e2
Merge branch 'dev' into min-SNR
2023-03-26 17:10:53 +09:00
Kohya S
4f42f759ea
Merge pull request #322 from u-haru/feature/token_warmup
...
Add an option to train while gradually increasing the number of tags; minor bug fix related to persistent_workers
2023-03-26 17:05:59 +09:00
u-haru
a4b34a9c3c
Remove blueprint_args_conflict since it is unnecessary; fix a bug where shuffle ran every time
2023-03-26 03:26:55 +09:00
u-haru
4dc1124f93
Support scripts other than LoRA as well
2023-03-26 02:19:55 +09:00
u-haru
9c80da6ac5
Merge branch 'feature/token_warmup' of https://github.com/u-haru/sd-scripts into feature/token_warmup
2023-03-26 01:45:15 +09:00
u-haru
292cdb8379
Fix a bug where epoch and step were not passed to the dataset
2023-03-26 01:44:25 +09:00
u-haru
5ec90990de
Fix a bug where epoch and step were not passed to the dataset
2023-03-26 01:41:24 +09:00
AI-Casanova
518a18aeff
(ACTUAL) Min-SNR Weighting Strategy: fixed SNR calculation to match the authors' implementation
2023-03-23 12:34:49 +00:00
u-haru
dbadc40ec2
Fix a bug where captions stopped changing when persistent_workers was enabled
2023-03-23 12:33:03 +09:00
u-haru
447c56bf50
Fix typo, change step to global_step, fix bugs
2023-03-23 09:53:14 +09:00
u-haru
a9b26b73e0
implement token warmup
2023-03-23 07:37:14 +09:00
AI-Casanova
64c923230e
Min-SNR Weighting Strategy: Refactored and added to all trainers
2023-03-22 01:27:29 +00:00