Kohya S
74220bb52c
Merge pull request #348 from ddPn08/dev
Added a function to upload to Huggingface and resume from Huggingface.
2023-04-05 21:47:36 +09:00
Kohya S
76bac2c1c5
add backward compatibility
2023-04-04 08:27:11 +09:00
Kohya S
0fcdda7175
Merge pull request #373 from rockerBOO/meta-min_snr_gamma
Add min_snr_gamma to metadata
2023-04-04 07:57:50 +09:00
Kohya S
e4eb3e63e6
improve compatibility
2023-04-04 07:48:48 +09:00
rockerBOO
626d4b433a
Add min_snr_gamma to metadata
2023-04-03 12:38:20 -04:00
Kohya S
83c7e03d05
Fix network_weights not working in train_network
2023-04-03 22:45:28 +09:00
Kohya S
3beddf341e
Support LR graphs for each block, base LR
2023-04-03 08:43:11 +09:00
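Block-wise LR support like the commit above amounts to grouping parameters into optimizer param groups by block index. A minimal sketch of the idea; the naming scheme and helper name are hypothetical, not the repository's actual code:

```python
def build_block_param_groups(named_params, block_lrs, base_lr):
    """Group parameters by block index, assigning each block its own LR.
    Blocks without an explicit LR fall back to base_lr.
    (Hypothetical naming scheme: "lora_unet_block_{i}_...")"""
    groups = {}
    for name, param in named_params:
        block = None
        if "_block_" in name:
            block = name.split("_block_")[1].split("_")[0]
        lr = block_lrs.get(block, base_lr)
        groups.setdefault(lr, []).append(param)
    # Optimizers such as torch.optim.AdamW accept this list of dicts directly.
    return [{"params": ps, "lr": lr} for lr, ps in groups.items()]
```

The returned list can be passed straight to an optimizer constructor, giving each block its own learning rate while untagged parameters keep the base LR.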
ddPn08
16ba1cec69
make async uploading optional
2023-04-02 17:45:26 +09:00
ddPn08
b5c7937f8d
don't run when not needed
2023-04-02 17:39:21 +09:00
ddPn08
b5ff4e816f
resume from huggingface repository
2023-04-02 17:39:21 +09:00
ddPn08
a7d302e196
write a random seed to metadata
2023-04-02 17:39:20 +09:00
ddPn08
d42431d73a
Added feature to upload to huggingface
2023-04-02 17:39:10 +09:00
u-haru
41ecccb2a9
Merge branch 'kohya-ss:main' into feature/stratified_lr
2023-03-31 12:47:56 +09:00
Kohya S
8cecc676cf
Fix device issue in load_file, reduce vram usage
2023-03-31 09:05:51 +09:00
u-haru
4dacc52bde
implement stratified_lr
2023-03-31 00:39:35 +09:00
Kohya S
31069e1dc5
add comments about device for clarity
2023-03-30 21:44:40 +09:00
Kohya S
6c28dfb417
Merge pull request #332 from guaneec/ddp-lowram
Reduce peak RAM usage
2023-03-30 21:37:37 +09:00
Kohya S
4f70e5dca6
fix to work with num_workers=0
2023-03-28 19:42:47 +09:00
guaneec
3cdae0cbd2
Reduce peak RAM usage
2023-03-27 14:34:17 +08:00
Kohya S
6732df93e2
Merge branch 'dev' into min-SNR
2023-03-26 17:10:53 +09:00
Kohya S
4f42f759ea
Merge pull request #322 from u-haru/feature/token_warmup
Add an option to train while gradually increasing the number of tags; fix a minor bug related to persistent_workers
2023-03-26 17:05:59 +09:00
u-haru
a4b34a9c3c
Remove blueprint_args_conflict as it is unnecessary; fix a bug where shuffle ran every time
2023-03-26 03:26:55 +09:00
u-haru
4dc1124f93
Support non-LoRA targets as well
2023-03-26 02:19:55 +09:00
u-haru
9c80da6ac5
Merge branch 'feature/token_warmup' of https://github.com/u-haru/sd-scripts into feature/token_warmup
2023-03-26 01:45:15 +09:00
u-haru
292cdb8379
Fix a bug where epoch and step were not passed to the dataset
2023-03-26 01:44:25 +09:00
u-haru
5ec90990de
Fix a bug where epoch and step were not passed to the dataset
2023-03-26 01:41:24 +09:00
AI-Casanova
518a18aeff
(ACTUAL) Min-SNR Weighting Strategy: Fixed SNR calculation to match the authors' implementation
2023-03-23 12:34:49 +00:00
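The Min-SNR weighting strategy referenced in these commits scales each sample's loss by min(SNR(t), γ)/SNR(t), down-weighting easy low-noise timesteps. A minimal sketch, assuming the diffusion schedule is exposed as a list of cumulative alphas; the function name is illustrative:

```python
def min_snr_weights(alphas_cumprod, timesteps, gamma):
    """Per-sample loss weights from the Min-SNR strategy:
    weight = min(SNR(t), gamma) / SNR(t), with SNR(t) = alpha_bar_t / (1 - alpha_bar_t)."""
    weights = []
    for t in timesteps:
        alpha_bar = alphas_cumprod[t]
        snr = alpha_bar / (1.0 - alpha_bar)  # signal-to-noise ratio at step t
        weights.append(min(snr, gamma) / snr)
    return weights
```

In a real trainer the same formula would be computed on tensors and multiplied into the per-sample MSE before reduction; min_snr_gamma is the γ above.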
u-haru
dbadc40ec2
Fix a bug where captions stopped changing when persistent_workers was enabled
2023-03-23 12:33:03 +09:00
u-haru
447c56bf50
Fix typos, change step to global_step, fix bugs
2023-03-23 09:53:14 +09:00
u-haru
a9b26b73e0
implement token warmup
2023-03-23 07:37:14 +09:00
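Token warmup trains on a growing prefix of each caption's tags so the model sees only the leading tags early on. A hypothetical sketch of the idea; the function name, linear schedule, and comma-tag assumption are illustrative, not the actual implementation:

```python
def warmup_tokens(caption, global_step, warmup_steps, min_tags=1):
    """Keep only the first k comma-separated tags, where k grows linearly
    from min_tags up to the full tag count over warmup_steps."""
    tags = [t.strip() for t in caption.split(",")]
    if warmup_steps <= 0 or global_step >= warmup_steps:
        return ", ".join(tags)  # warmup finished: use the whole caption
    frac = global_step / warmup_steps
    k = max(min_tags, int(len(tags) * frac))
    return ", ".join(tags[:k])
```

This is also where the global_step fix above matters: the schedule must advance with the true optimizer step, not a per-worker counter.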
AI-Casanova
64c923230e
Min-SNR Weighting Strategy: Refactored and added to all trainers
2023-03-22 01:27:29 +00:00
AI-Casanova
795a6bd2d8
Merge branch 'kohya-ss:main' into min-SNR
2023-03-21 13:19:15 -05:00
Kohya S
1645698ec0
Merge pull request #306 from robertsmieja/main
Extract parser setup to helper function
2023-03-21 21:09:23 +09:00
Kohya S
5aa5a07260
Merge pull request #292 from tsukimiya/hotfix/max_train_steps
Fix: simultaneous use of gradient_accumulation_steps and max_train_epochs
2023-03-21 21:02:29 +09:00
Kohya S
1816ac3271
add vae_batch_size option for faster caching
2023-03-21 18:15:57 +09:00
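The vae_batch_size option speeds up latent caching by encoding several images per VAE forward pass instead of one at a time. A minimal chunking helper illustrating the idea (hypothetical, not the repository's code):

```python
def iter_batches(items, batch_size):
    """Yield successive chunks of items so latents can be cached
    batch_size images per VAE forward pass."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]
```

Each yielded chunk would be stacked into one tensor and encoded in a single call, trading VRAM for caching speed.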
AI-Casanova
a265225972
Min-SNR Weighting Strategy
2023-03-20 22:51:38 +00:00
Robert Smieja
eb66e5ebac
Extract parser setup to helper function
- Allows users who `import` the scripts to examine the parser programmatically
2023-03-20 00:06:47 -04:00
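Extracting parser setup into a helper lets importing code inspect or extend the argument set without triggering parsing. A minimal sketch of the pattern; the specific arguments shown are illustrative:

```python
import argparse

def setup_parser():
    """Build the argument parser without calling parse_args, so scripts
    that import this module can examine the parser programmatically."""
    parser = argparse.ArgumentParser(description="training script")
    parser.add_argument("--learning_rate", type=float, default=1e-4)
    parser.add_argument("--max_train_epochs", type=int, default=None)
    return parser
```

The script itself calls `setup_parser().parse_args()` at the bottom, while importers call `setup_parser()` alone and read the registered actions.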
tsukimiya
a167a592e2
Fixed an issue where max_train_steps was not set correctly when max_train_epochs was specified and gradient_accumulation_steps was set to 2 or more.
2023-03-19 23:54:38 +09:00
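With gradient accumulation, each optimizer step consumes several batches, so the total step count must be divided by the accumulation factor. A sketch of the corrected arithmetic; the function name is hypothetical:

```python
import math

def compute_max_train_steps(num_batches, epochs, grad_accum_steps):
    """Optimizer steps per epoch shrink by the accumulation factor,
    so max_train_steps must shrink by it too."""
    steps_per_epoch = math.ceil(num_batches / grad_accum_steps)
    return steps_per_epoch * epochs
```

The bug was computing steps from raw batch counts, which overstated max_train_steps whenever gradient_accumulation_steps was 2 or more.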
Kohya S
ec7f9bab6c
Merge branch 'dev' into dev
2023-03-19 10:25:22 +09:00
Kohya S
83e102c691
refactor config parse, feature to output config
2023-03-19 10:11:11 +09:00
Kohya S
c3f9eb10f1
format with black
2023-03-18 18:58:12 +09:00
Linaqruf
44d4cfb453
feat: added function to load training config with .toml
2023-03-12 11:52:37 +07:00
Kohya S
2652c9a66c
Merge pull request #276 from mio2333/main
Append sys path for import_module
2023-03-10 21:43:32 +09:00
Isotr0py
e3b2bb5b80
Merge branch 'dev' into dev
2023-03-10 19:04:07 +08:00
Isotr0py
7544b38635
fix multi gpu
2023-03-10 18:45:53 +08:00
mio
68cd874bb6
Append sys path for import_module
This helps when the training script is not run from the current directory. That is a reasonable use case, since other projects such as https://github.com/ddPn08/kohya-sd-scripts-webui use this repository as a subfolder. I could not run the script without adding this.
2023-03-10 18:29:34 +08:00
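Appending the script's own directory to sys.path keeps importlib.import_module working when the repository is vendored as a subfolder and launched from another working directory. A sketch of the pattern; the helper name is hypothetical:

```python
import importlib
import os
import sys

def import_network_module(module_name, script_dir=None):
    """Import a module by name, first making sure the script's own
    directory is on sys.path (needed when run from another cwd)."""
    if script_dir is None:
        script_dir = os.path.dirname(os.path.abspath(__file__))
    if script_dir not in sys.path:
        sys.path.append(script_dir)
    return importlib.import_module(module_name)
```

Without the append, module names resolved relative to the scripts (e.g. network modules) fail to import when the caller's cwd is the parent project.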
Kohya S
458173da5e
Merge branch 'dev' into dev
2023-03-10 13:00:49 +09:00
Isotr0py
eb68892ab1
add lr_scheduler_type etc
2023-03-09 16:51:22 +08:00
Kohya S
3ce846525b
set minimum metadata even with no_metadata
2023-03-08 21:19:12 +09:00
ddPn08
87846c043f
fix for multi gpu training
2023-03-08 09:46:37 +09:00