Commit Graph

139 Commits

Author SHA1 Message Date
Kohya S
893c2fc08a add DyLoRA (experimental) 2023-04-12 23:14:09 +09:00
Kohya S
2e9f7b5f91 cache latents to disk in dreambooth method 2023-04-12 23:10:39 +09:00
AI-Casanova
0d54609435 Merge branch 'kohya-ss:main' into weighted_captions 2023-04-07 14:55:40 -05:00
AI-Casanova
7527436549 Merge branch 'kohya-ss:main' into weighted_captions 2023-04-05 17:07:15 -05:00
Kohya S
541539a144 change method name, repo is private in default etc 2023-04-05 23:16:49 +09:00
Kohya S
74220bb52c Merge pull request #348 from ddPn08/dev
Added a function to upload to Huggingface and resume from Huggingface.
2023-04-05 21:47:36 +09:00
Kohya S
76bac2c1c5 add backward compatibility 2023-04-04 08:27:11 +09:00
Kohya S
0fcdda7175 Merge pull request #373 from rockerBOO/meta-min_snr_gamma
Add min_snr_gamma to metadata
2023-04-04 07:57:50 +09:00
Kohya S
e4eb3e63e6 improve compatibility 2023-04-04 07:48:48 +09:00
rockerBOO
626d4b433a Add min_snr_gamma to metadata 2023-04-03 12:38:20 -04:00
Kohya S
83c7e03d05 Fix network_weights not working in train_network 2023-04-03 22:45:28 +09:00
Kohya S
3beddf341e Support LR graphs for each block, base lr 2023-04-03 08:43:11 +09:00
AI-Casanova
1892c82a60 Reinstantiate weighted captions after a necessary revert to Main 2023-04-02 19:43:34 +00:00
ddPn08
16ba1cec69 change async uploading to optional 2023-04-02 17:45:26 +09:00
ddPn08
b5c7937f8d don't run when not needed 2023-04-02 17:39:21 +09:00
ddPn08
b5ff4e816f resume from huggingface repository 2023-04-02 17:39:21 +09:00
ddPn08
a7d302e196 write a random seed to metadata 2023-04-02 17:39:20 +09:00
ddPn08
d42431d73a Added feature to upload to huggingface 2023-04-02 17:39:10 +09:00
u-haru
41ecccb2a9 Merge branch 'kohya-ss:main' into feature/stratified_lr 2023-03-31 12:47:56 +09:00
Kohya S
8cecc676cf Fix device issue in load_file, reduce vram usage 2023-03-31 09:05:51 +09:00
u-haru
4dacc52bde implement stratified_lr 2023-03-31 00:39:35 +09:00
Kohya S
31069e1dc5 add comments about device for clarity 2023-03-30 21:44:40 +09:00
Kohya S
6c28dfb417 Merge pull request #332 from guaneec/ddp-lowram
Reduce peak RAM usage
2023-03-30 21:37:37 +09:00
Kohya S
4f70e5dca6 fix to work with num_workers=0 2023-03-28 19:42:47 +09:00
guaneec
3cdae0cbd2 Reduce peak RAM usage 2023-03-27 14:34:17 +08:00
Kohya S
6732df93e2 Merge branch 'dev' into min-SNR 2023-03-26 17:10:53 +09:00
Kohya S
4f42f759ea Merge pull request #322 from u-haru/feature/token_warmup
Add an option to train while gradually increasing the number of tags; minor bug fix related to persistent_workers
2023-03-26 17:05:59 +09:00
u-haru
a4b34a9c3c Remove blueprint_args_conflict as it is unnecessary; fix bug where shuffle ran every time 2023-03-26 03:26:55 +09:00
u-haru
4dc1124f93 Support methods other than LoRA as well 2023-03-26 02:19:55 +09:00
u-haru
9c80da6ac5 Merge branch 'feature/token_warmup' of https://github.com/u-haru/sd-scripts into feature/token_warmup 2023-03-26 01:45:15 +09:00
u-haru
292cdb8379 Fix bug where epoch and step were not passed to the dataset 2023-03-26 01:44:25 +09:00
u-haru
5ec90990de Fix bug where epoch and step were not passed to the dataset 2023-03-26 01:41:24 +09:00
AI-Casanova
518a18aeff (ACTUAL) Min-SNR Weighting Strategy: Fixed SNR calculation to match the authors' implementation 2023-03-23 12:34:49 +00:00
u-haru
dbadc40ec2 Fix bug where captions stopped changing when persistent_workers was enabled 2023-03-23 12:33:03 +09:00
u-haru
447c56bf50 Fix typos, change step to global_step, bug fixes 2023-03-23 09:53:14 +09:00
u-haru
a9b26b73e0 implement token warmup 2023-03-23 07:37:14 +09:00
AI-Casanova
64c923230e Min-SNR Weighting Strategy: Refactored and added to all trainers 2023-03-22 01:27:29 +00:00
AI-Casanova
795a6bd2d8 Merge branch 'kohya-ss:main' into min-SNR 2023-03-21 13:19:15 -05:00
Kohya S
1645698ec0 Merge pull request #306 from robertsmieja/main
Extract parser setup to helper function
2023-03-21 21:09:23 +09:00
Kohya S
5aa5a07260 Merge pull request #292 from tsukimiya/hotfix/max_train_steps
Fix: simultaneous use of gradient_accumulation_steps and max_train_epochs
2023-03-21 21:02:29 +09:00
Kohya S
1816ac3271 add vae_batch_size option for faster caching 2023-03-21 18:15:57 +09:00
AI-Casanova
a265225972 Min-SNR Weighting Strategy 2023-03-20 22:51:38 +00:00
Robert Smieja
eb66e5ebac Extract parser setup to helper function
- Allows users who `import` the scripts to examine the parser programmatically
2023-03-20 00:06:47 -04:00
tsukimiya
a167a592e2 Fixed an issue where max_train_steps was not set correctly when max_train_epochs was specified and gradient_accumulation_steps was 2 or more 2023-03-19 23:54:38 +09:00
Kohya S
ec7f9bab6c Merge branch 'dev' into dev 2023-03-19 10:25:22 +09:00
Kohya S
83e102c691 refactor config parse, feature to output config 2023-03-19 10:11:11 +09:00
Kohya S
c3f9eb10f1 format with black 2023-03-18 18:58:12 +09:00
Linaqruf
44d4cfb453 feat: added function to load training config with .toml 2023-03-12 11:52:37 +07:00
Kohya S
2652c9a66c Merge pull request #276 from mio2333/main
Append sys path for import_module
2023-03-10 21:43:32 +09:00
Isotr0py
e3b2bb5b80 Merge branch 'dev' into dev 2023-03-10 19:04:07 +08:00