Commit Graph

1399 Commits

Author SHA1 Message Date
tsukimiya
9d4cf8b03b Merge remote-tracking branch 'origin/hotfix/max_train_steps' into hotfix/max_train_steps
# Conflicts:
#	train_network.py
2023-03-19 23:55:51 +09:00
tsukimiya
a167a592e2 Fixed an issue where max_train_steps was not set correctly when max_train_epochs was specified and gradient_accumulation_steps was set to 2 or more. 2023-03-19 23:54:38 +09:00
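The fix above concerns the interaction between max_train_epochs and gradient accumulation: one optimizer step consumes gradient_accumulation_steps data batches, so the per-epoch step count must be divided (and rounded up) accordingly. A minimal sketch of the corrected arithmetic, with an illustrative function name rather than the actual train_network.py code:

```python
import math

def compute_max_train_steps(max_train_epochs, batches_per_epoch,
                            gradient_accumulation_steps):
    # One optimizer step consumes `gradient_accumulation_steps` batches,
    # so fewer optimizer steps fit into each epoch; rounding up keeps a
    # partial final accumulation window as a full step.
    steps_per_epoch = math.ceil(batches_per_epoch / gradient_accumulation_steps)
    return max_train_epochs * steps_per_epoch

# 10 epochs x 100 batches per epoch, accumulating over 4 batches:
compute_max_train_steps(10, 100, 4)  # 250 optimizer steps, not 1000
```

Without the division, max_train_steps is computed as if every batch were an optimizer step, which overshoots the intended number of epochs.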
Kohya S
432353185c Update README.md v0.5.2 2023-03-19 22:36:46 +09:00
Kohya S
d526f1d3d3 Merge pull request #305 from kohya-ss/dev
config file, lr scheduler, weighted prompt for sample gen etc.
2023-03-19 22:34:15 +09:00
Kohya S
c219600ca0 update readme 2023-03-19 22:32:14 +09:00
Kohya S
de95431895 support win with diffusers, fix extra args eval 2023-03-19 22:09:36 +09:00
Kohya S
c86bf213d1 Merge pull request #290 from orenwang/main
fix exception on training model in diffusers format
2023-03-19 21:59:57 +09:00
Kohya S
48c1be34f3 Merge branch 'dev' into main 2023-03-19 21:58:41 +09:00
Kohya S
140b4fad43 remove default values from output config 2023-03-19 20:06:31 +09:00
Kohya S
1f7babd2c7 Fix lpwp to support sdv2 and clip skip 2023-03-19 11:10:17 +09:00
Kohya S
cfb19ad0da Merge pull request #288 from mio2333/main
sample images with weight and no length limit
2023-03-19 10:57:47 +09:00
Kohya S
1214760cea Merge branch 'dev' into main 2023-03-19 10:56:56 +09:00
Kohya S
64d85b2f51 fix num_processes, fix indent 2023-03-19 10:52:46 +09:00
Kohya S
8f08feb577 Merge pull request #271 from Isotr0py/dev
Add '--lr_scheduler_type' and '--lr_scheduler_args' arguments
2023-03-19 10:26:34 +09:00
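The merged PR above lets users name an arbitrary scheduler class and pass keyword arguments from the command line. A hedged sketch of how such options can be resolved (the function name and argument format are assumptions, not the repo's exact code):

```python
import ast
import importlib

def resolve_scheduler(scheduler_type, scheduler_args):
    # scheduler_type: fully qualified class name, e.g.
    #   "torch.optim.lr_scheduler.CosineAnnealingLR"
    # scheduler_args: "key=value" strings, e.g. ["T_max=100", "eta_min=1e-6"]
    module_name, _, class_name = scheduler_type.rpartition(".")
    scheduler_class = getattr(importlib.import_module(module_name), class_name)
    kwargs = {key: ast.literal_eval(value)
              for key, _, value in (arg.partition("=") for arg in scheduler_args)}
    return scheduler_class, kwargs

# The caller then builds: scheduler = scheduler_class(optimizer, **kwargs)
```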
Kohya S
ec7f9bab6c Merge branch 'dev' into dev 2023-03-19 10:25:22 +09:00
Kohya S
83e102c691 refactor config parse, feature to output config 2023-03-19 10:11:11 +09:00
Kohya S
c3f9eb10f1 format with black 2023-03-18 18:58:12 +09:00
Kohya S
563a4dc897 Merge pull request #241 from Linaqruf/main
Load training arguments from .yaml, and other small changes
2023-03-18 18:50:42 +09:00
orenwang
370ca9e8cd fix exception on training model in diffusers format 2023-03-13 14:32:43 +08:00
tsukimiya
5dad64b684 Fixed an issue where max_train_steps was not set correctly when max_train_epochs was specified and gradient_accumulation_steps was set to 2 or more. 2023-03-13 14:37:28 +09:00
mio
e24a43ae0b sample images with weight and no length limit 2023-03-12 16:08:31 +08:00
Linaqruf
44d4cfb453 feat: added function to load training config with .toml 2023-03-12 11:52:37 +07:00
Kohya S
7c1cf7f4ea Merge pull request #283 from kohya-ss/dev
fix device error
2023-03-11 08:05:30 +09:00
Kohya S
0b38e663fd remove unnecessary device change 2023-03-11 08:04:28 +09:00
Kohya S
8b25929765 fix device error 2023-03-11 08:03:02 +09:00
Kohya S
b80431de30 Merge pull request #278 from kohya-ss/dev
Dev
v0.5.1
2023-03-10 22:05:36 +09:00
Kohya S
b177460807 restore comment 2023-03-10 22:02:17 +09:00
Kohya S
c78c51c78f update documents 2023-03-10 21:59:25 +09:00
Kohya S
2652c9a66c Merge pull request #276 from mio2333/main
Append sys path for import_module
2023-03-10 21:43:32 +09:00
Kohya S
618592c52b npz check to use subset, add dadap warn close #274 2023-03-10 21:31:59 +09:00
Kohya S
75d1883da6 fix LoRA rank is limited to target dim 2023-03-10 21:12:15 +09:00
Kohya S
4ad8e75291 fix to work with dim>320 2023-03-10 21:10:22 +09:00
Kohya S
e355b5e1d3 Merge pull request #269 from rvhfxb/patch-2
Allow to delete images after getting latents
2023-03-10 20:56:11 +09:00
Isotr0py
e3b2bb5b80 Merge branch 'dev' into dev 2023-03-10 19:04:07 +08:00
Isotr0py
7544b38635 fix multi gpu 2023-03-10 18:45:53 +08:00
mio
68cd874bb6 Append sys path for import_module
This works better when the training script is not run from the current directory, which is reasonable since other projects use this repo as a subfolder, such as https://github.com/ddPn08/kohya-sd-scripts-webui. I cannot run the script without adding this.
2023-03-10 18:29:34 +08:00
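The sys.path change above is what lets importlib.import_module resolve sibling modules when the scripts run from another project's tree. A sketch of the pattern, using an illustrative helper rather than the actual line in the repo:

```python
import os
import sys

def add_script_dir_to_sys_path(script_file):
    # Append the directory containing `script_file` (typically __file__)
    # so sibling modules import correctly even when this repo is vendored
    # as a subfolder and the caller's working directory is elsewhere.
    script_dir = os.path.dirname(os.path.abspath(script_file))
    if script_dir not in sys.path:
        sys.path.append(script_dir)

# At the top of a training script: add_script_dir_to_sys_path(__file__),
# after which importlib.import_module("some_sibling_module") succeeds.
```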
Isotr0py
c4a596df9e replace unsafe eval() with ast 2023-03-10 13:44:16 +08:00
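Replacing eval() with the ast module matters because these values arrive as user-supplied command-line strings; ast.literal_eval accepts only Python literals and raises on anything executable. A minimal illustration (the helper name and fallback behavior are assumptions, not the repo's actual call site):

```python
import ast

def parse_value(text):
    # literal_eval parses numbers, strings, tuples, lists, dicts,
    # booleans and None, but refuses arbitrary expressions, so input
    # like '__import__("os").system(...)' raises instead of executing.
    try:
        return ast.literal_eval(text)
    except (ValueError, SyntaxError):
        return text  # keep non-literal input as a plain string

parse_value("1e-4")         # 0.0001
parse_value("[0.9, 0.99]")  # a real list, not a string
parse_value("cosine")       # falls back to the string "cosine"
```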
Kohya S
00a9d734d9 Merge pull request #247 from ddPn08/dev
fix for multi gpu training
2023-03-10 13:01:52 +09:00
Kohya S
458173da5e Merge branch 'dev' into dev 2023-03-10 13:00:49 +09:00
Kohya S
1932c31c66 Merge pull request #243 from mgz-dev/dynamic-dim-lora-resize
Enable resizing LoRA dim based on SV ratios
2023-03-10 12:59:39 +09:00
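The dynamic-resize PR above chooses a per-layer rank from the singular values of the LoRA weights instead of one fixed dim. A sketch of one plausible criterion — keeping the smallest rank that covers a target fraction of the spectrum's total mass; the PR's exact rule and threshold may differ:

```python
def rank_for_sv_ratio(singular_values, threshold=0.95):
    # singular_values: descending list from an SVD of the layer's weights.
    # Return the smallest rank whose cumulative sum reaches `threshold`
    # of the total, i.e. how many components carry most of the signal.
    total = sum(singular_values)
    running = 0.0
    for rank, sv in enumerate(singular_values, start=1):
        running += sv
        if running / total >= threshold:
            return rank
    return len(singular_values)

# A fast-decaying spectrum compresses to a much lower rank:
rank_for_sv_ratio([10.0, 5.0, 0.1, 0.05, 0.01])  # -> 2
```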
Kohya S
dd05d99efd Merge pull request #272 from kohya-ss/dev
support conv2d-3x3, update documents etc
2023-03-09 21:54:41 +09:00
Kohya S
cf2bc437ec update readme 2023-03-09 21:51:22 +09:00
Kohya S
aa317d4f57 Merge branch 'main' into dev 2023-03-09 20:56:54 +09:00
Kohya S
51249b1ba0 support conv2d 3x3 LoRA 2023-03-09 20:56:33 +09:00
Isotr0py
ab05be11d2 fix wrong typing 2023-03-09 19:35:06 +08:00
Kohya S
e7051d427c fix default conv alpha to 1 2023-03-09 20:26:14 +09:00
Kohya S
b885c6f9d2 disable annoying warning in CLIP loading 2023-03-09 20:25:21 +09:00
Kohya S
ad443e172a fix sample gen failure when templates are used 2023-03-09 20:24:53 +09:00
Isotr0py
eb68892ab1 add lr_scheduler_type etc 2023-03-09 16:51:22 +08:00
Kohya S
c4b4d1cb40 fix LoRA always expanded to Conv2d-3x3 2023-03-09 08:47:13 +09:00