Kohya S | 4f42f759ea | Merge pull request #322 from u-haru/feature/token_warmup: Add an option to train while gradually increasing the number of tags; minor bug fix related to persistent_workers | 2023-03-26 17:05:59 +09:00
u-haru | a4b34a9c3c | Remove blueprint_args_conflict as it is unnecessary; fix a bug where shuffle was performed every time | 2023-03-26 03:26:55 +09:00
u-haru | 5a3d564a30 | Remove print statements | 2023-03-26 02:26:08 +09:00
u-haru | 4dc1124f93 | Support scripts other than LoRA as well | 2023-03-26 02:19:55 +09:00
u-haru | 9c80da6ac5 | Merge branch 'feature/token_warmup' of https://github.com/u-haru/sd-scripts into feature/token_warmup | 2023-03-26 01:45:15 +09:00
u-haru | 292cdb8379 | Fix a bug where epoch and step were not passed to the dataset | 2023-03-26 01:44:25 +09:00
u-haru | 5ec90990de | Fix a bug where epoch and step were not passed to the dataset | 2023-03-26 01:41:24 +09:00
Kohya S | e203270e31 | support TI embeds trained by WebUI(?) | 2023-03-24 20:46:42 +09:00
Kohya S | b2c5b96f2a | format by black | 2023-03-24 20:19:05 +09:00
u-haru | 1b89b2a10e | Change to truncate tags before shuffling | 2023-03-24 13:44:30 +09:00
u-haru | 143c26e552 | Change to disable the persistant_data_loader side on conflict | 2023-03-24 13:08:56 +09:00
u-haru | dbadc40ec2 | Fix a bug where captions stopped changing when persistent_workers was enabled | 2023-03-23 12:33:03 +09:00
u-haru | 447c56bf50 | Fix typos, change step to global_step, fix bugs | 2023-03-23 09:53:14 +09:00
u-haru | a9b26b73e0 | implement token warmup | 2023-03-23 07:37:14 +09:00
Kohya S | aee343a9ee | Merge pull request #310 from kohya-ss/dev: faster latents caching etc. | 2023-03-21 22:19:26 +09:00
Kohya S | 2c5949c155 | update readme | 2023-03-21 22:17:20 +09:00
Kohya S | 193674e16c | fix to support dynamic rank/alpha | 2023-03-21 21:59:51 +09:00
Kohya S | 4f92b6266c | fix do not starting script | 2023-03-21 21:29:10 +09:00
Kohya S | 2d86f63e15 | update steps calc with max_train_epochs | 2023-03-21 21:21:12 +09:00
Kohya S | 88751f58f6 | Merge branch 'dev' of https://github.com/kohya-ss/sd-scripts into dev | 2023-03-21 21:10:44 +09:00
Kohya S | 7b324bcc3b | support extensions of image files with uppercases | 2023-03-21 21:10:34 +09:00
Kohya S | 1645698ec0 | Merge pull request #306 from robertsmieja/main: Extract parser setup to helper function | 2023-03-21 21:09:23 +09:00
Kohya S | 5aa5a07260 | Merge pull request #292 from tsukimiya/hotfix/max_train_steps: Fix simultaneous use of gradient_accumulation_steps and max_train_epochs | 2023-03-21 21:02:29 +09:00
Kohya S | 6d9f3bc0b2 | fix different reso in batch | 2023-03-21 18:33:46 +09:00
Kohya S | 1816ac3271 | add vae_batch_size option for faster caching | 2023-03-21 18:15:57 +09:00
Kohya S | cca3804503 | Merge branch 'main' into dev | 2023-03-21 15:05:41 +09:00
Kohya S | cb08fa0379 | fix no npz with full path | 2023-03-21 15:05:25 +09:00
Robert Smieja | eb66e5ebac | Extract parser setup to helper function (allows users who `import` the scripts to examine the parser programmatically) | 2023-03-20 00:06:47 -04:00
tsukimiya | 9d4cf8b03b | Merge remote-tracking branch 'origin/hotfix/max_train_steps' into hotfix/max_train_steps (conflicts: train_network.py) | 2023-03-19 23:55:51 +09:00
tsukimiya | a167a592e2 | Fixed an issue where max_train_steps was not set correctly when max_train_epochs was specified and gradient_accumulation_steps was set to 2 or more | 2023-03-19 23:54:38 +09:00
Kohya S | 432353185c | Update README.md (v0.5.2) | 2023-03-19 22:36:46 +09:00
Kohya S | d526f1d3d3 | Merge pull request #305 from kohya-ss/dev: config file, lr scheduler, weighted prompt for sample gen etc. | 2023-03-19 22:34:15 +09:00
Kohya S | c219600ca0 | update readme | 2023-03-19 22:32:14 +09:00
Kohya S | de95431895 | support win with diffusers, fix extra args eval | 2023-03-19 22:09:36 +09:00
Kohya S | c86bf213d1 | Merge pull request #290 from orenwang/main: fix exception on training model in diffusers format | 2023-03-19 21:59:57 +09:00
Kohya S | 48c1be34f3 | Merge branch 'dev' into main | 2023-03-19 21:58:41 +09:00
Kohya S | 140b4fad43 | remove default values from output config | 2023-03-19 20:06:31 +09:00
Kohya S | 1f7babd2c7 | Fix lpwp to support sdv2 and clip skip | 2023-03-19 11:10:17 +09:00
Kohya S | cfb19ad0da | Merge pull request #288 from mio2333/main: sample images with weight and no length limit | 2023-03-19 10:57:47 +09:00
Kohya S | 1214760cea | Merge branch 'dev' into main | 2023-03-19 10:56:56 +09:00
Kohya S | 64d85b2f51 | fix num_processes, fix indent | 2023-03-19 10:52:46 +09:00
Kohya S | 8f08feb577 | Merge pull request #271 from Isotr0py/dev: Add '--lr_scheduler_type' and '--lr_scheduler_args' arguments | 2023-03-19 10:26:34 +09:00
Kohya S | ec7f9bab6c | Merge branch 'dev' into dev | 2023-03-19 10:25:22 +09:00
Kohya S | 83e102c691 | refactor config parse, feature to output config | 2023-03-19 10:11:11 +09:00
Kohya S | c3f9eb10f1 | format with black | 2023-03-18 18:58:12 +09:00
Kohya S | 563a4dc897 | Merge pull request #241 from Linaqruf/main: Load training arguments from .yaml, and other small changes | 2023-03-18 18:50:42 +09:00
orenwang | 370ca9e8cd | fix exception on training model in diffusers format | 2023-03-13 14:32:43 +08:00
tsukimiya | 5dad64b684 | Fixed an issue where max_train_steps was not set correctly when max_train_epochs was specified and gradient_accumulation_steps was set to 2 or more | 2023-03-13 14:37:28 +09:00
mio | e24a43ae0b | sample images with weight and no length limit | 2023-03-12 16:08:31 +08:00
Linaqruf | 44d4cfb453 | feat: added function to load training config with .toml | 2023-03-12 11:52:37 +07:00