kohya-ss
bfc3a65acd
fix caching of latents/text encoder outputs to work
2024-10-13 19:08:16 +09:00
kohya-ss
2244cf5b83
load images in parallel when caching latents
2024-10-13 18:22:19 +09:00
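The commit above parallelizes image loading during latents caching; as a rough, hypothetical sketch of that general technique (not the repository's actual code), a thread pool can overlap disk I/O before the GPU encode step:

    # Sketch only: load a batch of images with a thread pool so disk I/O
    # overlaps; the load_image helper and worker count are illustrative.
    from concurrent.futures import ThreadPoolExecutor
    from PIL import Image
    import numpy as np

    def load_image(path):
        # decode to RGB and return an HWC uint8 array
        with Image.open(path) as im:
            return np.array(im.convert("RGB"))

    def load_images_parallel(paths, max_workers=8):
        with ThreadPoolExecutor(max_workers=max_workers) as pool:
            # map() preserves input order, so results stay aligned with paths
            return list(pool.map(load_image, paths))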
kohya-ss
c80c304779
Refactor caching in train scripts
2024-10-12 20:18:41 +09:00
kohya-ss
ff4083b910
Merge branch 'sd3' into multi-gpu-caching
2024-10-12 16:39:36 +09:00
Kohya S.
d005652d03
Merge pull request #1686 from Akegarasu/sd3
...
fix: fix some distributed training errors on Windows
2024-10-12 14:33:02 +09:00
Akegarasu
9f4dac5731
torch 2.4
2024-10-10 14:08:55 +08:00
Akegarasu
3de42b6edb
fix: distributed training on Windows
2024-10-10 14:03:59 +08:00
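NCCL has no Windows support, so distributed-training fixes on Windows usually come down to selecting the gloo backend; the snippet below is only a generic illustration of that choice, not the change in this commit:

    # Sketch: platform-dependent backend choice for torch.distributed.
    # NCCL is Linux-only, so Windows runs fall back to gloo.
    import platform
    import torch.distributed as dist

    def init_distributed():
        backend = "gloo" if platform.system() == "Windows" else "nccl"
        # assumes the usual env:// variables (RANK, WORLD_SIZE, MASTER_ADDR, ...)
        dist.init_process_group(backend=backend, init_method="env://")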
Kohya S
c2440f9e53
fix cond image normalization, add independent LR for control
2024-10-03 21:32:21 +09:00
Kohya S
793999d116
sample generation in SDXL ControlNet training
2024-09-30 23:39:32 +09:00
Kohya S
56a63f01ae
Merge branch 'sd3' into multi-gpu-caching
2024-09-29 10:12:18 +09:00
青龍聖者@bdsqlsz
e0c3630203
Support SDXL ControlNet (#1648)
...
* Create sdxl_train_controlnet.py
* add fuse_background_pass
* Update sdxl_train_controlnet.py
* add fuse and fix error
* update
* Update sdxl_train_controlnet.py
* Update sdxl_train_controlnet.py
* Update sdxl_train_controlnet.py
* update
* Update sdxl_train_controlnet.py
2024-09-29 10:11:15 +09:00
Kohya S
d050638571
Merge branch 'dev' into sd3
2024-09-29 10:00:01 +09:00
Kohya S
1567549220
update help text #1632
2024-09-29 09:51:36 +09:00
Kohya S
fe2aa32484
adjust min/max bucket reso divisible by reso steps #1632
2024-09-29 09:49:25 +09:00
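Keeping the min/max bucket resolutions divisible by the bucket resolution step is simple rounding; a minimal sketch of that arithmetic (names are illustrative, not the scripts' actual options):

    # Round bucket resolution bounds down to multiples of the bucket step.
    def round_to_steps(reso, steps=64):
        return max(steps, (reso // steps) * steps)

    min_bucket_reso = round_to_steps(320, steps=64)   # -> 320
    max_bucket_reso = round_to_steps(1100, steps=64)  # -> 1088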
kohya-ss
24b1fdb664
remove debug print
2024-09-26 22:22:06 +09:00
kohya-ss
9249d00311
experimental support for multi-GPU latents caching
2024-09-26 22:19:56 +09:00
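Multi-GPU latents caching generally means sharding the image list across processes so each rank encodes and writes only its slice; the sketch below shows that idea only, with a hypothetical encode_and_save helper:

    # Sketch: shard the caching workload across processes by rank.
    # encode_and_save() is a hypothetical helper that writes one latent file.
    def cache_latents_sharded(image_paths, rank, world_size, encode_and_save):
        # a strided split keeps the per-rank workload roughly balanced
        for path in image_paths[rank::world_size]:
            encode_and_save(path)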
Kohya S
3ebb65f945
Merge branch 'dev' into sd3
2024-09-26 21:41:25 +09:00
Kohya S
a94bc84dec
fix bitsandbytes optimizers to work with full path #1640
2024-09-26 21:37:31 +09:00
Kohya S.
4296e286b8
Merge pull request #1640 from sdbds/ademamix8bit
...
New optimizers: AdEMAMix8bit and PagedAdEMAMix8bit
2024-09-26 21:20:19 +09:00
Kohya S
392e8dedd8
fix flip_aug, alpha_mask, random_crop issue in caching with the caching strategy
2024-09-26 21:14:11 +09:00
Kohya S
2cd6aa281c
Merge branch 'dev' into sd3
2024-09-26 20:52:08 +09:00
Kohya S
bf91bea2e4
fix flip_aug, alpha_mask, random_crop issue in caching
2024-09-26 20:51:40 +09:00
sdbds
1beddd84e5
delete unused code (cleanup)
2024-09-25 22:58:26 +08:00
Kohya S
65fb69f808
Merge branch 'dev' into sd3
2024-09-25 20:56:16 +09:00
sdbds
ab7b231870
init
2024-09-25 19:38:52 +08:00
recris
e1f23af1bc
make timestep sampling behave in the standard way when huber loss is used
2024-09-21 13:21:56 +01:00
Kohya S
b844c70d14
Merge branch 'dev' into sd3
2024-09-19 21:51:33 +09:00
Maru-mee
e7040669bc
Bug fix: alpha_mask load
2024-09-19 15:47:06 +09:00
Kohya S
1286e00bb0
fix to call train/eval in schedulefree #1605
2024-09-18 21:31:54 +09:00
Kohya S.
bbd160b4ca
sd3 schedule-free opt (#1605)
...
* New ScheduleFree support for Flux (#1600)
* init
* use no schedule
* fix typo
* update for eval()
* fix typo
* update
* Update train_util.py
* Update requirements.txt
* update sfwrapper WIP
* no need to check schedulefree optimizer
* remove debug print
* comment out schedulefree wrapper
* update readme
---------
Co-authored-by: 青龍聖者@bdsqlsz <865105819@qq.com>
2024-09-18 07:55:04 +09:00
Kohya S
96c677b459
fix linear/cosine lr scheduler to work, closes #1602 ref #1393
2024-09-16 10:42:09 +09:00
Plat
a823fd9fb8
Improve wandb logging (#1576)
...
* fix: wrong training steps were recorded to wandb, and no log was sent when logging_dir was not specified
* fix: checking of whether wandb is enabled
* feat: log images to wandb with their positive prompt as captions
* feat: logging sample images' caption for sd3 and flux
* fix: import wandb before use
2024-09-11 22:21:16 +09:00
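Logging sample images with their positive prompt as the caption uses the stock wandb.Image(..., caption=...) API; a minimal sketch, assuming the image and prompt are already in hand:

    # Minimal sketch: log one sample image with its prompt as the caption.
    import wandb

    def log_sample(image, prompt, step):
        # image: PIL.Image or numpy array; wandb.Image accepts either
        wandb.log({"sample": wandb.Image(image, caption=prompt)}, step=step)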
Kohya S
c7c666b182
fix typo
2024-09-11 22:12:31 +09:00
cocktailpeanut
8311e88225
typo fix
2024-09-11 09:02:29 -04:00
Kohya S
eaafa5c9da
Merge branch 'dev' into sd3
2024-09-11 21:46:21 +09:00
Kohya S
6dbfd47a59
Fix PIECEWISE_CONSTANT to work, update requirements.txt and README #1393
2024-09-11 21:44:36 +09:00
青龍聖者@bdsqlsz
fd68703f37
Add new lr scheduler (#1393)
...
* add new lr scheduler
* fix bugs and use num_cycles / 2
* Update requirements.txt
* add num_cycles for min lr
* keep PIECEWISE_CONSTANT
* allow using a float for the warmup or decay ratio
* Update train_util.py
2024-09-11 21:25:45 +09:00
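Accepting a float warmup or decay ratio means converting the ratio into a step count from the total number of training steps; the sketch below shows that conversion together with a plain cosine-with-warmup LambdaLR, independent of the scheduler actually added in #1393:

    # Sketch: interpret a float warmup value as a ratio of total steps, then
    # build a cosine-with-warmup schedule as a LambdaLR. Not the PR's code.
    import math
    from torch.optim.lr_scheduler import LambdaLR

    def make_cosine_with_warmup(optimizer, warmup, total_steps):
        if isinstance(warmup, float) and warmup < 1.0:
            warmup_steps = int(warmup * total_steps)
        else:
            warmup_steps = int(warmup)

        def lr_lambda(step):
            if step < warmup_steps:
                return step / max(1, warmup_steps)
            progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
            return 0.5 * (1.0 + math.cos(math.pi * progress))

        return LambdaLR(optimizer, lr_lambda)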
Kohya S
ce144476cf
Merge branch 'dev' into sd3
2024-09-07 10:59:22 +09:00
Kohya S
62ec3e6424
Merge branch 'main' into dev
2024-09-07 10:52:49 +09:00
Kohya S
0005867ba5
update README, format code
2024-09-07 10:45:18 +09:00
Kohya S.
16bb5699ac
Merge pull request #1426 from sdbds/resize
...
Replace cv2 resize with PIL resize
2024-09-07 10:22:52 +09:00
Kohya S.
319e4d9831
Merge pull request #1433 from millie-v/sample-image-without-cuda
...
Generate sample images without having CUDA (such as on Macs)
2024-09-07 10:19:55 +09:00
Kohya S
92e7600cc2
Move freeze_blocks to sd3_train because it's only for sd3
2024-09-01 18:57:07 +09:00
青龍聖者@bdsqlsz
ef510b3cb9
SD3 freeze x_block (#1417)
...
* Update sd3_train.py
* add freeze block lr
* Update train_util.py
* update
2024-09-01 18:41:01 +09:00
Nando Metzger
2a3aefb4e4
Update train_util.py, bug fix
2024-08-30 08:15:05 +02:00
kohya-ss
98c91a7625
Fix bug in FLUX multi-GPU training
2024-08-22 12:37:41 +09:00
Kohya S
6ab48b09d8
feat: Support multi-resolution training with caching latents to disk
2024-08-20 21:39:43 +09:00
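Caching latents to disk at multiple resolutions needs a per-resolution cache key so files for different bucket sizes of the same image do not collide; the naming scheme below is a hypothetical illustration, not the repository's actual format:

    # Sketch: embed the target resolution in the cache filename (hypothetical).
    import os

    def latent_cache_path(image_path, width, height, suffix="_latents.npz"):
        base, _ = os.path.splitext(image_path)
        return f"{base}_{width:04d}x{height:04d}{suffix}"

    # e.g. "imgs/cat.png" at 1024x768 -> "imgs/cat_1024x0768_latents.npz"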
Kohya S
400955d3ea
add fine tuning FLUX.1 (WIP)
2024-08-17 15:36:18 +09:00
Kohya S
e45d3f8634
add merge LoRA script
2024-08-16 22:19:21 +09:00
kohya-ss
f5ce754bc2
Merge branch 'dev' into sd3
2024-08-13 21:00:44 +09:00