Kohya S
1567549220
update help text #1632
2024-09-29 09:51:36 +09:00
Kohya S
fe2aa32484
adjust min/max bucket reso to be divisible by reso steps #1632
2024-09-29 09:49:25 +09:00
kohya-ss
24b1fdb664
remove debug print
2024-09-26 22:22:06 +09:00
kohya-ss
9249d00311
experimental support for multi-GPU latents caching
2024-09-26 22:19:56 +09:00
Kohya S
3ebb65f945
Merge branch 'dev' into sd3
2024-09-26 21:41:25 +09:00
Kohya S
a94bc84dec
fix bitsandbytes optimizers to work with full path #1640
2024-09-26 21:37:31 +09:00
Kohya S.
4296e286b8
Merge pull request #1640 from sdbds/ademamix8bit
New optimizers: AdEMAMix8bit and PagedAdEMAMix8bit
2024-09-26 21:20:19 +09:00
Kohya S
392e8dedd8
fix flip_aug, alpha_mask, random_crop issue in caching strategy
2024-09-26 21:14:11 +09:00
Kohya S
2cd6aa281c
Merge branch 'dev' into sd3
2024-09-26 20:52:08 +09:00
Kohya S
bf91bea2e4
fix flip_aug, alpha_mask, random_crop issue in caching
2024-09-26 20:51:40 +09:00
sdbds
1beddd84e5
delete code for cleaning
2024-09-25 22:58:26 +08:00
Kohya S
65fb69f808
Merge branch 'dev' into sd3
2024-09-25 20:56:16 +09:00
sdbds
ab7b231870
init
2024-09-25 19:38:52 +08:00
recris
e1f23af1bc
make timestep sampling behave in the standard way when huber loss is used
2024-09-21 13:21:56 +01:00
Kohya S
b844c70d14
Merge branch 'dev' into sd3
2024-09-19 21:51:33 +09:00
Maru-mee
e7040669bc
Bug fix: alpha_mask load
2024-09-19 15:47:06 +09:00
Kohya S
1286e00bb0
fix calls to train/eval in schedulefree #1605
2024-09-18 21:31:54 +09:00
Kohya S.
bbd160b4ca
sd3 schedule free opt (#1605)
* New ScheduleFree support for Flux (#1600)
* init
* use no schedule
* fix typo
* update for eval()
* fix typo
* update
* Update train_util.py
* Update requirements.txt
* update sfwrapper WIP
* no need to check schedulefree optimizer
* remove debug print
* comment out schedulefree wrapper
* update readme
---------
Co-authored-by: 青龍聖者@bdsqlsz <865105819@qq.com>
2024-09-18 07:55:04 +09:00
Kohya S
96c677b459
fix linear/cosine lr scheduler to work, closes #1602 ref #1393
2024-09-16 10:42:09 +09:00
Plat
a823fd9fb8
Improve wandb logging (#1576)
* fix: wrong training steps were recorded to wandb, and no log was sent when logging_dir was not specified
* fix: checking of whether wandb is enabled
* feat: log images to wandb with their positive prompt as captions
* feat: logging sample images' caption for sd3 and flux
* fix: import wandb before use
2024-09-11 22:21:16 +09:00
Kohya S
c7c666b182
fix typo
2024-09-11 22:12:31 +09:00
cocktailpeanut
8311e88225
typo fix
2024-09-11 09:02:29 -04:00
Kohya S
eaafa5c9da
Merge branch 'dev' into sd3
2024-09-11 21:46:21 +09:00
Kohya S
6dbfd47a59
Fix PIECEWISE_CONSTANT to work, update requirements.txt and README #1393
2024-09-11 21:44:36 +09:00
青龍聖者@bdsqlsz
fd68703f37
Add new lr scheduler (#1393)
* add new lr scheduler
* fix bugs and use num_cycles / 2
* Update requirements.txt
* add num_cycles for min lr
* keep PIECEWISE_CONSTANT
* allow using float with warmup or decay ratio
* Update train_util.py
2024-09-11 21:25:45 +09:00
Kohya S
ce144476cf
Merge branch 'dev' into sd3
2024-09-07 10:59:22 +09:00
Kohya S
62ec3e6424
Merge branch 'main' into dev
2024-09-07 10:52:49 +09:00
Kohya S
0005867ba5
update README, format code
2024-09-07 10:45:18 +09:00
Kohya S.
16bb5699ac
Merge pull request #1426 from sdbds/resize
Replacing cv2 resize with PIL resize
2024-09-07 10:22:52 +09:00
Kohya S.
319e4d9831
Merge pull request #1433 from millie-v/sample-image-without-cuda
Generate sample images without having CUDA (such as on Macs)
2024-09-07 10:19:55 +09:00
Kohya S
92e7600cc2
Move freeze_blocks to sd3_train because it's only for sd3
2024-09-01 18:57:07 +09:00
青龍聖者@bdsqlsz
ef510b3cb9
SD3 freeze x_block (#1417)
* Update sd3_train.py
* add freeze block lr
* Update train_util.py
* update
2024-09-01 18:41:01 +09:00
Nando Metzger
2a3aefb4e4
Update train_util.py, bug fix
2024-08-30 08:15:05 +02:00
Kohya S
81411a398e
speed up getting image sizes
2024-08-22 22:02:29 +09:00
kohya-ss
98c91a7625
Fix bug in FLUX multi GPU training
2024-08-22 12:37:41 +09:00
Kohya S
6ab48b09d8
feat: Support multi-resolution training with caching latents to disk
2024-08-20 21:39:43 +09:00
Kohya S
400955d3ea
add fine tuning FLUX.1 (WIP)
2024-08-17 15:36:18 +09:00
Kohya S
e45d3f8634
add merge LoRA script
2024-08-16 22:19:21 +09:00
kohya-ss
f5ce754bc2
Merge branch 'dev' into sd3
2024-08-13 21:00:44 +09:00
Kohya S
8a0f12dde8
update FLUX LoRA training
2024-08-10 23:42:05 +09:00
Kohya S
da4d0fe016
support attn mask for l+g/t5
2024-08-05 20:51:34 +09:00
Kohya S
41dee60383
Refactor caching mechanism for latents and text encoder outputs, etc.
2024-07-27 13:50:05 +09:00
sdbds
9ca7a5b6cc
replace cv2 LANCZOS4 resize with PIL resize
2024-07-20 21:59:11 +08:00
sdbds
1f16b80e88
Revert "judge image size for using diff interpolation"
This reverts commit 87526942a6.
2024-07-20 21:35:24 +08:00
Millie
2e67978ee2
Generate sample images without having CUDA (such as on Macs)
2024-07-18 11:52:58 -07:00
sdbds
87526942a6
judge image size for using diff interpolation
2024-07-12 22:56:38 +08:00
Kohya S
082f13658b
reduce peak GPU memory usage before training
2024-07-12 21:28:01 +09:00
Kohya S
3d402927ef
WIP: update new latents caching
2024-07-09 23:15:38 +09:00
Kohya S
c9de7c4e9a
WIP: new latents caching
2024-07-08 19:48:28 +09:00
Kohya S
8f2ba27869
support text_encoder_batch_size for caching
2024-06-26 20:36:22 +09:00