420a180d93 | recris | 2024-11-27 18:37:09 +00:00 | Implement pseudo Huber loss for Flux and SD3
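The pseudo Huber loss referenced in the commit above has a standard closed form: quadratic near zero (like L2) and asymptotically linear for large residuals (like L1). A minimal scalar sketch for illustration; the function name and signature are hypothetical, not the repository's actual implementation:

```python
import math

def pseudo_huber_loss(pred, target, delta=1.0):
    """Pseudo-Huber loss: delta^2 * (sqrt(1 + (diff/delta)^2) - 1).

    Behaves like 0.5 * diff^2 for |diff| << delta and like
    delta * |diff| for |diff| >> delta; delta sets the transition scale.
    """
    diff = pred - target
    return delta * delta * (math.sqrt(1.0 + (diff / delta) ** 2) - 1.0)
```

In training code this would typically be applied elementwise to model prediction and target tensors and then reduced (mean), but the scalar form above shows the formula itself.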
2bb0f547d7 | Kohya S | 2024-11-14 19:33:12 +09:00 | update grad hook creation to fix TE lr in sd3 fine tuning
2cb7a6db02 | Kohya S | 2024-11-12 21:39:13 +09:00 | feat: add block swap for FLUX.1/SD3 LoRA training
3fe94b058a | Kohya S | 2024-11-12 08:09:07 +09:00 | update comment
26bd4540a6 | sdbds | 2024-11-11 09:25:28 +08:00 | init
b3248a8eef | feffy380 | 2024-11-07 14:31:05 +01:00 | fix: sort order when getting image size from cache file
9aa6f52ac3 | Kohya S | 2024-11-01 21:43:21 +09:00 | Fix memory leak in latent caching; bmp failed to cache
1434d8506f | Kohya S | 2024-10-31 19:58:22 +09:00 | Support SD3.5M multi-resolution training
d4f7849592 | kohya-ss | 2024-10-27 19:35:56 +09:00 | prevent unintended cast for disk-cached TE outputs
f52fb66e8f | Kohya S | 2024-10-25 19:03:58 +09:00 | Merge branch 'sd3' into sd3_5_support
5fba6f514a | Kohya S | 2024-10-25 19:03:27 +09:00 | Merge branch 'dev' into sd3
623017f716 | Kohya S | 2024-10-24 19:49:28 +09:00 | refactor SD3 CLIP to transformers etc.
be14c06267 | catboxanon | 2024-10-22 12:13:51 -04:00 | Remove v-pred warnings
    Different model architectures, such as SDXL, can take advantage of v-pred. It doesn't make sense to include these warnings anymore.
7fe8e162cb | Kohya S | 2024-10-20 08:45:27 +09:00 | fix ControlNetSubset to work with custom_attributes
3cc5b8db99 | Kohya S | 2024-10-18 20:57:13 +09:00 | Diff Output Preservation loss for SDXL
886ffb4d65 | kohya-ss | 2024-10-13 19:14:06 +09:00 | Merge branch 'sd3' into multi-gpu-caching
bfc3a65acd | kohya-ss | 2024-10-13 19:08:16 +09:00 | fix caching of latents/text encoder outputs to work
2244cf5b83 | kohya-ss | 2024-10-13 18:22:19 +09:00 | load images in parallel when caching latents
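Loading images in parallel while caching latents, as in the commit above, typically means overlapping file decoding (I/O-bound) with the encode step. A minimal stdlib sketch; the `load_image` stub and function names are hypothetical placeholders, not kohya-ss code:

```python
from concurrent.futures import ThreadPoolExecutor

def load_image(path):
    # Placeholder: a real loader would open and decode the file
    # (e.g. with Pillow) and return a pixel array.
    return f"pixels:{path}"

def load_images_parallel(paths, max_workers=8):
    """Decode a batch of images concurrently.

    Threads suit this workload because decoding releases the GIL in
    common imaging libraries; executor.map preserves input order, so
    results line up with the path list for downstream latent encoding.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as ex:
        return list(ex.map(load_image, paths))
```

The caching loop can then consume each decoded batch and encode latents while the next batch loads.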
c65cf3812d | Kohya S | 2024-10-13 17:31:11 +09:00 | Merge branch 'sd3' into fast_image_sizes
c80c304779 | kohya-ss | 2024-10-12 20:18:41 +09:00 | Refactor caching in train scripts
ff4083b910 | kohya-ss | 2024-10-12 16:39:36 +09:00 | Merge branch 'sd3' into multi-gpu-caching
d005652d03 | Kohya S. | 2024-10-12 14:33:02 +09:00 | Merge pull request #1686 from Akegarasu/sd3
    fix: fix some distributed training errors on Windows
9f4dac5731 | Akegarasu | 2024-10-10 14:08:55 +08:00 | torch 2.4
3de42b6edb | Akegarasu | 2024-10-10 14:03:59 +08:00 | fix: distributed training on Windows
c2440f9e53 | Kohya S | 2024-10-03 21:32:21 +09:00 | fix cond image normalization, add independent LR for control
33e942e36e | Kohya S | 2024-10-01 08:38:09 +09:00 | Merge branch 'sd3' into fast_image_sizes
793999d116 | Kohya S | 2024-09-30 23:39:32 +09:00 | sample generation in SDXL ControlNet training
012e7e63a5 | Kohya S | 2024-09-29 23:18:16 +09:00 | fix linear/cosine scheduler to work, closes #1651, ref #1393
56a63f01ae | Kohya S | 2024-09-29 10:12:18 +09:00 | Merge branch 'sd3' into multi-gpu-caching
e0c3630203 | 青龍聖者@bdsqlsz | 2024-09-29 10:11:15 +09:00 | Support SDXL ControlNet (#1648)
    * Create sdxl_train_controlnet.py
    * add fuse_background_pass
    * add fuse and fix error
    * various updates to sdxl_train_controlnet.py
d050638571 | Kohya S | 2024-09-29 10:00:01 +09:00 | Merge branch 'dev' into sd3
1567549220 | Kohya S | 2024-09-29 09:51:36 +09:00 | update help text #1632
fe2aa32484 | Kohya S | 2024-09-29 09:49:25 +09:00 | adjust min/max bucket reso to be divisible by reso steps #1632
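Making the min/max bucket resolutions divisible by the resolution step, as in the commit above, amounts to rounding the minimum up and the maximum down to the nearest multiple. A sketch under assumed semantics; the function name and exact rounding policy are illustrative, not taken from the repository:

```python
def adjust_bucket_reso(min_reso, max_reso, reso_steps=64):
    """Snap bucket resolution bounds onto multiples of reso_steps.

    Rounding min up and max down keeps every generated bucket
    resolution inside the original [min_reso, max_reso] range.
    """
    adj_min = ((min_reso + reso_steps - 1) // reso_steps) * reso_steps  # ceil
    adj_max = (max_reso // reso_steps) * reso_steps                      # floor
    return adj_min, adj_max
```

Values already divisible by the step pass through unchanged.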
24b1fdb664 | kohya-ss | 2024-09-26 22:22:06 +09:00 | remove debug print
9249d00311 | kohya-ss | 2024-09-26 22:19:56 +09:00 | experimental support for multi-GPU latents caching
3ebb65f945 | Kohya S | 2024-09-26 21:41:25 +09:00 | Merge branch 'dev' into sd3
a94bc84dec | Kohya S | 2024-09-26 21:37:31 +09:00 | fix bitsandbytes optimizers to work with full path #1640
4296e286b8 | Kohya S. | 2024-09-26 21:20:19 +09:00 | Merge pull request #1640 from sdbds/ademamix8bit
    New optimizers: AdEMAMix8bit and PagedAdEMAMix8bit
392e8dedd8 | Kohya S | 2024-09-26 21:14:11 +09:00 | fix flip_aug, alpha_mask, random_crop issue in caching strategy
2cd6aa281c | Kohya S | 2024-09-26 20:52:08 +09:00 | Merge branch 'dev' into sd3
bf91bea2e4 | Kohya S | 2024-09-26 20:51:40 +09:00 | fix flip_aug, alpha_mask, random_crop issue in caching
1beddd84e5 | sdbds | 2024-09-25 22:58:26 +08:00 | delete code for cleanup
65fb69f808 | Kohya S | 2024-09-25 20:56:16 +09:00 | Merge branch 'dev' into sd3
ab7b231870 | sdbds | 2024-09-25 19:38:52 +08:00 | init
e1f23af1bc | recris | 2024-09-21 13:21:56 +01:00 | make timestep sampling behave in the standard way when Huber loss is used
b844c70d14 | Kohya S | 2024-09-19 21:51:33 +09:00 | Merge branch 'dev' into sd3
e7040669bc | Maru-mee | 2024-09-19 15:47:06 +09:00 | Bug fix: alpha_mask load
1286e00bb0 | Kohya S | 2024-09-18 21:31:54 +09:00 | fix to call train/eval in schedulefree #1605
bbd160b4ca | Kohya S. | 2024-09-18 07:55:04 +09:00 | sd3 schedule free opt (#1605)
    * New ScheduleFree support for Flux (#1600)
    * use no schedule
    * fix typos
    * update for eval()
    * update sfwrapper (WIP)
    * no need to check schedulefree optimizer
    * remove debug print
    * comment out schedulefree wrapper
    * update readme
    Co-authored-by: 青龍聖者@bdsqlsz <865105819@qq.com>
96c677b459 | Kohya S | 2024-09-16 10:42:09 +09:00 | fix linear/cosine lr scheduler to work, closes #1602, ref #1393