Kohya S
3cc5b8db99
Diff Output Preservation loss for SDXL
2024-10-18 20:57:13 +09:00
kohya-ss
886ffb4d65
Merge branch 'sd3' into multi-gpu-caching
2024-10-13 19:14:06 +09:00
kohya-ss
bfc3a65acd
fix caching of latents/text encoder outputs to work
2024-10-13 19:08:16 +09:00
kohya-ss
2244cf5b83
load images in parallel when caching latents
2024-10-13 18:22:19 +09:00
Kohya S
c65cf3812d
Merge branch 'sd3' into fast_image_sizes
2024-10-13 17:31:11 +09:00
kohya-ss
74228c9953
update cache_latents/text_encoder_outputs
2024-10-13 16:27:22 +09:00
kohya-ss
5bb9f7fb1a
Merge branch 'sd3' into multi-gpu-caching
2024-10-13 11:52:42 +09:00
Kohya S
e277b5789e
Update FLUX.1 support for compact models
2024-10-12 21:49:07 +09:00
kohya-ss
c80c304779
Refactor caching in train scripts
2024-10-12 20:18:41 +09:00
kohya-ss
ff4083b910
Merge branch 'sd3' into multi-gpu-caching
2024-10-12 16:39:36 +09:00
Kohya S.
d005652d03
Merge pull request #1686 from Akegarasu/sd3
...
fix: fix some distributed training errors on Windows
2024-10-12 14:33:02 +09:00
Kohya S
f2bc820133
support weighted captions for SD/SDXL
2024-10-11 08:48:55 +09:00
Akegarasu
9f4dac5731
torch 2.4
2024-10-10 14:08:55 +08:00
Akegarasu
3de42b6edb
fix: distributed training on Windows
2024-10-10 14:03:59 +08:00
Kohya S
886f75345c
support weighted captions for sdxl LoRA and fine tuning
2024-10-10 08:27:15 +09:00
Kohya S
126159f7c4
Merge branch 'sd3' into sdxl-ctrl-net
2024-10-07 20:39:53 +09:00
Kohya S
83e3048cb0
load Diffusers format, check schnell/dev
2024-10-06 21:32:21 +09:00
Kohya S
c2440f9e53
fix cond image normalization, add independent LR for control
2024-10-03 21:32:21 +09:00
Kohya S
33e942e36e
Merge branch 'sd3' into fast_image_sizes
2024-10-01 08:38:09 +09:00
Kohya S
793999d116
sample generation in SDXL ControlNet training
2024-09-30 23:39:32 +09:00
Kohya S
012e7e63a5
fix linear/cosine scheduler to work, closes #1651 ref #1393
2024-09-29 23:18:16 +09:00
Kohya S
8919b31145
use original ControlNet instead of Diffusers
2024-09-29 23:07:34 +09:00
Kohya S
56a63f01ae
Merge branch 'sd3' into multi-gpu-caching
2024-09-29 10:12:18 +09:00
青龍聖者@bdsqlsz
e0c3630203
Support SDXL ControlNet (#1648)
...
* Create sdxl_train_controlnet.py
* add fuse_background_pass
* Update sdxl_train_controlnet.py
* add fuse and fix error
* update
* Update sdxl_train_controlnet.py
* Update sdxl_train_controlnet.py
* Update sdxl_train_controlnet.py
* update
* Update sdxl_train_controlnet.py
2024-09-29 10:11:15 +09:00
Kohya S
d050638571
Merge branch 'dev' into sd3
2024-09-29 10:00:01 +09:00
Kohya S
1567549220
update help text #1632
2024-09-29 09:51:36 +09:00
Kohya S
fe2aa32484
adjust min/max bucket reso to be divisible by reso steps #1632
2024-09-29 09:49:25 +09:00
Kohya S
1a0f5b0c38
re-fix sample generation not working in FLUX.1 split mode #1647
2024-09-29 00:35:29 +09:00
Kohya S
a9aa52658a
fix sample generation not working in FLUX.1 fine tuning #1647
2024-09-28 17:12:56 +09:00
kohya-ss
24b1fdb664
remove debug print
2024-09-26 22:22:06 +09:00
kohya-ss
9249d00311
experimental support for multi-GPU latents caching
2024-09-26 22:19:56 +09:00
Kohya S
3ebb65f945
Merge branch 'dev' into sd3
2024-09-26 21:41:25 +09:00
Kohya S
a94bc84dec
fix bitsandbytes optimizers specified with full path to work #1640
2024-09-26 21:37:31 +09:00
Kohya S.
4296e286b8
Merge pull request #1640 from sdbds/ademamix8bit
...
New optimizers: AdEMAMix8bit and PagedAdEMAMix8bit
2024-09-26 21:20:19 +09:00
Kohya S
392e8dedd8
fix flip_aug, alpha_mask, random_crop issue in caching strategy
2024-09-26 21:14:11 +09:00
Kohya S
2cd6aa281c
Merge branch 'dev' into sd3
2024-09-26 20:52:08 +09:00
Kohya S
bf91bea2e4
fix flip_aug, alpha_mask, random_crop issue in caching
2024-09-26 20:51:40 +09:00
Kohya S
56a7bc171d
new block swap for FLUX.1 fine tuning
2024-09-26 08:26:31 +09:00
sdbds
1beddd84e5
delete code for cleanup
2024-09-25 22:58:26 +08:00
Kohya S
65fb69f808
Merge branch 'dev' into sd3
2024-09-25 20:56:16 +09:00
Kohya S.
c1d16a76d6
Merge pull request #1628 from recris/huber-timesteps
...
Make timesteps work in the standard way when Huber loss is used
2024-09-25 20:52:55 +09:00
sdbds
ab7b231870
init
2024-09-25 19:38:52 +08:00
Kohya S
29177d2f03
retain alpha in pil_resize backport #1619
2024-09-23 21:14:03 +09:00
recris
e1f23af1bc
make timestep sampling behave in the standard way when Huber loss is used
2024-09-21 13:21:56 +01:00
Ed McManus
de4bb657b0
Update utils.py
...
Cleanup
2024-09-19 14:38:32 -07:00
Ed McManus
3957372ded
Retain alpha in pil_resize
...
Currently the alpha channel is dropped by `pil_resize()` when `--alpha_mask` is supplied and the image width does not exceed the bucket.
This codepath is entered on the last line, here:
```
from typing import Tuple

import cv2
import numpy as np

def trim_and_resize_if_required(
    random_crop: bool, image: np.ndarray, reso, resized_size: Tuple[int, int]
) -> Tuple[np.ndarray, Tuple[int, int], Tuple[int, int, int, int]]:
    image_height, image_width = image.shape[0:2]
    original_size = (image_width, image_height)  # size before resize

    if image_width != resized_size[0] or image_height != resized_size[1]:
        # resize the image
        if image_width > resized_size[0] and image_height > resized_size[1]:
            # downscaling: use cv2 so that INTER_AREA is available
            image = cv2.resize(image, resized_size, interpolation=cv2.INTER_AREA)
        else:
            image = pil_resize(image, resized_size)  # alpha channel is dropped here
    # (remainder of the function omitted in the original message)
```
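For illustration, a minimal sketch of an alpha-retaining `pil_resize` (assuming cv2-style HWC BGR/BGRA `np.ndarray` input as in the snippet above; this mirrors the idea of the fix, not the repository's actual code):
```
import numpy as np
from PIL import Image

def pil_resize(image: np.ndarray, size):
    # sketch: resize with PIL, keeping the alpha channel when present
    # assumes a 3-channel (BGR) or 4-channel (BGRA) HWC uint8 array from cv2
    has_alpha = image.ndim == 3 and image.shape[2] == 4
    if has_alpha:
        pil_image = Image.fromarray(image[:, :, [2, 1, 0, 3]], mode="RGBA")  # BGRA -> RGBA
    else:
        pil_image = Image.fromarray(image[:, :, [2, 1, 0]], mode="RGB")  # BGR -> RGB
    resized = np.array(pil_image.resize(size, Image.LANCZOS))
    # reorder channels back to BGR(A) so downstream cv2 code keeps working
    return resized[:, :, [2, 1, 0, 3]] if has_alpha else resized[:, :, [2, 1, 0]]
```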
2024-09-19 14:30:03 -07:00
Kohya S
b844c70d14
Merge branch 'dev' into sd3
2024-09-19 21:51:33 +09:00
Maru-mee
e7040669bc
Bug fix: alpha_mask load
2024-09-19 15:47:06 +09:00
Kohya S
1286e00bb0
fix to call train/eval in schedulefree #1605
2024-09-18 21:31:54 +09:00
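Background for the train/eval fix above: schedule-free optimizers maintain different effective weights for training and evaluation, so the training loop must switch modes explicitly. A minimal sketch of the pattern, using the upstream `schedulefree` package rather than this repository's wrapper:
```
import torch
import schedulefree  # pip install schedulefree

model = torch.nn.Linear(16, 4)
optimizer = schedulefree.AdamWScheduleFree(model.parameters(), lr=1e-3)

optimizer.train()  # switch optimizer to training mode before stepping
for step in range(100):
    loss = model(torch.randn(8, 16)).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

optimizer.eval()  # switch back before evaluation, sampling, or saving
```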
Kohya S.
bbd160b4ca
sd3 schedule free opt (#1605)
...
* New ScheduleFree support for Flux (#1600)
* init
* use no schedule
* fix typo
* update for eval()
* fix typo
* update
* Update train_util.py
* Update requirements.txt
* update sfwrapper WIP
* no need to check schedulefree optimizer
* remove debug print
* comment out schedulefree wrapper
* update readme
---------
Co-authored-by: 青龍聖者@bdsqlsz <865105819@qq.com>
2024-09-18 07:55:04 +09:00