Kohya S | 1434d8506f | Support SD3.5M multi resolutional training | 2024-10-31 19:58:22 +09:00
kohya-ss | 1065dd1b56 | Fix to work dropout_rate for TEs | 2024-10-27 19:36:36 +09:00
Kohya S | 33e942e36e | Merge branch 'sd3' into fast_image_sizes | 2024-10-01 08:38:09 +09:00
Kohya S | b65ae9b439 | T5XXL LoRA training, fp8 T5XXL support | 2024-09-04 21:33:17 +09:00
Kohya S | 0087a46e14 | FLUX.1 LoRA supports CLIP-L | 2024-08-27 19:59:40 +09:00
Kohya S | 81411a398e | speed up getting image sizes | 2024-08-22 22:02:29 +09:00
Kohya S | 2d8fa3387a | Fix to remove zero pad for t5xxl output | 2024-08-22 19:56:27 +09:00
kohya-ss | 98c91a7625 | Fix bug in FLUX multi GPU training | 2024-08-22 12:37:41 +09:00
Kohya S | 7e459c00b2 | Update T5 attention mask handling in FLUX | 2024-08-21 08:02:33 +09:00
Kohya S | 6ab48b09d8 | feat: Support multi-resolution training with caching latents to disk | 2024-08-20 21:39:43 +09:00
Kohya S | d25ae361d0 | fix apply_t5_attn_mask to work | 2024-08-11 19:07:07 +09:00
Kohya S | 8a0f12dde8 | update FLUX LoRA training | 2024-08-10 23:42:05 +09:00
Kohya S | 36b2e6fc28 | add FLUX.1 LoRA training | 2024-08-09 22:56:48 +09:00