| Author | Commit | Message | Date |
| --- | --- | --- | --- |
| Richard Altman | f3aaaf0ce1 | Update flux_models.py (fix: Fix variable reference error in sd-scripts/library/flux_models.py on sd3 branch) | 2025-08-24 19:20:57 -07:00 |
| Kohya S | 96feb61c0a | feat: implement modulation vector extraction for Chroma and update related methods | 2025-07-30 21:34:49 +09:00 |
| Kohya S | 9eda938876 | Merge branch 'sd3' into feature-chroma-support | 2025-07-21 13:32:22 +09:00 |
| Kohya S | 0b763ef1f1 | feat: fix timestep for input_vec for Chroma | 2025-07-20 20:53:06 +09:00 |
| Kohya S | b4e862626a | feat: add LoRA training support for Chroma | 2025-07-20 19:00:09 +09:00 |
| sdbds | 00e12eed65 | update for lost change | 2025-04-06 16:09:29 +08:00 |
| rockerBOO | 9647f1e324 | Fix validation block swap. Add custom offloading tests | 2025-02-27 20:36:36 -05:00 |
| minux302 | 31ca899b6b | fix depth value | 2024-11-18 13:03:28 +00:00 |
| minux302 | 4dd4cd6ec8 | work cn load and validation | 2024-11-18 12:47:01 +00:00 |
| minux302 | b2660bbe74 | train run | 2024-11-17 10:24:57 +00:00 |
| minux302 | e358b118af | fix dataloader | 2024-11-16 14:49:29 +09:00 |
| minux302 | ccfaa001e7 | add flux controlnet base module | 2024-11-15 20:21:28 +09:00 |
| Kohya S | 2cb7a6db02 | feat: add block swap for FLUX.1/SD3 LoRA training | 2024-11-12 21:39:13 +09:00 |
| Kohya S | cde90b8903 | feat: implement block swapping for FLUX.1 LoRA (WIP) | 2024-11-12 08:49:05 +09:00 |
| Kohya S | 02bd76e6c7 | Refactor block swapping to utilize custom offloading utilities | 2024-11-11 21:15:36 +09:00 |
| Kohya S | 186aa5b97d | fix illegal block is swapped #1764 | 2024-11-07 22:16:05 +09:00 |
| Kohya S | 81c0c965a2 | faster block swap | 2024-11-05 21:22:42 +09:00 |
| Kohya S | a9aa52658a | fix sample generation is not working in FLUX.1 fine tuning #1647 | 2024-09-28 17:12:56 +09:00 |
| Kohya S | 56a7bc171d | new block swap for FLUX.1 fine tuning | 2024-09-26 08:26:31 +09:00 |
| Kohya S | 2e89cd2cc6 | Fix issue with attention mask not being applied in single blocks | 2024-08-24 12:39:54 +09:00 |
| kohya-ss | 98c91a7625 | Fix bug in FLUX multi GPU training | 2024-08-22 12:37:41 +09:00 |
| Kohya S | e1cd19c0c0 | add stochastic rounding, fix single block | 2024-08-21 21:04:10 +09:00 |
| Kohya S | 2b07a92c8d | Fix error in applying mask in Attention and add LoRA converter script | 2024-08-21 12:30:23 +09:00 |
| Kohya S | 7e459c00b2 | Update T5 attention mask handling in FLUX | 2024-08-21 08:02:33 +09:00 |
| Kohya S | 6e72a799c8 | reduce peak VRAM usage by excluding some blocks from cuda | 2024-08-19 21:55:28 +09:00 |
| Kohya S | ef535ec6bb | add memory efficient training for FLUX.1 | 2024-08-18 16:54:18 +09:00 |
| Kohya S | 3921a4efda | add t5xxl max token length, support schnell | 2024-08-16 17:06:05 +09:00 |
| Kohya S | 56d7651f08 | add experimental split mode for FLUX | 2024-08-13 22:28:39 +09:00 |
| Kohya S | 808d2d1f48 | fix typos | 2024-08-09 23:02:51 +09:00 |
| Kohya S | 36b2e6fc28 | add FLUX.1 LoRA training | 2024-08-09 22:56:48 +09:00 |