Kohya S | 81c0c965a2 | faster block swap | 2024-11-05 21:22:42 +09:00
Kohya S | a9aa52658a | fix sample generation is not working in FLUX1 fine tuning #1647 | 2024-09-28 17:12:56 +09:00
Kohya S | 56a7bc171d | new block swap for FLUX.1 fine tuning | 2024-09-26 08:26:31 +09:00
Kohya S | 2e89cd2cc6 | Fix issue with attention mask not being applied in single blocks | 2024-08-24 12:39:54 +09:00
kohya-ss | 98c91a7625 | Fix bug in FLUX multi GPU training | 2024-08-22 12:37:41 +09:00
Kohya S | e1cd19c0c0 | add stochastic rounding, fix single block | 2024-08-21 21:04:10 +09:00
Kohya S | 2b07a92c8d | Fix error in applying mask in Attention and add LoRA converter script | 2024-08-21 12:30:23 +09:00
Kohya S | 7e459c00b2 | Update T5 attention mask handling in FLUX | 2024-08-21 08:02:33 +09:00
Kohya S | 6e72a799c8 | reduce peak VRAM usage by excluding some blocks to cuda | 2024-08-19 21:55:28 +09:00
Kohya S | ef535ec6bb | add memory efficient training for FLUX.1 | 2024-08-18 16:54:18 +09:00
Kohya S | 3921a4efda | add t5xxl max token length, support schnell | 2024-08-16 17:06:05 +09:00
Kohya S | 56d7651f08 | add experimental split mode for FLUX | 2024-08-13 22:28:39 +09:00
Kohya S | 808d2d1f48 | fix typos | 2024-08-09 23:02:51 +09:00
Kohya S | 36b2e6fc28 | add FLUX.1 LoRA training | 2024-08-09 22:56:48 +09:00