Kohya S
fd3a445769
fix: revert default emb guidance scale and CFG scale for FLUX.1 sampling
2025-04-27 22:50:27 +09:00
Kohya S
629073cd9d
Add guidance scale for prompt param and flux sampling
2025-04-16 21:50:36 +09:00
Kohya S
06df0377f9
Merge branch 'sd3' into flux-sample-cfg
2025-04-16 21:27:08 +09:00
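The guidance-scale commits above concern classifier-free guidance (CFG) during FLUX.1 sample generation. A minimal sketch of the standard CFG blend (the function name `apply_cfg` is hypothetical, not the repo's API; shown with scalars, but the same arithmetic applies elementwise to tensors):

```python
def apply_cfg(cond_pred: float, uncond_pred: float, cfg_scale: float) -> float:
    # Classifier-free guidance: extrapolate from the unconditional
    # prediction toward the conditional one. cfg_scale == 1.0 reduces
    # to the conditional prediction; larger values strengthen guidance.
    return uncond_pred + cfg_scale * (cond_pred - uncond_pred)
```

With `cfg_scale = 1.0` this returns `cond_pred` unchanged, which is why a CFG scale of 1 is commonly the "no extra guidance" default.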
rockerBOO
e8b3254858
Add flux_train_utils tests for get_noisy_model_input_and_timesteps
2025-03-20 15:01:15 -04:00
rockerBOO
16cef81aea
Refactor sigmas and timesteps
2025-03-20 14:32:56 -04:00
rockerBOO
f974c6b257
change order to match upstream
2025-03-19 14:27:43 -04:00
rockerBOO
5d5a7d2acf
Fix IP noise calculation
2025-03-19 13:50:04 -04:00
rockerBOO
1eddac26b0
Separate random draw into a variable, and make sure it is on device
2025-03-19 00:49:42 -04:00
rockerBOO
8e6817b0c2
Remove double noise
2025-03-19 00:45:13 -04:00
rockerBOO
d93ad90a71
Add perturbation on noisy_model_input if needed
2025-03-19 00:37:27 -04:00
rockerBOO
7197266703
Perturbed noise should be separate from input noise
2025-03-19 00:25:51 -04:00
rockerBOO
b81bcd0b01
Move IP noise gamma to noise creation to remove complexity and align noise for target loss
2025-03-18 21:36:55 -04:00
rockerBOO
6f4d365775
Use zeros_like because we are adding
2025-03-18 18:53:34 -04:00
rockerBOO
a4f3a9fc1a
Use ones_like
2025-03-18 18:44:21 -04:00
rockerBOO
b425466e7b
Fix IP noise gamma to use random values
2025-03-18 18:42:35 -04:00
rockerBOO
c8be141ae0
Apply IP gamma to noise fix
2025-03-18 15:42:18 -04:00
rockerBOO
0b25a05e3c
Add IP noise gamma for Flux
2025-03-18 15:40:40 -04:00
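The IP (input perturbation) noise gamma commits above converge on one idea: the perturbation must be a separate random draw, added only to the noise that forms the noisy model input, while the loss target keeps the unperturbed noise. A minimal sketch under those assumptions (the function name `apply_ip_noise` is hypothetical, not the repo's API):

```python
import torch

def apply_ip_noise(noise: torch.Tensor, ip_noise_gamma: float) -> torch.Tensor:
    """Sketch of IP noise: perturb the input-side noise with a separate
    random tensor scaled by ip_noise_gamma. The caller keeps the original
    `noise` as the loss target, so the perturbation never leaks into it."""
    # Separate random draw, created on the same device/dtype as `noise`
    # (per the "make sure on device" commit above).
    perturbation = torch.randn_like(noise)
    return noise + ip_noise_gamma * perturbation
```

With `ip_noise_gamma = 0.0` the input noise is returned unchanged, matching the behavior when the option is disabled.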
Kohya S
a9c5aa1f93
add CFG to FLUX.1 sample image
2025-01-05 22:28:51 +09:00
青龍聖者@bdsqlsz
abff4b0ec7
Unify controlnet parameter names and change script names (#1821)
* Update sd3_train.py
* add freeze block lr
* Update train_util.py
* update
* Revert "add freeze block lr"
This reverts commit 8b1653548f.
# Conflicts:
# library/train_util.py
# sd3_train.py
* use same control net model path
* use controlnet_model_name_or_path
2024-12-07 17:12:46 +09:00
kohya-ss
e369b9a252
docs: update README with FLUX.1 ControlNet training details and improve argument help text
2024-12-02 23:38:54 +09:00
minux302
0b5229a955
save ControlNet
2024-11-21 15:55:27 +00:00
minux302
4dd4cd6ec8
get ControlNet load and validation working
2024-11-18 12:47:01 +00:00
minux302
35778f0218
fix sample_images type
2024-11-17 11:09:05 +00:00
minux302
b2660bbe74
train run
2024-11-17 10:24:57 +00:00
minux302
42f6edf3a8
fix for adding controlnet
2024-11-15 23:48:51 +09:00
Kohya S
2cb7a6db02
feat: add block swap for FLUX.1/SD3 LoRA training
2024-11-12 21:39:13 +09:00
Kohya S
623017f716
refactor SD3 CLIP to transformers etc.
2024-10-24 19:49:28 +09:00
Kohya S
1a0f5b0c38
re-fix: sample generation not working in FLUX.1 split mode #1647
2024-09-29 00:35:29 +09:00
Kohya S
a9aa52658a
fix: sample generation not working in FLUX.1 fine tuning #1647
2024-09-28 17:12:56 +09:00
Plat
a823fd9fb8
Improve wandb logging ( #1576 )
* fix: wrong training steps were recorded to wandb, and no log was sent when logging_dir was not specified
* fix: checking of whether wandb is enabled
* feat: log images to wandb with their positive prompt as captions
* feat: logging sample images' caption for sd3 and flux
* fix: import wandb before use
2024-09-11 22:21:16 +09:00
Kohya S
b65ae9b439
T5XXL LoRA training, fp8 T5XXL support
2024-09-04 21:33:17 +09:00
Kohya S
4f6d915d15
update help and README
2024-09-01 19:12:29 +09:00
sdbds
25c9040f4f
Update flux_train_utils.py
2024-08-31 19:53:59 +08:00
Kohya S
0087a46e14
FLUX.1 LoRA supports CLIP-L
2024-08-27 19:59:40 +09:00
Kohya S
72287d39c7
feat: Add shift option to --timestep_sampling in FLUX.1 fine-tuning and LoRA training
2024-08-25 16:01:24 +09:00
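The shift option added above refers to the timestep-shift transform used in flow-matching samplers, which remaps sampled timesteps toward the noisier end of the schedule. A minimal sketch of the commonly used form (the function name `shift_timestep` and the default value are illustrative, not the repo's exact API):

```python
def shift_timestep(t: float, shift: float = 3.0) -> float:
    """Sketch of the "shift" timestep transform: remap t in [0, 1] so
    that more probability mass lands at noisier timesteps. shift == 1.0
    is the identity, and the endpoints 0 and 1 are always preserved."""
    return shift * t / (1.0 + (shift - 1.0) * t)
```

For example, with `shift = 3.0` the midpoint `t = 0.5` maps to `0.75`, biasing training toward higher noise levels.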
Kohya S
7e459c00b2
Update T5 attention mask handling in FLUX
2024-08-21 08:02:33 +09:00
Kohya S
486fe8f70a
feat: reduce memory usage and add memory efficient option for model saving
2024-08-19 22:30:24 +09:00
Kohya S
400955d3ea
add fine tuning FLUX.1 (WIP)
2024-08-17 15:36:18 +09:00
Kohya S
7db4222119
add sample image generation during training
2024-08-14 22:15:26 +09:00