Kohya S
2cb7a6db02
feat: add block swap for FLUX.1/SD3 LoRA training
2024-11-12 21:39:13 +09:00
Kohya S
5e86323f12
Update README and clean up the code for SD3 timesteps
2024-11-07 21:27:12 +09:00
Dango233
bafd10d558
Fix typo
2024-11-07 18:21:04 +08:00
Dango233
40ed54bfc0
Simplify Timestep weighting
...
* Remove diffusers dependency in timestep & sigma calculation
* Support shift setting
* Add uniform distribution
* Default to uniform distribution and shift 1
2024-11-07 09:53:54 +00:00
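The timestep-weighting change above (default to a uniform distribution with shift 1) matches the flow-matching sigma shift used for SD3-family models. A minimal sketch, with function names of my own choosing rather than the ones in sd-scripts:

```python
import random


def shift_sigma(sigma: float, shift: float) -> float:
    """SD3-style sigma shift: sigma' = shift * sigma / (1 + (shift - 1) * sigma).
    With shift = 1 this is the identity, matching the new default."""
    return shift * sigma / (1.0 + (shift - 1.0) * sigma)


def sample_sigma(shift: float = 1.0, rng: random.Random = random) -> float:
    """Draw sigma uniformly from [0, 1), then apply the shift
    (the new default: uniform distribution, shift 1)."""
    return shift_sigma(rng.random(), shift)
```

Note the endpoints are preserved for any shift (sigma 0 maps to 0, sigma 1 to 1), so the shift only reweights the interior of the schedule.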
Kohya S
1434d8506f
Support SD3.5M multi-resolution training
2024-10-31 19:58:22 +09:00
Kohya S
bdddc20d68
support SD3.5M
2024-10-30 12:51:49 +09:00
Kohya S
75554867ce
Fix error on saving T5XXL
2024-10-29 08:34:31 +09:00
Kohya S
af8e216035
Fix sample image generation to work with block swap
2024-10-28 22:08:57 +09:00
Kohya S
db2b4d41b9
Add dropout rate arguments for CLIP-L, CLIP-G, and T5; fix Text Encoder LoRAs not being trained
2024-10-27 16:42:58 +09:00
kohya-ss
014064fd81
Fix sample image generation failing without a seed (close #1726)
2024-10-26 18:59:45 +09:00
kohya-ss
d2c549d7b2
support SD3 LoRA
2024-10-25 21:58:31 +09:00
Kohya S
e3c43bda49
reduce memory usage in sample image generation
2024-10-24 20:35:47 +09:00
Kohya S
623017f716
refactor SD3 CLIP to transformers etc.
2024-10-24 19:49:28 +09:00
Plat
a823fd9fb8
Improve wandb logging ( #1576 )
...
* fix: wrong training steps were recorded to wandb, and no log was sent when logging_dir was not specified
* fix: the check for whether wandb is enabled
* feat: log images to wandb with their positive prompt as captions
* feat: logging sample images' caption for sd3 and flux
* fix: import wandb before use
2024-09-11 22:21:16 +09:00
Kohya S
002d75179a
sample images for training
2024-07-29 23:18:34 +09:00
Kohya S
41dee60383
Refactor caching mechanism for latents and text encoder outputs, etc.
2024-07-27 13:50:05 +09:00
Kohya S
3d402927ef
WIP: update new latents caching
2024-07-09 23:15:38 +09:00
Kohya S
9dc7997803
fix typo
2024-07-09 20:37:00 +09:00
Kohya S
3ea4fce5e0
load models one by one
2024-07-08 22:04:43 +09:00
Kohya S
c9de7c4e9a
WIP: new latents caching
2024-07-08 19:48:28 +09:00
Kohya S
19086465e8
Fix fp16 mixed precision: model was left in bf16 when full_bf16 was not set
2024-06-29 17:21:25 +09:00
Kohya S
8f2ba27869
support text_encoder_batch_size for caching
2024-06-26 20:36:22 +09:00
Kohya S
d53ea22b2a
sd3 training
2024-06-23 23:38:20 +09:00