Commit Graph

319 Commits

Author SHA1 Message Date
Hina Chen
05bb9183fa Add Validation loss for LoRA training 2024-12-27 16:47:59 +08:00
Kohya S.
14c9ba925f Merge pull request #1811 from rockerBOO/schedule-free-prodigy
Allow unknown schedule-free optimizers to continue to the module loader
2024-12-01 21:51:25 +09:00
Kohya S
cc11989755 fix: refactor huber-loss calculation in multiple training scripts 2024-12-01 21:20:28 +09:00
rockerBOO
6593cfbec1 Fix d * lr step log 2024-11-29 14:16:24 -05:00
rockerBOO
87f5224e2d Support d*lr for ProdigyPlus optimizer 2024-11-29 14:16:00 -05:00
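Prodigy-family optimizers adapt a distance estimate d on the fly, so the effective step size is d * lr rather than lr alone. A minimal sketch of reading it back for logging, assuming the optimizer exposes d in its param groups as upstream Prodigy does:

```python
def effective_lrs(optimizer):
    # Prodigy-style optimizers keep their adaptive distance estimate "d"
    # in each param group; the effective step size is d * lr.
    # Assumption: the group exposes a "d" key, as upstream Prodigy does.
    return [group.get("d", 1.0) * group["lr"] for group in optimizer.param_groups]
```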
recris
420a180d93 Implement pseudo Huber loss for Flux and SD3 2024-11-27 18:37:09 +00:00
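For reference, the standard pseudo-Huber loss is L_δ(x) = δ²(√(1 + (x/δ)²) − 1): quadratic for small residuals like L2, linear for large ones like L1, so outlier pixels pull less hard on the gradients. A minimal PyTorch sketch of that textbook formulation (the parameter name huber_c is illustrative, not necessarily what the commit uses):

```python
import torch

def pseudo_huber_loss(pred: torch.Tensor, target: torch.Tensor, huber_c: float = 0.1) -> torch.Tensor:
    # Quadratic near zero, linear in the tails; huber_c sets the transition
    # point (the default here is illustrative, not the commit's value).
    diff = pred - target
    return huber_c**2 * (torch.sqrt(1 + (diff / huber_c) ** 2) - 1)
```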
Kohya S
2cb7a6db02 feat: add block swap for FLUX.1/SD3 LoRA training 2024-11-12 21:39:13 +09:00
Kohya S
cde90b8903 feat: implement block swapping for FLUX.1 LoRA (WIP) 2024-11-12 08:49:05 +09:00
kohya-ss
1065dd1b56 Fix dropout_rate to work for Text Encoders (TEs) 2024-10-27 19:36:36 +09:00
kohya-ss
a1255d637f Fix SD3 LoRA training to work (WIP) 2024-10-27 17:03:36 +09:00
Kohya S
db2b4d41b9 Add dropout rate arguments for CLIP-L, CLIP-G, and T5, fix Text Encoders LoRA not trained 2024-10-27 16:42:58 +09:00
kohya-ss
d2c549d7b2 support SD3 LoRA 2024-10-25 21:58:31 +09:00
Kohya S
5fba6f514a Merge branch 'dev' into sd3 2024-10-25 19:03:27 +09:00
catboxanon
e1b63c2249 Only add a warning for the deprecated V-pred loss scaling 2024-10-21 08:12:53 -04:00
catboxanon
8fc30f8205 Fix training for V-pred and ztSNR
1) Updates debiased estimation loss function for V-pred.
2) Prevents now-deprecated scaling of loss if ztSNR is enabled.
2024-10-21 07:34:33 -04:00
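Debiased estimation reweights the per-timestep MSE by the signal-to-noise ratio so low-noise timesteps do not dominate training. A hedged sketch of the idea, assuming a 1/√SNR weight for ε-prediction and a 1/(SNR + 1) weight for V-prediction; the exact weights in the commit may differ:

```python
import torch

def debiased_weight(snr_t: torch.Tensor, v_prediction: bool = False, max_snr: float = 1000.0) -> torch.Tensor:
    # Clamp SNR so near-zero-noise timesteps do not produce huge weights.
    snr_t = torch.clamp(snr_t, max=max_snr)
    # Assumption: 1/sqrt(SNR) for epsilon-prediction, 1/(SNR + 1) for V-prediction.
    return 1.0 / (snr_t + 1.0) if v_prediction else 1.0 / torch.sqrt(snr_t)
```

The returned weight is multiplied into the per-sample loss before reduction.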
Kohya S
3cc5b8db99 Differential Output Preservation loss for SDXL 2024-10-18 20:57:13 +09:00
kohya-ss
c80c304779 Refactor caching in train scripts 2024-10-12 20:18:41 +09:00
kohya-ss
ff4083b910 Merge branch 'sd3' into multi-gpu-caching 2024-10-12 16:39:36 +09:00
Kohya S
886f75345c support weighted captions for sdxl LoRA and fine tuning 2024-10-10 08:27:15 +09:00
Kohya S
ba08a89894 Call optimizer eval/train for sample_at_first, and set train after resuming; closes #1667 2024-10-04 20:35:16 +09:00
Kohya S
56a63f01ae Merge branch 'sd3' into multi-gpu-caching 2024-09-29 10:12:18 +09:00
Kohya S
d050638571 Merge branch 'dev' into sd3 2024-09-29 10:00:01 +09:00
Kohya S
fe2aa32484 Adjust min/max bucket reso to be divisible by reso steps #1632 2024-09-29 09:49:25 +09:00
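Aspect-ratio bucketing needs every bucket resolution to be a multiple of the step size, so the min/max bounds themselves must land on that grid. A minimal sketch of the adjustment, assuming min is rounded up and max rounded down so the adjusted range never exceeds the requested one:

```python
def adjust_bucket_range(min_reso: int, max_reso: int, reso_steps: int = 64) -> tuple[int, int]:
    # Assumption: round min up and max down to multiples of reso_steps,
    # keeping the adjusted range inside the requested one.
    min_reso = ((min_reso + reso_steps - 1) // reso_steps) * reso_steps
    max_reso = (max_reso // reso_steps) * reso_steps
    return min_reso, max_reso
```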
kohya-ss
9249d00311 Experimental support for multi-GPU latents caching 2024-09-26 22:19:56 +09:00
Kohya S
583d4a436c add compatibility for int LR (D-Adaptation etc.) #1620 2024-09-20 22:22:24 +09:00
Akegarasu
0535cd29b9 fix: backward compatibility for text_encoder_lr 2024-09-20 10:05:22 +08:00
Kohya S
1286e00bb0 Fix to call train/eval for schedule-free optimizers #1605 2024-09-18 21:31:54 +09:00
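Schedule-free optimizers keep two sets of weights and must be told which one to expose: optimizer.eval() before validation or sampling, optimizer.train() before resuming updates. A minimal sketch of the calling pattern using the schedulefree package's documented API (the toy model is illustrative):

```python
import torch
import torch.nn as nn
import schedulefree

model = nn.Linear(8, 8)  # toy stand-in for the network being trained
opt = schedulefree.AdamWScheduleFree(model.parameters(), lr=1e-3)

opt.train()  # optimizer must be in train mode while stepping
loss = model(torch.randn(4, 8)).pow(2).mean()
loss.backward()
opt.step()
opt.zero_grad()

opt.eval()  # switch to the evaluation weights before sampling/validation
with torch.no_grad():
    _ = model(torch.randn(4, 8))
opt.train()  # and back again before the next training step
```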
Plat
a823fd9fb8 Improve wandb logging (#1576)
* fix: wrong training steps were recorded to wandb, and no log was sent when logging_dir was not specified

* fix: checking of whether wandb is enabled

* feat: log images to wandb with their positive prompt as captions

* feat: logging sample images' caption for sd3 and flux

* fix: import wandb before use
2024-09-11 22:21:16 +09:00
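wandb's stock API already supports captioned images, so pairing each sample with its positive prompt is a one-liner per image. A minimal sketch (assumes wandb.init() has been called; names are illustrative):

```python
import wandb

def log_samples(images, prompts, step: int):
    # wandb.Image accepts a caption; using the positive prompt makes
    # sample grids self-describing in the dashboard.
    wandb.log(
        {"sample_images": [wandb.Image(img, caption=p) for img, p in zip(images, prompts)]},
        step=step,
    )
```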
Kohya S
d10ff62a78 support individual LR for CLIP-L/T5XXL 2024-09-10 20:32:09 +09:00
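Per-encoder learning rates fall out of standard optimizer param groups. A minimal sketch with toy stand-ins for the two text encoders (modules and LR values are illustrative, not the commit's defaults):

```python
import torch.nn as nn
from torch.optim import AdamW

clip_l = nn.Linear(768, 768)    # toy stand-in for CLIP-L
t5xxl = nn.Linear(4096, 4096)   # toy stand-in for T5XXL

# One param group per encoder gives each its own learning rate.
optimizer = AdamW([
    {"params": clip_l.parameters(), "lr": 1e-4},  # assumed CLIP-L LR
    {"params": t5xxl.parameters(), "lr": 5e-6},   # assumed T5XXL LR
])
```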
Kohya S
2889108d85 feat: Add --cpu_offload_checkpointing option to LoRA training 2024-09-05 20:58:33 +09:00
Kohya S
b65ae9b439 T5XXL LoRA training, fp8 T5XXL support 2024-09-04 21:33:17 +09:00
Akegarasu
35882f8d5b fix 2024-08-29 23:03:43 +08:00
Akegarasu
34f2315047 fix: text_encoder_conds referenced before assignment 2024-08-29 22:33:37 +08:00
Kohya S
0087a46e14 FLUX.1 LoRA supports CLIP-L 2024-08-27 19:59:40 +09:00
Kohya S
9e72be0a13 Fix debug_dataset to work 2024-08-20 08:19:00 +09:00
Kohya S.
e2d822cad7 Merge pull request #1452 from fireicewolf/sd3-devel
Fix AttributeError: 'T5EncoderModel' object has no attribute 'text_model' while loading the T5 model on GPU.
2024-08-15 21:12:19 +09:00
Kohya S
7db4222119 add sample image generation during training 2024-08-14 22:15:26 +09:00
DukeG
9760d097b0 Fix AttributeError: 'T5EncoderModel' object has no attribute 'text_model'
while loading the T5 model on GPU.
2024-08-14 19:58:54 +08:00
Kohya S
8a0f12dde8 update FLUX LoRA training 2024-08-10 23:42:05 +09:00
Kohya S
36b2e6fc28 add FLUX.1 LoRA training 2024-08-09 22:56:48 +09:00
Kohya S
41dee60383 Refactor caching mechanism for latents and text encoder outputs, etc. 2024-07-27 13:50:05 +09:00
Kohya S
4dbcef429b update for corner cases 2024-06-04 21:26:55 +09:00
Kohaku-Blueleaf
3eb27ced52 Skip the final step 2024-05-31 12:24:15 +08:00
Kohaku-Blueleaf
b2363f1021 Final implementation 2024-05-31 12:20:20 +08:00
Kohya S
da6fea3d97 simplify and update alpha mask to work with various cases 2024-05-19 21:26:18 +09:00
u-haru
db6752901f Add an option to use the image's alpha channel as a loss mask (#1223)
* Add alpha_mask parameter and apply masked loss

* Fix type hint in trim_and_resize_if_required function

* Refactor code to use keyword arguments in train_util.py

* Fix alpha mask flipping logic

* Fix alpha mask initialization

* Fix alpha_mask transformation

* Cache alpha_mask

* Update alpha_masks to be on CPU

* Set flipped_alpha_masks to Null if option disabled

* Check if alpha_mask is None

* Set alpha_mask to None if option disabled

* Add description of alpha_mask option to docs
2024-05-19 19:07:25 +09:00
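Using the alpha channel as a loss mask amounts to normalizing it to [0, 1] and multiplying the per-pixel loss before reduction, so transparent regions contribute nothing. A minimal sketch under that assumption (resizing the mask to latent resolution, which latent-space training also needs, is omitted):

```python
import torch

def apply_alpha_mask(loss: torch.Tensor, alpha: torch.Tensor) -> torch.Tensor:
    # loss:  per-pixel loss, shape (B, C, H, W)
    # alpha: alpha channel in [0, 255], shape (B, 1, H, W)
    # Assumption: the mask linearly scales the loss; fully transparent
    # pixels contribute nothing, fully opaque pixels contribute in full.
    mask = alpha.to(loss.dtype) / 255.0
    return loss * mask
```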
Kohya S
c68baae480 Add --log_config option to enable/disable outputting the training config 2024-05-19 17:21:04 +09:00
Kohya S
47187f7079 Merge pull request #1285 from ccharest93/main
Hyperparameter tracking
2024-05-19 16:31:33 +09:00
Kohya S
52e64c69cf add debug log 2024-05-04 18:43:52 +09:00
Kohya S
58c2d856ae support block dim/lr for sdxl 2024-05-03 22:18:20 +09:00