rockerBOO | f62c68df3c | Make grad_norm and combined_grad_norm None if not recording | 2025-05-01 01:37:57 -04:00
Kohya S. | 59d98e45a9 | Merge pull request #1974 from rockerBOO/lora-ggpo: Add LoRA-GGPO for Flux | 2025-03-30 21:07:31 +09:00
rockerBOO | 182544dcce | Remove perturbation seed | 2025-03-26 14:23:04 -04:00
Kohya S | 6364379f17 | Merge branch 'dev' into sd3 | 2025-03-21 22:07:50 +09:00
rockerBOO | 3647d065b5 | Cache weight norm estimate on initialization; update norms every step | 2025-03-18 14:25:09 -04:00
rockerBOO | ea53290f62 | Add LoRA-GGPO for Flux | 2025-03-06 00:00:38 -05:00
Ivan Chikish | acdca2abb7 | Fix [occasionally] missing text encoder attn modules | 2025-03-01 20:35:45 +03:00
  Should fix #1952. I added an alternative name for CLIPAttention; I have no idea why the name changed, but it should now accept both names.
Dango233 | e54462a4a9 | Fix SD3 trained lora loading and merging | 2024-11-07 09:54:12 +00:00
kohya-ss | b502f58488 | Fix emb_dim to work | 2024-10-29 23:29:50 +09:00
kohya-ss | ce5b532582 | Fix additional LoRA to work | 2024-10-29 22:29:24 +09:00
kohya-ss | 0af4edd8a6 | Fix split_qkv | 2024-10-29 21:51:56 +09:00
Kohya S | b649bbf2b6 | Merge branch 'sd3' into sd3_5_support | 2024-10-27 10:20:35 +09:00
Kohya S | 731664b8c3 | Merge branch 'dev' into sd3 | 2024-10-27 10:20:14 +09:00
Kohya S | e070bd9973 | Merge branch 'main' into dev | 2024-10-27 10:19:55 +09:00
Kohya S | ca44e3e447 | Reduce VRAM usage instead of increasing main RAM usage | 2024-10-27 10:19:05 +09:00
Kohya S | 150579db32 | Merge branch 'sd3' into sd3_5_support | 2024-10-26 22:03:41 +09:00
Kohya S | 8549669f89 | Merge branch 'dev' into sd3 | 2024-10-26 22:02:57 +09:00
Kohya S | 900d551a6a | Merge branch 'main' into dev | 2024-10-26 22:02:36 +09:00
Kohya S | 56b4ea963e | Fix LoRA metadata hash calculation bug in svd_merge_lora.py, sdxl_merge_lora.py, and resize_lora.py (closes #1722) | 2024-10-26 22:01:10 +09:00
kohya-ss | d2c549d7b2 | support SD3 LoRA | 2024-10-25 21:58:31 +09:00
Kohya S | 822fe57859 | add workaround for 'Some tensors share memory' error #1614 | 2024-09-28 20:57:27 +09:00
Kohya S | 706a48d50e | Merge branch 'dev' into sd3 | 2024-09-19 21:15:39 +09:00
Kohya S | d7e14721e2 | Merge branch 'main' into dev | 2024-09-19 21:15:19 +09:00
Kohya S | 9c757c2fba | fix SDXL block index to match LBW | 2024-09-19 21:14:57 +09:00
Kohya S | 0cbe95bcc7 | fix text_encoder_lr to work with int (closes #1608) | 2024-09-17 21:21:28 +09:00
Kohya S | d8d15f1a7e | add support for specifying blocks in FLUX.1 LoRA training | 2024-09-16 23:14:09 +09:00
Kohya S | c9ff4de905 | Add support for specifying rank for each layer in FLUX.1 | 2024-09-14 22:17:52 +09:00
Kohya S | 2d8ee3c280 | OFT for FLUX.1 | 2024-09-14 15:48:16 +09:00
Kohya S | 0485f236a0 | Merge branch 'dev' into sd3 | 2024-09-13 22:39:24 +09:00
Kohya S | 93d9fbf607 | improve OFT implementation (closes #944) | 2024-09-13 22:37:11 +09:00
Kohya S | c15a3a1a65 | Merge branch 'dev' into sd3 | 2024-09-13 21:30:49 +09:00
Kohya S | 43ad73860d | Merge branch 'main' into dev | 2024-09-13 21:29:51 +09:00
Kohya S | b755ebd0a4 | add LBW support for SDXL merge LoRA | 2024-09-13 21:29:31 +09:00
Kohya S | f4a0bea6dc | format by black | 2024-09-13 21:26:06 +09:00
terracottahaniwa | 734d2e5b2b | Add Lora Block Weight (LBW) support to svd_merge_lora.py (#1575) | 2024-09-13 20:45:35 +09:00
  * support lora block weight
  * solve license incompatibility
  * Fix issue: lbw index calculation
Kohya S | f3ce80ef8f | Merge branch 'dev' into sd3 | 2024-09-13 19:49:16 +09:00
Kohya S | 9d2860760d | Merge branch 'main' into dev | 2024-09-13 19:48:53 +09:00
Kohya S | 3387dc7306 | formatting, update README | 2024-09-13 19:45:42 +09:00
Kohya S | 57ae44eb61 | refactor to make safer | 2024-09-13 19:45:00 +09:00
Maru-mee | 1d7118a622 | Support: OFT merge to base model (#1580) | 2024-09-13 19:01:36 +09:00
  * Support: OFT merge to base model
  * Fix typo
  * Fix typo_2
  * Delete unused parameter 'eye'
Kohya S | cefe52629e | fix old notation for TE LR in .toml to work | 2024-09-12 12:36:07 +09:00
Kohya S | d10ff62a78 | support individual LR for CLIP-L/T5XXL | 2024-09-10 20:32:09 +09:00
Kohya S | 90ed2dfb52 | feat: Add support for merging CLIP-L and T5XXL LoRA models | 2024-09-05 08:39:29 +09:00
Kohya S | 56cb2fc885 | support T5XXL LoRA, reduce peak memory usage #1560 | 2024-09-04 23:15:27 +09:00
Kohya S | b65ae9b439 | T5XXL LoRA training, fp8 T5XXL support | 2024-09-04 21:33:17 +09:00
Kohya S | 3be712e3e0 | feat: Update direct loading fp8 ckpt for LoRA training | 2024-08-27 21:40:02 +09:00
Kohya S | 0087a46e14 | FLUX.1 LoRA supports CLIP-L | 2024-08-27 19:59:40 +09:00
Kohya S | 5639c2adc0 | fix typo | 2024-08-24 16:37:49 +09:00
Kohya S | cf689e7aa6 | feat: Add option to split projection layers and apply LoRA | 2024-08-24 16:35:43 +09:00
Kohya S | 99744af53a | Merge branch 'dev' into sd3 | 2024-08-22 21:34:24 +09:00