alefh123
57757c08d5
Merge 68877e789b into 206adb6438
2025-09-30 09:54:22 +05:30
Kohya S.
206adb6438
Merge pull request #2216 from kohya-ss/fix-sdxl-textual-inversion-training-disable-mmap
...
fix: disable_mmap_safetensors not defined in SDXL TI training
2025-09-29 20:55:02 +09:00
Kohya S
60bfa97b19
fix: disable_mmap_safetensors not defined in SDXL TI training
2025-09-29 20:52:48 +09:00
Kohya S.
1470cb8508
Merge pull request #2185 from kohya-ss/update-gitignore-for-ai-agent-and-testing
...
gitignore: add CLAUDE.md, GEMINI.md, MagicMock and related files
2025-08-24 17:09:05 +09:00
Kohya S
69a85a0a11
gitignore: add CLAUDE.md, GEMINI.md, MagicMock and related files to ignore list
2025-08-24 17:08:08 +09:00
Kohya S.
14ee64823f
Merge pull request #2184 from kohya-ss/add-sponsor-logo
...
doc: add sponsor logo and announcements
2025-08-24 17:05:51 +09:00
kohya-ss
0e7f7808b0
doc: add sponsor logo and announcements
2025-08-23 18:44:23 +09:00
Kohya S.
2857f21abf
Merge pull request #2175 from woct0rdho/resize-control-lora
...
Resize ControlLoRA
2025-08-15 19:03:08 +09:00
woctordho
3ad71e1acf
Refactor to avoid mutable global variable
2025-08-15 11:24:51 +08:00
woctordho
c6fab554f4
Support resizing ControlLoRA
2025-08-15 03:00:40 +08:00
Symbiomatrix
3ce0c6e71f
Fix.
2025-08-15 02:59:03 +08:00
Symbiomatrix
63ec59fc0b
Support for multiple format loras.
2025-08-15 02:59:03 +08:00
Kohya S.
a21b6a917e
Merge pull request #2070 from kohya-ss/fix-mean-ar-error-nan
...
Fix mean image aspect ratio error calculation to avoid NaN values
2025-04-29 21:29:42 +09:00
Kohya S
4625b34f4e
Fix mean image aspect ratio error calculation to avoid NaN values
2025-04-29 21:27:04 +09:00
Kohya S.
3b25de1f17
Merge pull request #2065 from kohya-ss/kohya-ss-funding-yml
...
Create FUNDING.yml
2025-04-27 21:29:44 +09:00
Kohya S.
f0b07c52ab
Create FUNDING.yml
2025-04-27 21:28:38 +09:00
Kohya S.
52c8dec953
Merge pull request #2015 from DKnight54/uncache_vae_batch
...
Using --vae_batch_size to set batch size for dynamic latent generation
2025-04-07 21:48:02 +09:00
DKnight54
381303d64f
Update train_network.py
2025-03-29 02:26:18 +08:00
Kohya S.
a0f11730f7
Merge pull request #1966 from sdbds/faster_fix_sdxl
...
Faster: fix bug where an assert inherited from SD1.5 prevents SDXL from using 32
2025-03-24 21:53:42 +09:00
Kohya S
5253a38783
Merge branch 'main' into dev
2025-03-21 22:07:03 +09:00
Kohya S
8f4ee8fc34
doc: update README for latest
v0.9.1
2025-03-21 22:05:48 +09:00
Kohya S.
367f348430
Merge pull request #1964 from Nekotekina/main
...
Fix missing text encoder attn modules
2025-03-21 21:59:03 +09:00
sdbds
3f49053c90
faster: fix bug where an assert inherited from SD1.5 prevents SDXL from using 32
2025-03-02 19:32:06 +08:00
Ivan Chikish
acdca2abb7
Fix [occasionally] missing text encoder attn modules
...
Should fix #1952
I added an alternative name for CLIPAttention.
I have no idea why this name changed.
Now it should accept both names.
2025-03-01 20:35:45 +03:00
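The fix above matches text encoder attention modules by name and accepts both the old and the renamed class. A minimal sketch of that "accept both names" approach; `CLIPSdpaAttention` is an assumed alternative name and `is_text_encoder_attn_module` a hypothetical helper, not the repository's actual code:

```python
# Hypothetical sketch: detect text encoder attention modules by class name,
# accepting both the original and a renamed variant of CLIPAttention.
# "CLIPSdpaAttention" is an assumed alternative name for illustration.
ACCEPTED_ATTN_CLASS_NAMES = {"CLIPAttention", "CLIPSdpaAttention"}

def is_text_encoder_attn_module(module) -> bool:
    # Compare by class name rather than class identity, so a rename in the
    # transformers library does not silently drop these modules.
    return module.__class__.__name__ in ACCEPTED_ATTN_CLASS_NAMES
```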
Kohya S.
b286304e5f
Merge pull request #1953 from Disty0/dev
...
Update IPEX libs
2025-02-26 21:03:09 +09:00
Disty0
f68702f71c
Update IPEX libs
2025-02-25 21:27:41 +03:00
Kohya S.
386b7332c6
Merge pull request #1918 from tsukimiya/fix_vperd_warning
...
Remove v-pred warning.
2025-02-24 18:55:25 +09:00
Kohya S.
59ae9ea20c
Merge pull request #1945 from yidiq7/dev
...
Remove position_ids for V2
2025-02-24 18:53:46 +09:00
Yidi
13df47516d
Remove position_ids for V2
...
The position_ids cause errors with newer versions of the transformers library.
This was already fixed in convert_ldm_clip_checkpoint_v1() but
not in v2.
The new code applies the same fix to convert_ldm_clip_checkpoint_v2().
2025-02-20 04:49:51 -05:00
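The fix described above amounts to dropping the position_ids buffer from the converted text encoder state dict before loading. A minimal sketch, assuming the key name `text_model.embeddings.position_ids` and a hypothetical helper `strip_position_ids` (not the repository's actual `convert_ldm_clip_checkpoint_v2()` code):

```python
def strip_position_ids(text_enc_sd: dict) -> dict:
    # Drop the position_ids buffer if present: newer transformers versions
    # recompute it internally and reject it as an unexpected key on load.
    # The key name below is an assumption for illustration.
    text_enc_sd.pop("text_model.embeddings.position_ids", None)
    return text_enc_sd
```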
tsukimiya
4a71687d20
Remove unnecessary warning
...
(probably a fix missed in be14c06267)
2025-02-04 00:42:27 +09:00
alefh123
68877e789b
Optimize Latent Caching Speed with VAE Optimizations
...
PR Summary:
This PR accelerates latent caching, a slow preprocessing step, by optimizing the VAE's encoding process.
Key Changes:
Mixed Precision Caching: VAE encoding now uses FP16 (or BF16) during latent caching for faster computation and reduced memory use.
Channels-Last VAE: VAE is temporarily switched to channels_last memory format during caching to improve GPU performance.
--vae_batch_size Utilization: This leverages the existing --vae_batch_size option; users should increase it for further speedups.
Benefits:
Significantly Faster Latent Caching: Reduces preprocessing time.
Improved GPU Efficiency: Optimizes VAE encoding on GPUs.
Impact: Faster training setup due to quicker latent caching.
Based on the optimizations implemented (mixed precision and channels-last format for the VAE during caching), a speedup of 2x to 4x is a reasonable estimate.
2025-01-29 16:56:34 +02:00
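The three optimizations in the PR summary above can be sketched together: batched encoding, mixed precision via autocast, and channels-last memory format. This is an illustrative outline under stated assumptions, not the PR's actual code; `encode_latents` and its parameters are hypothetical, and on CUDA you would pass `device="cuda"` and typically `dtype=torch.float16`:

```python
import torch

@torch.no_grad()
def encode_latents(vae_encode, images: torch.Tensor, device: str = "cpu",
                   dtype=torch.bfloat16, batch_size: int = 2) -> torch.Tensor:
    # Process images in batches (the --vae_batch_size analogue), encode in
    # mixed precision, and use channels_last memory format, which can improve
    # convolution throughput on GPUs.
    latents = []
    for i in range(0, images.shape[0], batch_size):
        batch = images[i:i + batch_size].to(device)
        batch = batch.to(memory_format=torch.channels_last)
        with torch.autocast(device_type=device, dtype=dtype):
            out = vae_encode(batch)
        # Cache latents in full precision on the CPU side.
        latents.append(out.float().cpu())
    return torch.cat(latents)
```

Increasing `batch_size` trades VRAM for fewer, larger VAE forward passes, which is why the PR recommends raising `--vae_batch_size`.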
Kohya S.
6e3c1d0b58
Merge pull request #1879 from kohya-ss/dev
...
merge dev to main
v0.9.0
2025-01-17 23:25:56 +09:00
Kohya S
345daaa986
update README for merging
2025-01-17 23:22:38 +09:00
Kohya S
6adb69be63
Merge branch 'main' into dev
2024-11-07 21:42:44 +09:00
Kohya S
e5ac095749
add about dev and sd3 branch to README
v0.8.8
2024-11-07 21:39:47 +09:00
Kohya S
e070bd9973
Merge branch 'main' into dev
2024-10-27 10:19:55 +09:00
Kohya S
ca44e3e447
reduce VRAM usage, instead of increasing main RAM usage
2024-10-27 10:19:05 +09:00
Kohya S
900d551a6a
Merge branch 'main' into dev
2024-10-26 22:02:36 +09:00
Kohya S
56b4ea963e
Fix LoRA metadata hash calculation bug in svd_merge_lora.py, sdxl_merge_lora.py, and resize_lora.py closes #1722
2024-10-26 22:01:10 +09:00
Kohya S
b1e6504007
update README
2024-10-25 18:56:25 +09:00
Kohya S.
b8ae745d0c
Merge pull request #1717 from catboxanon/fix/remove-vpred-warnings
...
Remove v-pred warnings
2024-10-25 18:49:40 +09:00
Kohya S.
c632af860e
Merge pull request #1715 from catboxanon/vpred-ztsnr-fixes
...
Update debiased estimation loss function to accommodate V-pred
2024-10-25 18:48:14 +09:00
catboxanon
be14c06267
Remove v-pred warnings
...
Different model architectures, such as SDXL, can take advantage of
v-pred. It doesn't make sense to include these warnings anymore.
2024-10-22 12:13:51 -04:00
catboxanon
0e7c592933
Remove scale_v_pred_loss_like_noise_pred deprecation
...
https://github.com/kohya-ss/sd-scripts/pull/1715#issuecomment-2427876376
2024-10-22 11:19:34 -04:00
catboxanon
e1b63c2249
Only add warning for deprecated scaling vpred loss function
2024-10-21 08:12:53 -04:00
catboxanon
8fc30f8205
Fix training for V-pred and ztSNR
...
1) Updates debiased estimation loss function for V-pred.
2) Prevents now-deprecated scaling of loss if ztSNR is enabled.
2024-10-21 07:34:33 -04:00
Kohya S
012e7e63a5
fix linear/cosine scheduler to work, closes #1651 ref #1393
2024-09-29 23:18:16 +09:00
Kohya S
1567549220
update help text #1632
2024-09-29 09:51:36 +09:00
Kohya S
fe2aa32484
adjust min/max bucket reso divisible by reso steps #1632
2024-09-29 09:49:25 +09:00
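The bucket adjustment above can be sketched as rounding the min/max resolutions onto the step grid (min rounded up, max rounded down) so every generated bucket resolution is a multiple of the step size. `adjust_bucket_resos` is a hypothetical helper for illustration, not the repository's actual function:

```python
def adjust_bucket_resos(min_reso: int, max_reso: int, reso_steps: int):
    # Round min up and max down to the nearest multiple of reso_steps,
    # so every bucket resolution in [min, max] lies on the step grid.
    min_adj = ((min_reso + reso_steps - 1) // reso_steps) * reso_steps
    max_adj = (max_reso // reso_steps) * reso_steps
    return min_adj, max_adj
```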
Kohya S
ce49ced699
update readme
2024-09-26 21:37:40 +09:00