alefh123
ba9db44649
Merge 68877e789b into a21b6a917e
2025-04-29 20:51:00 +08:00
Kohya S.
a21b6a917e
Merge pull request #2070 from kohya-ss/fix-mean-ar-error-nan
...
Fix mean image aspect ratio error calculation to avoid NaN values
2025-04-29 21:29:42 +09:00
Kohya S
4625b34f4e
Fix mean image aspect ratio error calculation to avoid NaN values
2025-04-29 21:27:04 +09:00
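The NaN fix above most likely guards the mean calculation against an empty list of aspect-ratio errors (mean of zero elements is 0/0 = NaN). A minimal sketch; the function name and signature are assumptions, not the actual sd-scripts code:

```python
def mean_ar_error(ar_errors):
    # Return 0.0 instead of NaN when no aspect-ratio errors were recorded,
    # e.g. when no images fell into any bucket.
    if not ar_errors:
        return 0.0
    return sum(ar_errors) / len(ar_errors)
```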
Kohya S.
3b25de1f17
Merge pull request #2065 from kohya-ss/kohya-ss-funding-yml
...
Create FUNDING.yml
2025-04-27 21:29:44 +09:00
Kohya S.
f0b07c52ab
Create FUNDING.yml
2025-04-27 21:28:38 +09:00
Kohya S.
52c8dec953
Merge pull request #2015 from DKnight54/uncache_vae_batch
...
Using --vae_batch_size to set batch size for dynamic latent generation
2025-04-07 21:48:02 +09:00
DKnight54
381303d64f
Update train_network.py
2025-03-29 02:26:18 +08:00
Kohya S.
a0f11730f7
Merge pull request #1966 from sdbds/faster_fix_sdxl
...
Faster: fix bug where an SD1.5-era assert prevents SDXL from using 32
2025-03-24 21:53:42 +09:00
Kohya S
5253a38783
Merge branch 'main' into dev
2025-03-21 22:07:03 +09:00
Kohya S
8f4ee8fc34
doc: update README for latest
v0.9.1
2025-03-21 22:05:48 +09:00
Kohya S.
367f348430
Merge pull request #1964 from Nekotekina/main
...
Fix missing text encoder attn modules
2025-03-21 21:59:03 +09:00
sdbds
3f49053c90
faster: fix bug where an SD1.5-era assert prevents SDXL from using 32
2025-03-02 19:32:06 +08:00
Ivan Chikish
acdca2abb7
Fix [occasionally] missing text encoder attn modules
...
Should fix #1952.
I added an alternative name for CLIPAttention; I'm not sure why the name changed, but both names are now accepted.
2025-03-01 20:35:45 +03:00
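The change above can be sketched as matching text encoder attention modules by class name against a set that includes both the old and the new name. Assumptions here: `CLIPSdpaAttention` as the newer name, and the helper's name and shape; the real fix lives inside sd-scripts' LoRA module discovery.

```python
# Accept both class names so module targeting works across transformers versions.
TEXT_ENCODER_ATTN_CLASS_NAMES = {"CLIPAttention", "CLIPSdpaAttention"}

def find_text_encoder_attn_modules(model):
    # Collect (name, module) pairs whose class name matches either variant.
    return [
        (name, module)
        for name, module in model.named_modules()
        if module.__class__.__name__ in TEXT_ENCODER_ATTN_CLASS_NAMES
    ]
```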
Kohya S.
b286304e5f
Merge pull request #1953 from Disty0/dev
...
Update IPEX libs
2025-02-26 21:03:09 +09:00
Disty0
f68702f71c
Update IPEX libs
2025-02-25 21:27:41 +03:00
Kohya S.
386b7332c6
Merge pull request #1918 from tsukimiya/fix_vperd_warning
...
Remove v-pred warning.
2025-02-24 18:55:25 +09:00
Kohya S.
59ae9ea20c
Merge pull request #1945 from yidiq7/dev
...
Remove position_ids for V2
2025-02-24 18:53:46 +09:00
Yidi
13df47516d
Remove position_ids for V2
...
The position_ids entries cause errors with newer versions of transformers.
This was already fixed in convert_ldm_clip_checkpoint_v1() but
not in v2; the new code applies the same fix to convert_ldm_clip_checkpoint_v2().
2025-02-20 04:49:51 -05:00
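The v2 fix presumably mirrors the v1 one: drop position_ids entries from the converted state dict, since newer transformers versions no longer register them as a buffer and refuse to load them. A minimal sketch; the helper name is an assumption:

```python
def strip_position_ids(state_dict):
    # Newer transformers versions no longer expect position_ids in the
    # checkpoint, so keep every key except those entries.
    return {
        key: value
        for key, value in state_dict.items()
        if not key.endswith("position_ids")
    }
```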
tsukimiya
4a71687d20
Remove unnecessary warning
...
(likely an omission from the fix in be14c06267)
2025-02-04 00:42:27 +09:00
alefh123
68877e789b
Optimize Latent Caching Speed with VAE Optimizations
...
PR Summary:
This PR accelerates latent caching, a slow preprocessing step, by optimizing the VAE's encoding process.
Key Changes:
Mixed Precision Caching: VAE encoding now uses FP16 (or BF16) during latent caching for faster computation and reduced memory use.
Channels-Last VAE: VAE is temporarily switched to channels_last memory format during caching to improve GPU performance.
--vae_batch_size Utilization: This leverages the existing --vae_batch_size option; users should increase it for further speedups.
Benefits:
Significantly Faster Latent Caching: Reduces preprocessing time.
Improved GPU Efficiency: Optimizes VAE encoding on GPUs.
Impact: Faster training setup due to quicker latent caching.
Based on the optimizations implemented (mixed precision and a channels-last VAE during caching), a speedup of 2x to 4x is a reasonable estimate.
2025-01-29 16:56:34 +02:00
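The key changes in the PR above can be sketched roughly as follows. `TinyVAEEncoder` is a stand-in just to make the sketch runnable, and `cache_latents` is a hypothetical helper, not the actual sd-scripts code:

```python
import torch
import torch.nn as nn

class TinyVAEEncoder(nn.Module):
    """Stand-in for the real VAE encoder (illustration only)."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 4, kernel_size=8, stride=8)

    def encode(self, x):
        return self.conv(x)

def cache_latents(vae, images, dtype=torch.bfloat16):
    # Channels-last memory format tends to improve convolution throughput on GPUs.
    vae = vae.to(memory_format=torch.channels_last)
    images = images.to(memory_format=torch.channels_last)
    # Mixed precision: run the encode under autocast in FP16/BF16.
    with torch.no_grad(), torch.autocast(device_type=images.device.type, dtype=dtype):
        latents = vae.encode(images)
    return latents.float()  # convert back to FP32 for storage

vae = TinyVAEEncoder()
# --vae_batch_size controls how many images are encoded per call;
# larger values amortize per-call overhead.
batch = torch.randn(4, 3, 64, 64)
latents = cache_latents(vae, batch)
```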
Kohya S.
6e3c1d0b58
Merge pull request #1879 from kohya-ss/dev
...
merge dev to main
v0.9.0
2025-01-17 23:25:56 +09:00
Kohya S
345daaa986
update README for merging
2025-01-17 23:22:38 +09:00
Kohya S
6adb69be63
Merge branch 'main' into dev
2024-11-07 21:42:44 +09:00
Kohya S
e5ac095749
add about dev and sd3 branch to README
v0.8.8
2024-11-07 21:39:47 +09:00
Kohya S
e070bd9973
Merge branch 'main' into dev
2024-10-27 10:19:55 +09:00
Kohya S
ca44e3e447
reduce VRAM usage, instead of increasing main RAM usage
2024-10-27 10:19:05 +09:00
Kohya S
900d551a6a
Merge branch 'main' into dev
2024-10-26 22:02:36 +09:00
Kohya S
56b4ea963e
Fix LoRA metadata hash calculation bug in svd_merge_lora.py, sdxl_merge_lora.py, and resize_lora.py closes #1722
2024-10-26 22:01:10 +09:00
Kohya S
b1e6504007
update README
2024-10-25 18:56:25 +09:00
Kohya S.
b8ae745d0c
Merge pull request #1717 from catboxanon/fix/remove-vpred-warnings
...
Remove v-pred warnings
2024-10-25 18:49:40 +09:00
Kohya S.
c632af860e
Merge pull request #1715 from catboxanon/vpred-ztsnr-fixes
...
Update debiased estimation loss function to accommodate V-pred
2024-10-25 18:48:14 +09:00
catboxanon
be14c06267
Remove v-pred warnings
...
Different model architectures, such as SDXL, can take advantage of
v-pred. It doesn't make sense to include these warnings anymore.
2024-10-22 12:13:51 -04:00
catboxanon
0e7c592933
Remove scale_v_pred_loss_like_noise_pred deprecation
...
https://github.com/kohya-ss/sd-scripts/pull/1715#issuecomment-2427876376
2024-10-22 11:19:34 -04:00
catboxanon
e1b63c2249
Only add warning for deprecated scaling vpred loss function
2024-10-21 08:12:53 -04:00
catboxanon
8fc30f8205
Fix training for V-pred and ztSNR
...
1) Updates debiased estimation loss function for V-pred.
2) Prevents now-deprecated scaling of loss if ztSNR is enabled.
2024-10-21 07:34:33 -04:00
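One plausible shape for the updated debiased estimation weighting described above (the exact form in the PR may differ; `snr_t` and the v-prediction branch are assumptions): epsilon prediction keeps the usual 1/sqrt(SNR) weight, while v-prediction losses carry an extra (SNR + 1) factor relative to epsilon-prediction losses, so that branch is rescaled accordingly.

```python
import torch

def apply_debiased_estimation(loss, snr_t, v_prediction=False):
    # Clamp very large SNR values for numerical stability.
    snr_t = torch.minimum(snr_t, torch.full_like(snr_t, 1000.0))
    if v_prediction:
        # v-pred loss already includes an extra (SNR + 1) factor relative to
        # epsilon-pred, so the debiased weight is rescaled for it.
        weight = 1.0 / (snr_t + 1.0)
    else:
        weight = 1.0 / torch.sqrt(snr_t)
    return weight * loss
```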
Kohya S
012e7e63a5
fix linear/cosine schedulers to work, closes #1651 ref #1393
2024-09-29 23:18:16 +09:00
Kohya S
1567549220
update help text #1632
2024-09-29 09:51:36 +09:00
Kohya S
fe2aa32484
adjust min/max bucket reso divisible by reso steps #1632
2024-09-29 09:49:25 +09:00
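The adjustment above can be sketched as rounding the minimum resolution up and the maximum down to multiples of the bucket resolution steps; function and argument names here are assumptions:

```python
def adjust_bucket_resos(min_reso, max_reso, reso_steps=64):
    # Round min up and max down so both are divisible by reso_steps;
    # otherwise bucket generation can produce resolutions outside the range.
    min_reso = ((min_reso + reso_steps - 1) // reso_steps) * reso_steps
    max_reso = (max_reso // reso_steps) * reso_steps
    return min_reso, max_reso
```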
Kohya S
ce49ced699
update readme
2024-09-26 21:37:40 +09:00
Kohya S
a94bc84dec
fix bitsandbytes optimizers to work with full path #1640
2024-09-26 21:37:31 +09:00
Kohya S.
4296e286b8
Merge pull request #1640 from sdbds/ademamix8bit
...
New optimizers: AdEMAMix8bit and PagedAdEMAMix8bit
2024-09-26 21:20:19 +09:00
Kohya S
bf91bea2e4
fix flip_aug, alpha_mask, random_crop issue in caching
2024-09-26 20:51:40 +09:00
sdbds
1beddd84e5
delete code for cleaning
2024-09-25 22:58:26 +08:00
Kohya S
e74f58148c
update README
2024-09-25 20:55:50 +09:00
Kohya S.
c1d16a76d6
Merge pull request #1628 from recris/huber-timesteps
...
Make timesteps work in the standard way when Huber loss is used
2024-09-25 20:52:55 +09:00
sdbds
ab7b231870
init
2024-09-25 19:38:52 +08:00
Kohya S
29177d2f03
retain alpha in pil_resize backport #1619
2024-09-23 21:14:03 +09:00
recris
e1f23af1bc
make timestep sampling behave in the standard way when huber loss is used
2024-09-21 13:21:56 +01:00
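The "standard way" the commit above restores is presumably drawing an independent uniform timestep per sample, rather than a single timestep shared by the whole batch; a sketch under that assumption (names are not the actual sd-scripts API):

```python
import torch

def sample_timesteps(batch_size, num_train_timesteps=1000):
    # Standard sampling: one independent uniform timestep per example,
    # not one timestep reused for every sample in the batch.
    return torch.randint(0, num_train_timesteps, (batch_size,))
```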
Kohya S.
0b7927e50b
Merge pull request #1615 from Maru-mee/patch-1
...
Bug fix: alpha_mask load
2024-09-19 21:49:49 +09:00
Kohya S
d7e14721e2
Merge branch 'main' into dev
2024-09-19 21:15:19 +09:00