Mirror of https://github.com/kohya-ss/sd-scripts.git
Synced 2026-04-09 06:45:09 +00:00
Add Paged/AdamW8bit/Lion8bit optimizers for SDXL (bitsandbytes 0.39.1, cuda118) on Windows (#623)
* Add libbitsandbytes.dll for 0.38.1
* Delete libbitsandbytes_cuda116.dll
* Delete cextension.py
* Add main.py
* Update requirements.txt for bitsandbytes 0.38.1
* Update README.md for bitsandbytes-windows
* Update README-ja.md for bitsandbytes 0.38.1
* Update main.py to return cuda118
* Update train_util.py for lion8bit
* Update train_README-ja.md for lion8bit
* Update train_util.py to add DAdaptAdan and DAdaptSGD
* Update train_util.py for DAdaptAdam
* Update train_network.py for DAdapt
* Update train_README-ja.md for DAdapt
* Update train_util.py for DAdapt
* Update train_network.py for DAdaptAdaGrad
* Update train_db.py for DAdapt
* Update fine_tune.py for DAdapt
* Update train_textual_inversion.py for DAdapt
* Update train_textual_inversion_XTI.py for DAdapt
* Revert "Merge branch 'qinglong' into main" (reverts commit b65c023083, reversing changes made to f6fda20caf)
* Revert "Update requirements.txt for bitsandbytes 0.38.1" (reverts commit 83abc60dfa)
* Revert "Delete cextension.py" (reverts commit 3ba4dfe046)
* Revert "Update README.md for bitsandbytes-windows" (reverts commit 4642c52086)
* Revert "Update README-ja.md for bitsandbytes 0.38.1" (reverts commit fa6d7485ac)
* Update train_util.py for DAdaptLion
* Update train_README-zh.md for DAdaptLion
* Update train_README-ja.md for DAdaptLion
* Add DAdapt V3
* Alignment
* Update train_util.py for experimental
* Update train_util.py for V3
* Update train_util.py
* Update requirements.txt
* Update train_README-zh.md
* Update train_README-ja.md
* Fix train_util.py
* Update train_util.py
* Support Prodigy
* Add lower
* Update main.py
* Support PagedAdamW8bit/PagedLion8bit
* Update requirements.txt
* Update for PagedAdamW8bit and PagedLion8bit
* Revert
* Revert main
* Update train_util.py
* Update for bitsandbytes 0.39.1
* Update requirements.txt
* Fix VRAM leak

---------

Co-authored-by: Pam <pamhome21@gmail.com>
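The "Add lower" and "Support PagedAdamW8bit/PagedLion8bit" items above refer to matching the user-supplied optimizer name case-insensitively before constructing the optimizer. A minimal standalone sketch of that dispatch pattern (the `resolve_optimizer` helper and its string return values are illustrative stand-ins, not actual sd-scripts code):

```python
# Hypothetical sketch: case-insensitive optimizer_type dispatch.
# The mapping keys mirror optimizer names from the commit message;
# the values are only descriptive strings here, not imported classes.
def resolve_optimizer(optimizer_type: str) -> str:
    known = {
        "adamw8bit": "bitsandbytes.optim.AdamW8bit",
        "lion8bit": "bitsandbytes.optim.Lion8bit",
        "pagedadamw8bit": "bitsandbytes.optim.PagedAdamW8bit",
        "pagedlion8bit": "bitsandbytes.optim.PagedLion8bit",
        "dadaptadam": "dadaptation.DAdaptAdam",
        "dadaptlion": "dadaptation.DAdaptLion",
        "prodigy": "prodigyopt.Prodigy",
    }
    key = optimizer_type.lower()  # the "add lower" idea: ignore case
    if key not in known:
        raise ValueError(f"unknown optimizer_type: {optimizer_type}")
    return known[key]
```

With this, `PagedAdamW8bit`, `pagedadamw8bit`, and `PAGEDADAMW8BIT` all resolve to the same optimizer.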
@@ -2165,6 +2165,8 @@ def cache_batch_latents(
             info.latents = latent
             if flip_aug:
                 info.latents_flipped = flipped_latent
+    if torch.cuda.is_available():
+        torch.cuda.empty_cache()
 
 
 def cache_batch_text_encoder_outputs(
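The hunk above is the "Fix VRAM leak" change: after a batch of latents is cached, `torch.cuda.empty_cache()` asks the CUDA caching allocator to release unused blocks so cached latents do not pile up in reserved GPU memory. A minimal standalone sketch of that guard (the `release_cuda_cache` helper name is hypothetical, and the torch import is made optional here so the sketch also runs where PyTorch is not installed):

```python
# Sketch of the guarded cache-release pattern from the diff above.
try:
    import torch
    HAS_TORCH = True
except ImportError:  # allow the sketch to run without PyTorch
    HAS_TORCH = False


def release_cuda_cache() -> bool:
    """Ask the CUDA caching allocator to free unused blocks.

    Returns True if empty_cache() was actually called, False when
    torch is missing or no CUDA device is available.
    """
    if HAS_TORCH and torch.cuda.is_available():
        torch.cuda.empty_cache()
        return True
    return False
```

Calling this after each caching batch trades a small amount of speed for keeping reserved VRAM from growing across batches, which matches the intent of the fix.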