Commit Graph

1547 Commits

Author SHA1 Message Date
Kohya S
c1d5c24bc7 fix: LoRA with text encoder can't be merged (closes #660) 2023-07-23 15:01:41 +09:00
Isotr0py
eec6aaddda fix safetensors error: device invalid 2023-07-23 13:29:29 +08:00
Isotr0py
bb167f94ca init unet with empty weights 2023-07-23 13:17:11 +08:00
Kohya S
2e4783bcdf Merge branch 'main' into sdxl 2023-07-23 13:53:13 +09:00
Kohya S
7b31c0830f Merge pull request #663 from DingSiuyo/main
fixed some Chinese translation errors
2023-07-23 13:52:32 +09:00
Kohya S
8f645d354e Merge pull request #615 from shirayu/patch-1
Fix a typo
2023-07-23 13:34:48 +09:00
Kohya S
7ec9a7af79 support Diffusers format 2023-07-23 13:33:14 +09:00
Kohya S
50b53e183e re-organize import 2023-07-23 13:33:02 +09:00
青龍聖者@bdsqlsz
d131bde183 Support for bitsandbytes 0.39.1 with Paged Optimizer (AdamW8bit and Lion8bit) (#631)
* ADD libbitsandbytes.dll for 0.38.1

* Delete libbitsandbytes_cuda116.dll

* Delete cextension.py

* add main.py

* Update requirements.txt for bitsandbytes 0.38.1

* Update README.md for bitsandbytes-windows

* Update README-ja.md  for bitsandbytes 0.38.1

* Update main.py to return cuda118

* Update train_util.py for lion8bit

* Update train_README-ja.md for lion8bit

* Update train_util.py for add DAdaptAdan and DAdaptSGD

* Update train_util.py for DAdaptadam

* Update train_network.py for dadapt

* Update train_README-ja.md for DAdapt

* Update train_util.py for DAdapt

* Update train_network.py for DAdaptAdaGrad

* Update train_db.py for DAdapt

* Update fine_tune.py for DAdapt

* Update train_textual_inversion.py for DAdapt

* Update train_textual_inversion_XTI.py for DAdapt

* Revert "Merge branch 'qinglong' into main"

This reverts commit b65c023083, reversing
changes made to f6fda20caf.

* Revert "Update requirements.txt for bitsandbytes 0.38.1"

This reverts commit 83abc60dfa.

* Revert "Delete cextension.py"

This reverts commit 3ba4dfe046.

* Revert "Update README.md for bitsandbytes-windows"

This reverts commit 4642c52086.

* Revert "Update README-ja.md  for bitsandbytes 0.38.1"

This reverts commit fa6d7485ac.

* Update train_util.py

* Update requirements.txt

* support PagedAdamW8bit/PagedLion8bit

* Update requirements.txt

* update for PagedAdamW8bit and PagedLion8bit

* Revert

* revert main
2023-07-22 19:45:32 +09:00
Kohya S
d1864e2430 add invisible watermark to req.txt 2023-07-22 19:34:22 +09:00
Kohya S
8ba02ac829 fix text encoder-only network to work with bf16 2023-07-22 09:56:36 +09:00
Kohya S
73a08c0be0 Merge pull request #630 from ddPn08/sdxl
make tracker init_kwargs configurable
2023-07-20 22:05:55 +09:00
Kohya S
c45d2f214b Merge branch 'main' into sdxl 2023-07-20 22:02:29 +09:00
Kohya S
9a67e0df39 Merge pull request #610 from lubobill1990/patch-1
Update huggingface hub to resolve error on Windows
v0.6.5
2023-07-20 21:45:38 +09:00
Kohya S
acf16c063a make it work with PyTorch 1.12 2023-07-20 21:41:16 +09:00
Kohya S
86a8cbd002 fix original w/h prompt option showing wrong number 2023-07-20 14:52:04 +09:00
Kohya S
fc276a51fb fix invalid args checking in sdxl TI training 2023-07-20 14:50:57 +09:00
Kohya S
771f33d17d Merge pull request #641 from kaibioinfo/patch-1
fix typo in sdxl_train_textual_inversion
2023-07-20 08:28:11 +09:00
DingSiuyo
e6d1f509a0 fixed some translation errors 2023-07-19 04:30:37 +00:00
Kohya S
225e871819 enable full bf16 training in train_network 2023-07-19 08:41:42 +09:00
Kohya S
7875ca8fb5 Merge pull request #645 from Ttl/prepare_order
Cast weights to correct precision before transferring them to GPU
2023-07-19 08:33:32 +09:00
Kohya S
6d2d8dfd2f add zero_terminal_snr option 2023-07-18 23:17:23 +09:00
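The zero_terminal_snr option added here corresponds to the published noise-schedule rescaling that forces the final timestep's SNR to zero (so the model is actually trained on pure noise at t = T). A minimal NumPy sketch of that rescaling, not necessarily the repository's exact implementation; the schedule endpoints below are only illustrative:

```python
import numpy as np

def rescale_zero_terminal_snr(betas: np.ndarray) -> np.ndarray:
    """Rescale a beta schedule so the last step has zero SNR."""
    alphas_bar_sqrt = np.sqrt(np.cumprod(1.0 - betas))
    a0, aT = alphas_bar_sqrt[0], alphas_bar_sqrt[-1]
    # Shift so the last value becomes 0, then scale so the first is unchanged.
    alphas_bar_sqrt = (alphas_bar_sqrt - aT) * a0 / (a0 - aT)
    alphas_bar = alphas_bar_sqrt ** 2
    # Recover per-step alphas from the cumulative product, then betas.
    alphas = np.concatenate([alphas_bar[:1], alphas_bar[1:] / alphas_bar[:-1]])
    return 1.0 - alphas

betas = np.linspace(0.00085, 0.012, 1000)  # illustrative SD-style endpoints
new_betas = rescale_zero_terminal_snr(betas)
terminal = np.cumprod(1.0 - new_betas)[-1]  # alpha_bar at the final timestep
```

After rescaling, `terminal` is zero while the first step's beta is preserved, which is exactly the property the option is named for.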
Kohya S
0ec7166098 make crop top/left the same as stabilityai's preprocessing 2023-07-18 21:39:36 +09:00
Kohya S
3d66a234b0 enable different prompt for text encoders 2023-07-18 21:39:01 +09:00
Pam
8a073ee49f VRAM leak fix 2023-07-17 17:51:26 +05:00
Kohya S
7e20c6d1a1 add convenience function to merge LoRA 2023-07-17 10:30:57 +09:00
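Merging a LoRA into a base weight, as this commit's convenience function does, conventionally means folding the low-rank update into the weight: W' = W + multiplier * (alpha / rank) * (up @ down). A hedged NumPy sketch of that formula; the function and variable names are illustrative, not the repository's API:

```python
import numpy as np

def merge_lora_weight(w, lora_down, lora_up, alpha, multiplier=1.0):
    """Fold a LoRA update into a base weight matrix.

    w:         (out_dim, in_dim) base weight
    lora_down: (rank, in_dim)    the "A" projection
    lora_up:   (out_dim, rank)   the "B" projection
    alpha:     LoRA alpha; the update is scaled by alpha / rank
    """
    rank = lora_down.shape[0]
    scale = alpha / rank
    return w + multiplier * scale * (lora_up @ lora_down)

rng = np.random.default_rng(0)
w = rng.standard_normal((8, 16))
down = rng.standard_normal((4, 16))   # rank 4
up = rng.standard_normal((8, 4))
merged = merge_lora_weight(w, down, up, alpha=4.0)  # alpha == rank, scale 1.0
```

After merging, the LoRA modules can be discarded and inference uses the plain weight with no extra matmuls.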
Kohya S
1d4672d747 fix typos 2023-07-17 09:05:50 +09:00
Kohya S
39e62b948e add LoRA for Diffusers 2023-07-16 19:57:21 +09:00
Kohya S
41d195715d fix scheduler steps with gradient accumulation 2023-07-16 15:56:29 +09:00
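The usual bug behind a fix like this is stepping the LR scheduler once per micro-batch instead of once per real optimizer step, so the schedule finishes grad_accum_steps times too fast. A framework-agnostic sketch of the correct counting (not the repository's code):

```python
def scheduler_steps(num_batches: int, grad_accum_steps: int) -> int:
    """Count LR-scheduler steps when the scheduler advances only on
    real optimizer steps, i.e. once per grad_accum_steps micro-batches."""
    steps = 0
    for batch_idx in range(1, num_batches + 1):
        # gradients accumulate here; optimizer.step() and scheduler.step()
        # only fire when a full accumulation window has been seen
        if batch_idx % grad_accum_steps == 0:
            steps += 1
    return steps
```

With 100 batches and an accumulation of 4, the scheduler should advance 25 times, not 100.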
Kohya S
3db97f8897 update readme 2023-07-16 15:14:49 +09:00
Kohya S
516f64f4d9 add caching to disk for text encoder outputs 2023-07-16 14:53:47 +09:00
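Caching text encoder outputs to disk lets later epochs skip the encoder entirely: compute each caption's embedding once, save it, and reload it on subsequent hits. A self-contained sketch of that pattern under assumed names (cache keyed by a caption hash, stored as .npz; none of these identifiers are the repository's):

```python
import hashlib
import os
import tempfile
import numpy as np

def cache_path(cache_dir: str, caption: str) -> str:
    # one .npz per unique caption, keyed by a stable hash of its text
    key = hashlib.sha256(caption.encode("utf-8")).hexdigest()[:16]
    return os.path.join(cache_dir, f"{key}.npz")

def get_text_embedding(cache_dir: str, caption: str, encode_fn):
    """Return the text-encoder output for caption, computing and saving
    it on first use and serving it from disk afterwards."""
    path = cache_path(cache_dir, caption)
    if os.path.exists(path):
        return np.load(path)["emb"]
    emb = encode_fn(caption)
    np.savez(path, emb=emb)
    return emb

# demo with a stand-in "encoder" that records how often it runs
calls = []
def fake_encode(caption):
    calls.append(caption)
    return np.full((77, 768), float(len(caption)), dtype=np.float32)

cache_dir = tempfile.mkdtemp()
e1 = get_text_embedding(cache_dir, "a photo of a cat", fake_encode)
e2 = get_text_embedding(cache_dir, "a photo of a cat", fake_encode)  # disk hit
```

The second lookup never touches the encoder, which is the memory/speed win the commit is after.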
Kohya S
62dd99bee5 update readme 2023-07-15 18:34:13 +09:00
Kohya S
94c151aea3 refactor caching latents (flip in same npz, etc) 2023-07-15 18:28:33 +09:00
Kohya S
81fa54837f fix sampling in multi GPU training 2023-07-15 11:21:14 +09:00
Kohya S
9de357e373 fix: tokenizer 2 is not the same as the open clip tokenizer 2023-07-14 12:27:19 +09:00
Kohya S
b4a3824ce4 change tokenizer from open clip to transformers 2023-07-13 20:49:26 +09:00
Kohya S
3bb80ebf20 fix sampling image generation failing in LoRA training 2023-07-13 19:02:34 +09:00
Henrik Forstén
cdffd19f61 Cast weights to correct precision before transferring them to GPU 2023-07-13 12:45:28 +03:00
kaibioinfo
a7ce2633f3 fix typo in sdxl_train_textual_inversion
bug appears when continuing training on an existing TI
2023-07-12 15:06:20 +02:00
Kohya S
8fa5fb2816 support diffusers format for SDXL 2023-07-12 21:57:14 +09:00
Kohya S
8df948565a remove unnecessary code 2023-07-12 21:53:02 +09:00
Kohya S
3c67e595b8 fix gradient accumulation doesn't work 2023-07-12 21:35:57 +09:00
Kohya S
814996b14f fix NaN in sampling image 2023-07-11 23:18:35 +09:00
Kohya S
2e67d74df4 add no_half_vae option 2023-07-11 22:19:14 +09:00
ddPn08
b841dd78fe make tracker init_kwargs configurable 2023-07-11 10:21:45 +09:00
Kohya S
68ca0ea995 Fix to show template type 2023-07-10 22:28:26 +09:00
Kohya S
f54b784d88 support textual inversion training 2023-07-10 22:04:02 +09:00
Kohya S
b6e328ea8f don't hold latent on memory for finetuning dataset 2023-07-10 08:46:15 +09:00
Kohya S
5c80117fbd update readme 2023-07-09 21:37:46 +09:00
Kohya S
c2ceb6de5f fix uncond/cond order 2023-07-09 21:14:12 +09:00