50b53e183e | Kohya S | 2023-07-23 13:33:02 +09:00 | re-organize imports
d131bde183 | 青龍聖者@bdsqlsz | 2023-07-22 19:45:32 +09:00 | Support bitsandbytes 0.39.1 with Paged Optimizers (AdamW8bit and Lion8bit) (#631)
    * Add libbitsandbytes.dll for 0.38.1
    * Delete libbitsandbytes_cuda116.dll
    * Delete cextension.py
    * Add main.py
    * Update requirements.txt for bitsandbytes 0.38.1
    * Update README.md for bitsandbytes-windows
    * Update README-ja.md for bitsandbytes 0.38.1
    * Update main.py to return cuda118
    * Update train_util.py for Lion8bit
    * Update train_README-ja.md for Lion8bit
    * Update train_util.py to add DAdaptAdan and DAdaptSGD
    * Update train_util.py for DAdaptAdam
    * Update train_network.py for DAdapt
    * Update train_README-ja.md for DAdapt
    * Update train_util.py for DAdapt
    * Update train_network.py for DAdaptAdaGrad
    * Update train_db.py for DAdapt
    * Update fine_tune.py for DAdapt
    * Update train_textual_inversion.py for DAdapt
    * Update train_textual_inversion_XTI.py for DAdapt
    * Revert "Merge branch 'qinglong' into main" (reverts b65c023083, reversing changes made to f6fda20caf)
    * Revert "Update requirements.txt for bitsandbytes 0.38.1" (reverts 83abc60dfa)
    * Revert "Delete cextension.py" (reverts 3ba4dfe046)
    * Revert "Update README.md for bitsandbytes-windows" (reverts 4642c52086)
    * Revert "Update README-ja.md for bitsandbytes 0.38.1" (reverts fa6d7485ac)
    * Update train_util.py
    * Update requirements.txt
    * Support PagedAdamW8bit/PagedLion8bit
    * Update requirements.txt
    * Update for PagedAdamW8bit and PagedLion8bit
    * Revert
    * Revert main
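The paged optimizers named above (PagedAdamW8bit, PagedLion8bit) arrived in bitsandbytes 0.39.x; paging lets optimizer state spill to CPU memory instead of failing with a CUDA OOM. A minimal sketch of the kind of name-to-class dispatch an `--optimizer_type` flag implies — the optimizer class is a stub here, since real bitsandbytes needs a CUDA build (the actual class would be `bnb.optim.PagedAdamW8bit`), and `build_optimizer` is a hypothetical helper, not sd-scripts' actual code:

```python
# Stub standing in for bnb.optim.PagedAdamW8bit so this sketch runs
# without a CUDA build of bitsandbytes (hypothetical stand-in).
class PagedAdamW8bit:
    def __init__(self, params, lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01):
        self.param_groups = [
            {"params": list(params), "lr": lr,
             "betas": betas, "weight_decay": weight_decay}
        ]

OPTIMIZER_CLASSES = {"PagedAdamW8bit": PagedAdamW8bit}

def build_optimizer(params, optimizer_type, **kwargs):
    """Resolve an --optimizer_type string to an optimizer instance."""
    try:
        cls = OPTIMIZER_CLASSES[optimizer_type]
    except KeyError:
        raise ValueError(f"unknown optimizer_type: {optimizer_type!r}")
    return cls(params, **kwargs)

opt = build_optimizer([0.0], "PagedAdamW8bit", lr=2e-4)
print(opt.param_groups[0]["lr"])  # 0.0002
```

With real bitsandbytes the construction is the same shape: instantiate the class with the model's parameters and hyperparameters, then use it like any torch optimizer.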
73a08c0be0 | Kohya S | 2023-07-20 22:05:55 +09:00 | Merge pull request #630 from ddPn08/sdxl: make tracker init_kwargs configurable
acf16c063a | Kohya S | 2023-07-20 21:41:16 +09:00 | make it work with PyTorch 1.12
fc276a51fb | Kohya S | 2023-07-20 14:50:57 +09:00 | fix invalid argument checking in SDXL TI training
225e871819 | Kohya S | 2023-07-19 08:41:42 +09:00 | enable full bf16 training in train_network
6d2d8dfd2f | Kohya S | 2023-07-18 23:17:23 +09:00 | add zero_terminal_snr option
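The zero_terminal_snr option refers to the noise-schedule rescaling from "Common Diffusion Noise Schedules and Sample Steps are Flawed" (Lin et al.): shift and scale sqrt(alpha_bar) so the final timestep has exactly zero SNR while the first timestep is unchanged. A dependency-free sketch of that rescaling (function name mirrors the paper, not necessarily the repo's code):

```python
def rescale_zero_terminal_snr(alphas_cumprod):
    """Rescale a diffusion schedule so SNR at the last timestep is 0.

    Works on sqrt(alpha_bar): shift so the terminal value becomes 0,
    then scale so the initial value is preserved.
    """
    sqrt_ab = [a ** 0.5 for a in alphas_cumprod]
    first, last = sqrt_ab[0], sqrt_ab[-1]
    sqrt_ab = [(s - last) * first / (first - last) for s in sqrt_ab]
    return [s * s for s in sqrt_ab]

schedule = [0.99, 0.75, 0.40, 0.05]
rescaled = rescale_zero_terminal_snr(schedule)
# terminal alpha_bar becomes 0 (zero SNR); the initial 0.99 is kept
```

Without this fix, the terminal timestep still carries a little signal, so the model never trains on pure noise even though sampling starts from it.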
0ec7166098 | Kohya S | 2023-07-18 21:39:36 +09:00 | make crop top/left the same as Stability AI's preprocessing
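SDXL conditions the UNet on crop_top_left coordinates, so training-time cropping has to report the same coordinates Stability AI's preprocessing would produce. A hedged sketch of one common convention — resize so the target fits inside the image, then center-crop — where the function name and exact rounding are illustrative, not sd-scripts' actual code:

```python
def center_crop_top_left(img_w, img_h, target_w, target_h):
    """Resize so the target rectangle fits inside the image, then
    center-crop; return (top, left) of the crop in resized coords."""
    scale = max(target_w / img_w, target_h / img_h)
    resized_w = round(img_w * scale)
    resized_h = round(img_h * scale)
    top = (resized_h - target_h) // 2
    left = (resized_w - target_w) // 2
    return top, left

# a 2048x1024 source center-cropped to 1024x1024 starts 512 px in
print(center_crop_top_left(2048, 1024, 1024, 1024))  # (0, 512)
```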
41d195715d | Kohya S | 2023-07-16 15:56:29 +09:00 | fix scheduler steps with gradient accumulation
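The scheduler fix concerns how gradient accumulation interacts with the LR scheduler: backward() runs on every micro-batch, but the optimizer — and therefore the scheduler — should advance only once per accumulation window. A toy sketch of the counting involved (pure Python, names hypothetical):

```python
def count_scheduler_steps(num_micro_batches, grad_accum_steps):
    """Return how many times lr_scheduler.step() should fire when the
    optimizer steps once per grad_accum_steps micro-batches."""
    steps = 0
    for batch in range(1, num_micro_batches + 1):
        # loss.backward() happens on every micro-batch
        if batch % grad_accum_steps == 0:
            # optimizer.step(); optimizer.zero_grad(); lr_scheduler.step()
            steps += 1
    return steps

print(count_scheduler_steps(100, 4))  # 25, not 100
```

Stepping the scheduler on every micro-batch instead would decay the learning rate grad_accum_steps times too fast.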
516f64f4d9 | Kohya S | 2023-07-16 14:53:47 +09:00 | add caching to disk for text encoder outputs
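Caching text encoder outputs to disk avoids re-running the frozen text encoders on every epoch. A dependency-free sketch of the idea, keyed by a hash of the caption — sd-scripts stores .npz files alongside the latents, while this stand-in uses pickle and a temp directory purely to stay self-contained:

```python
import hashlib
import os
import pickle
import tempfile

def cached_encode(caption, encode_fn, cache_dir):
    """Return encode_fn(caption), computing it at most once per caption."""
    key = hashlib.sha256(caption.encode("utf-8")).hexdigest()
    path = os.path.join(cache_dir, key + ".pkl")
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)  # cache hit: skip the encoder
    out = encode_fn(caption)  # expensive text-encoder forward pass
    with open(path, "wb") as f:
        pickle.dump(out, f)
    return out

calls = []
def fake_encoder(caption):
    calls.append(caption)
    return [len(caption)]  # stand-in for hidden states

with tempfile.TemporaryDirectory() as d:
    a = cached_encode("a photo of a cat", fake_encoder, d)
    b = cached_encode("a photo of a cat", fake_encoder, d)
print(len(calls))  # 1: the second lookup was served from disk
```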
94c151aea3 | Kohya S | 2023-07-15 18:28:33 +09:00 | refactor latent caching (flipped latents in the same npz, etc.)
81fa54837f | Kohya S | 2023-07-15 11:21:14 +09:00 | fix sampling in multi-GPU training
9de357e373 | Kohya S | 2023-07-14 12:27:19 +09:00 | fix: tokenizer 2 is not the same as the open_clip tokenizer
b4a3824ce4 | Kohya S | 2023-07-13 20:49:26 +09:00 | change tokenizer from open_clip to Transformers
3bb80ebf20 | Kohya S | 2023-07-13 19:02:34 +09:00 | fix sample image generation failing in LoRA training
8fa5fb2816 | Kohya S | 2023-07-12 21:57:14 +09:00 | support Diffusers format for SDXL
8df948565a | Kohya S | 2023-07-12 21:53:02 +09:00 | remove unnecessary code
814996b14f | Kohya S | 2023-07-11 23:18:35 +09:00 | fix NaN in sample images
b841dd78fe | ddPn08 | 2023-07-11 10:21:45 +09:00 | make tracker init_kwargs configurable
f54b784d88 | Kohya S | 2023-07-10 22:04:02 +09:00 | support textual inversion training
b6e328ea8f | Kohya S | 2023-07-10 08:46:15 +09:00 | don't hold latents in memory for the fine-tuning dataset
c2ceb6de5f | Kohya S | 2023-07-09 21:14:12 +09:00 | fix uncond/cond order
77ec70d145 | Kohya S | 2023-07-09 19:00:38 +09:00 | fix conditioning
a380502c01 | Kohya S | 2023-07-09 18:13:49 +09:00 | fix: pad token not handled
0416f26a76 | Kohya S | 2023-07-09 16:02:56 +09:00 | support multi-GPU caching of text encoder outputs
3579b4570f | Kohya S | 2023-07-09 14:22:44 +09:00 | Merge pull request #628 from KohakuBlueleaf/full_bf16: full bf16 support
256ff5b56c | Kohya S | 2023-07-09 14:14:28 +09:00 | Merge pull request #626 from ddPn08/sdxl: support avif
d974959738 | Kohaku-Blueleaf | 2023-07-09 12:47:26 +08:00 | Update train_util.py for full_bf16 support
1d25703ac3 | ykume | 2023-07-09 13:33:26 +09:00 | add generation script
fe7ede5af3 | ykume | 2023-07-09 13:33:16 +09:00 | fix wrapper tokenizer not working with weighted prompts
d599394f60 | ddPn08 | 2023-07-08 15:47:56 +09:00 | support AVIF
cc3d40ca44 | Kohya S | 2023-07-07 21:16:41 +09:00 | support SDXL in the prepare script
4a34e5804e | Kohya S | 2023-07-05 21:55:43 +09:00 | fix to work with .ckpt files from ComfyUI
3d0375daa6 | Kohya S | 2023-07-05 21:45:30 +09:00 | fix to work with SDXL state dicts without logit_scale
3060eb5baf | Kohya S | 2023-07-05 21:44:46 +09:00 | remove debug print
ce46aa0c3b | Kohya S | 2023-07-04 21:34:18 +09:00 | remove debug print
2febbfe4b0 | Kohya S | 2023-07-03 20:58:35 +09:00 | add error message for old npz files
ea182461d3 | Kohya S | 2023-07-03 20:44:42 +09:00 | add min/max_timestep
97611e89ca | Kohya S | 2023-07-02 16:49:11 +09:00 | remove debug code
64cf922841 | Kohya S | 2023-07-02 16:42:19 +09:00 | add sample image generation during SDXL training
d395bc0647 | Kohya S | 2023-06-29 13:02:19 +09:00 | fix max_token_length not working for SDXL
71a6d49d06 | Kohya S | 2023-06-28 07:50:53 +09:00 | fix train_network to work with the fine-tuning dataset
a751dc25d6 | Kohya S | 2023-06-27 20:48:06 +09:00 | use CLIPTextModelWithProjection
9ebebb22db | Kohya S | 2023-06-26 20:43:34 +09:00 | fix typos
2c461e4ad3 | Kohya S | 2023-06-26 20:38:09 +09:00 | add no_half_vae for SDXL training, add NaN check
747af145ed | Kohya S | 2023-06-26 08:07:24 +09:00 | add SDXL fine-tuning and LoRA
9e9df2b501 | Kohya S | 2023-06-24 17:56:02 +09:00 | update dataset to return size, refactor ControlNet dataset
f7f762c676 | Kohya S | 2023-06-24 11:52:26 +09:00 | add minimal inference code for SDXL
0b730d904f | Kohya S | 2023-06-24 09:37:00 +09:00 | Merge branch 'original-u-net' into sdxl
11e8c7d8ff | Kohya S | 2023-06-24 09:35:33 +09:00 | fix ControlNet training