40d917b0fe  Yuta Hayashibe  2023-10-29 21:02:44 +09:00
    Removed incorrect comments

cf876fcdb4  Yuta Hayashibe  2023-10-29 20:15:04 +09:00
    Accept --ss to set sample_sampler dynamically

291c29caaf  Yuta Hayashibe  2023-10-29 19:57:25 +09:00
    Added a function line_to_prompt_dict() and removed duplicated initializations

01e00ac1b0  Yuta Hayashibe  2023-10-29 19:46:02 +09:00
    Make a function get_my_scheduler()

592014923f  Isotr0py  2023-10-04 21:48:25 +08:00
    Support JPEG-XL on windows

27f9b6ffeb  Yuta Hayashibe  2023-10-01 21:51:24 +09:00
    updated typos to v1.16.15 and fix typos

360af27749  Kohya S  2023-09-03 12:27:58 +09:00
    fix ControlNetDataset not working

0ee75fd75d  Kohya S  2023-09-03 12:24:15 +09:00
    fix typos, add comments etc.

2eae9b66d0  Kohya S  2023-09-03 10:51:23 +09:00
    Merge pull request #798 from vvern999/vvern999-patch-1
    add input perturbation noise

948cf17499  Kohya S  2023-09-02 16:17:12 +09:00
    add caption_prefix/suffix to dataset

497051c14b  Kohya S  2023-09-02 15:30:07 +09:00
    Merge pull request #786 from Isotr0py/jxl
    Support JPEG XL

e0beb6a999  vvern999  2023-09-02 07:33:27 +03:00
    add input perturbation noise
    from https://arxiv.org/abs/2301.11706

7e850f3b7e  Kohya S  2023-09-01 07:59:26 +09:00
    Merge branch 'main' into sdxl

5d88351bb5  Isotr0py  2023-08-25 11:07:02 +08:00
    support jpeg xl

1161a5c6da  Kohya S  2023-08-20 17:39:48 +09:00
    fix debug_dataset for controlnet dataset

e191892824  Kohya S  2023-08-20 12:24:40 +09:00
    fix bucketing doesn't work in controlnet training

3f7235c36f  Kohya S  2023-08-17 10:08:02 +09:00
    add lora controlnet train/gen temporarily

3307ccb2dc  Kohya S  2023-08-11 20:35:46 +09:00
    revert default noise offset to 0 (None) in sdxl

6889ee2b85  Kohya S  2023-08-11 19:02:36 +09:00
    add warning for bucket_reso_steps with SDXL

e73d103eca  Kohya S  2023-08-11 16:58:52 +09:00
    fix sample gen failed in sdxl training

daad50e384  Kohya S  2023-08-10 20:13:59 +09:00
    fix to work when input_ids has multiple EOS tokens

6f80fe17fc  Kohya S  2023-08-08 21:03:16 +09:00
    fix crashing in saving lora with clipskip

c142dadb46  Kohya S  2023-08-06 21:50:05 +09:00
    support sai model spec

e5f9772a35  Kohya S  2023-08-05 21:22:50 +09:00
    fix training textencoder in sdxl not working

a02056c566  reid3333  2023-08-05 17:47:43 +09:00
    fix: load may fail if symbolic link points to relative path

9d855091bf  Kohya S  2023-08-04 22:29:14 +09:00
    make bitsandbytes optional

f3be995c28  Kohya S  2023-08-04 08:44:17 +09:00
    remove debug print

9d7619d1eb  Kohya S  2023-08-04 08:42:54 +09:00
    remove debug print

c6d52fdea4  Kohya S  2023-08-04 08:38:27 +09:00
    Add workaround for clip's bug for pooled output

0636399c8c  Kohya S  2023-07-31 08:23:28 +09:00
    add adding v-pred like loss for noise pred

f61996b425  Kohya S  2023-07-30 16:29:53 +09:00
    remove dependency for albumenations

496c3f2732  Kohya S  2023-07-30 14:36:03 +09:00
    arbitrary args for diffusers lr scheduler

a296654c1b  Kohya S  2023-07-30 13:43:29 +09:00
    refactor optimizer selection for bnb

e6034b7eb6  Kohya S  2023-07-30 13:30:42 +09:00
    move releasing cache outside of the loop

9ec70252d0  青龍聖者@bdsqlsz  2023-07-30 13:15:13 +09:00
    Add Paged/ adam8bit/lion8bit for Sdxl bitsandbytes 0.39.1 cuda118 on windows (#623)
    * ADD libbitsandbytes.dll for 0.38.1
    * Delete libbitsandbytes_cuda116.dll
    * Delete cextension.py
    * add main.py
    * Update requirements.txt for bitsandbytes 0.38.1
    * Update README.md for bitsandbytes-windows
    * Update README-ja.md for bitsandbytes 0.38.1
    * Update main.py for return cuda118
    * Update train_util.py for lion8bit
    * Update train_README-ja.md for lion8bit
    * Update train_util.py for add DAdaptAdan and DAdaptSGD
    * Update train_util.py for DAdaptadam
    * Update train_network.py for dadapt
    * Update train_README-ja.md for DAdapt
    * Update train_util.py for DAdapt
    * Update train_network.py for DAdaptAdaGrad
    * Update train_db.py for DAdapt
    * Update fine_tune.py for DAdapt
    * Update train_textual_inversion.py for DAdapt
    * Update train_textual_inversion_XTI.py for DAdapt
    * Revert "Merge branch 'qinglong' into main"
      This reverts commit b65c023083, reversing changes made to f6fda20caf.
    * Revert "Update requirements.txt for bitsandbytes 0.38.1"
      This reverts commit 83abc60dfa.
    * Revert "Delete cextension.py"
      This reverts commit 3ba4dfe046.
    * Revert "Update README.md for bitsandbytes-windows"
      This reverts commit 4642c52086.
    * Revert "Update README-ja.md for bitsandbytes 0.38.1"
      This reverts commit fa6d7485ac.
    * Update train_util.py for DAdaptLion
    * Update train_README-zh.md for dadaptlion
    * Update train_README-ja.md for DAdaptLion
    * add DAdatpt V3
    * Alignment
    * Update train_util.py for experimental
    * Update train_util.py V3
    * Update train_util.py
    * Update requirements.txt
    * Update train_README-zh.md
    * Update train_README-ja.md
    * Update train_util.py fix
    * Update train_util.py
    * support Prodigy
    * add lower
    * Update main.py
    * support PagedAdamW8bit/PagedLion8bit
    * Update requirements.txt
    * update for PageAdamW8bit and PagedLion8bit
    * Revert
    * revert main
    * Update train_util.py
    * update for bitsandbytes 0.39.1
    * Update requirements.txt
    * vram leak fix
    ---------
    Co-authored-by: Pam <pamhome21@gmail.com>

4072f723c1  Kohya S  2023-07-29 14:55:03 +09:00
    Merge branch 'main' into sdxl

50b53e183e  Kohya S  2023-07-23 13:33:02 +09:00
    re-organize import

d131bde183  青龍聖者@bdsqlsz  2023-07-22 19:45:32 +09:00
    Support for bitsandbytes 0.39.1 with Paged Optimizer(AdamW8bit and Lion8bit) (#631)
    * ADD libbitsandbytes.dll for 0.38.1
    * Delete libbitsandbytes_cuda116.dll
    * Delete cextension.py
    * add main.py
    * Update requirements.txt for bitsandbytes 0.38.1
    * Update README.md for bitsandbytes-windows
    * Update README-ja.md for bitsandbytes 0.38.1
    * Update main.py for return cuda118
    * Update train_util.py for lion8bit
    * Update train_README-ja.md for lion8bit
    * Update train_util.py for add DAdaptAdan and DAdaptSGD
    * Update train_util.py for DAdaptadam
    * Update train_network.py for dadapt
    * Update train_README-ja.md for DAdapt
    * Update train_util.py for DAdapt
    * Update train_network.py for DAdaptAdaGrad
    * Update train_db.py for DAdapt
    * Update fine_tune.py for DAdapt
    * Update train_textual_inversion.py for DAdapt
    * Update train_textual_inversion_XTI.py for DAdapt
    * Revert "Merge branch 'qinglong' into main"
      This reverts commit b65c023083, reversing changes made to f6fda20caf.
    * Revert "Update requirements.txt for bitsandbytes 0.38.1"
      This reverts commit 83abc60dfa.
    * Revert "Delete cextension.py"
      This reverts commit 3ba4dfe046.
    * Revert "Update README.md for bitsandbytes-windows"
      This reverts commit 4642c52086.
    * Revert "Update README-ja.md for bitsandbytes 0.38.1"
      This reverts commit fa6d7485ac.
    * Update train_util.py
    * Update requirements.txt
    * support PagedAdamW8bit/PagedLion8bit
    * Update requirements.txt
    * update for PageAdamW8bit and PagedLion8bit
    * Revert
    * revert main

73a08c0be0  Kohya S  2023-07-20 22:05:55 +09:00
    Merge pull request #630 from ddPn08/sdxl
    make tracker init_kwargs configurable

acf16c063a  Kohya S  2023-07-20 21:41:16 +09:00
    make to work with PyTorch 1.12

225e871819  Kohya S  2023-07-19 08:41:42 +09:00
    enable full bf16 trainint in train_network

6d2d8dfd2f  Kohya S  2023-07-18 23:17:23 +09:00
    add zero_terminal_snr option

0ec7166098  Kohya S  2023-07-18 21:39:36 +09:00
    make crop top/left same as stabilityai's prep

41d195715d  Kohya S  2023-07-16 15:56:29 +09:00
    fix scheduler steps with gradient accumulation

516f64f4d9  Kohya S  2023-07-16 14:53:47 +09:00
    add caching to disk for text encoder outputs

94c151aea3  Kohya S  2023-07-15 18:28:33 +09:00
    refactor caching latents (flip in same npz, etc)

81fa54837f  Kohya S  2023-07-15 11:21:14 +09:00
    fix sampling in multi GPU training

814996b14f  Kohya S  2023-07-11 23:18:35 +09:00
    fix NaN in sampling image

b841dd78fe  ddPn08  2023-07-11 10:21:45 +09:00
    make tracker init_kwargs configurable

b6e328ea8f  Kohya S  2023-07-10 08:46:15 +09:00
    don't hold latent on memory for finetuning dataset