ykume
4d0c06e397
support both 0.10.2 and 0.17.0 for Diffusers
2023-06-11 18:54:50 +09:00
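Supporting two Diffusers releases at once usually means branching on the installed version. A minimal sketch of that pattern; the helper and the branch labels are illustrative assumptions, not the repository's actual code:

```python
# Hedged sketch: pick a code path based on the installed Diffusers version,
# so both 0.10.2 and 0.17.0 can be supported. The branch labels below are
# placeholders for whichever API diverged between releases.

def parse_version(v: str) -> tuple:
    """Turn a version string like '0.17.0' into (0, 17, 0) for comparison."""
    return tuple(int(part) for part in v.split(".")[:3])

def diffusers_code_path(diffusers_version: str) -> str:
    # Diffusers moved/renamed internals between these releases;
    # dispatch to the matching compatibility shim.
    if parse_version(diffusers_version) >= (0, 17, 0):
        return "new_api"
    return "legacy_api"
```

In practice the version string would come from `diffusers.__version__` at import time.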
ykume
0315611b11
remove workaround for accelerate==0.15, fix XTI
2023-06-11 18:32:14 +09:00
ykume
33a6234b52
Merge branch 'main' into original-u-net
2023-06-11 17:35:20 +09:00
ykume
4b7b3bc04a
fix invalid saved SD state dict for VAE
2023-06-11 17:35:00 +09:00
ykume
035dd3a900
fix mem_eff_attn not working
2023-06-11 17:08:21 +09:00
ykume
4e25c8f78e
fix to work with Diffusers 0.17.0
2023-06-11 16:57:17 +09:00
ykume
7f6b581ef8
support memory efficient attn (not xformers)
2023-06-11 16:54:41 +09:00
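Memory-efficient attention (as opposed to xformers) typically caps peak memory by processing queries in chunks rather than materializing the full attention matrix at once. A minimal NumPy sketch of the idea, not the repository's implementation:

```python
# Hedged sketch of chunked (memory-efficient) attention. Because softmax is
# applied row-wise, splitting the query rows into chunks gives the exact same
# result as full attention while never holding the whole score matrix.
import numpy as np

def attention(q, k, v):
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

def chunked_attention(q, k, v, chunk=2):
    # Process query rows chunk-by-chunk to bound peak memory.
    outs = [attention(q[i : i + chunk], k, v) for i in range(0, q.shape[0], chunk)]
    return np.concatenate(outs, axis=0)
```

Real implementations also chunk over keys/values with a running softmax, but query chunking alone already shows the memory/compute trade.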
Kohya S
045cd38b6e
fix clip_skip not working in weighted captions and sample generation
2023-06-08 22:02:46 +09:00
Kohya S
dccdb8771c
support sample generation in training
2023-06-07 08:12:52 +09:00
Kohya S
c0a7df9ee1
fix eps value, enable xformers, etc.
2023-06-03 21:29:27 +09:00
Kohya S
5db792b10b
initial commit for original U-Net
2023-06-03 19:24:47 +09:00
Kohya S
5bec05e045
move max_norm to lora to avoid crashing in lycoris
2023-06-03 12:42:32 +09:00
Kohya S
ec2efe52e4
scale v-pred loss like noise pred
2023-06-03 10:52:22 +09:00
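Scaling v-prediction loss to behave like noise (epsilon) prediction is commonly done with an SNR-dependent factor. A one-line sketch under the assumption that the factor is snr/(snr+1), which relates the two parameterizations; the actual code in the repository may differ:

```python
# Hedged sketch: rescale a per-timestep v-prediction loss so its magnitude
# tracks an epsilon-prediction loss. Assumes the scale factor snr/(snr+1).
def scale_v_pred_loss(loss: float, snr: float) -> float:
    return loss * snr / (snr + 1.0)
```

At high SNR the factor approaches 1 (early, low-noise timesteps), while at low SNR it shrinks the loss toward 0.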
Kohya S
f4c9276336
add scaling to max norm
2023-06-01 19:46:17 +09:00
AI-Casanova
9c7237157d
Dropout and Max Norm Regularization for LoRA training (#545)
...
* Instantiate max_norm
* minor
* Move to end of step
* argparse
* metadata
* phrasing
* Sqrt ratio and logging
* fix logging
* Dropout test
* Dropout Args
* Dropout changed to affect LoRA only
---------
Co-authored-by: Kohya S <52813779+kohya-ss@users.noreply.github.com>
2023-06-01 14:58:38 +09:00
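Max-norm regularization for LoRA caps the norm of the learned low-rank update. A NumPy sketch of one plausible scheme, using the "sqrt ratio" idea from the PR's bullet list (scale both factors by the square root of the ratio so their product shrinks by the full ratio); this is an assumption, not the PR's exact code:

```python
# Hedged sketch: clamp the Frobenius norm of the LoRA delta (up @ down)
# to max_norm by scaling both factors by sqrt(max_norm / norm).
import numpy as np

def apply_max_norm(down, up, max_norm):
    norm = np.linalg.norm(up @ down)
    if norm > max_norm:
        scale = np.sqrt(max_norm / norm)  # sqrt so the product scales by the ratio
        down = down * scale
        up = up * scale
    return down, up
```

Applied at the end of each optimizer step (another of the PR's bullets), this keeps the adapter's contribution bounded without touching the frozen base weights.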
Kohya S
3a06968332
warn and continue if huggingface uploading failed
2023-05-31 20:48:33 +09:00
Kohya S
990ceddd14
show warning if no caption and no class token
2023-05-30 22:53:50 +09:00
Kohya S
2429ac73b2
Merge pull request #533 from TingTingin/main
...
Added warning on training without captions
2023-05-29 08:37:33 +09:00
TingTingin
db756e9a34
Update train_util.py
...
I removed the sleep since it triggers per subset, and if someone had a lot of subsets it would trigger multiple times.
2023-05-26 08:08:34 -04:00
青龍聖者@bdsqlsz
5cdf4e34a1
support for DAdaptation V3 (#530)
...
* Update train_util.py for DAdaptLion
* Update train_README-zh.md for DAdaptLion
* Update train_README-ja.md for DAdaptLion
* add DAdapt V3
* Alignment
* Update train_util.py for experimental
* Update train_util.py V3
* Update train_README-zh.md
* Update train_README-ja.md
* Update train_util.py fix
* Update train_util.py
---------
Co-authored-by: Kohya S <52813779+kohya-ss@users.noreply.github.com>
2023-05-25 21:52:36 +09:00
TingTingin
061e157191
Update train_util.py
2023-05-23 02:02:39 -04:00
TingTingin
d859a3a925
Update train_util.py
...
fix mistake
2023-05-23 02:00:33 -04:00
TingTingin
5a1a14f9fc
Update train_util.py
...
Added feature to add "." if missing in caption_extension
Added warning on training without captions
2023-05-23 01:57:35 -04:00
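The first feature in this commit, prepending a "." to `caption_extension` when it is missing, is a small normalization step. A minimal sketch of the likely logic; the function name is illustrative:

```python
# Hedged sketch: normalize a caption file extension so "txt" and ".txt"
# are treated the same when matching caption files.
def normalize_caption_extension(ext: str) -> str:
    if ext and not ext.startswith("."):
        ext = "." + ext
    return ext
```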
Kohya S
02bb8e0ac3
use xformers in VAE in gen script
2023-05-21 12:59:01 +09:00
Kohya S
bc909e8359
Merge pull request #521 from akshaal/fix/save_state
...
fix: don't save state if no --save-state arg given
2023-05-21 08:48:48 +09:00
Evgeny Chukreev
0c942106bf
fix: don't save state if no --save-state arg given
2023-05-18 20:09:06 +02:00
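Gating state saving on an explicit flag is a standard argparse pattern. A minimal sketch, assuming a `store_true` boolean flag named as in the commit message (argparse converts the dash to an underscore for the attribute); the real parser defines many more options:

```python
# Hedged sketch: only persist training state when --save-state is passed.
import argparse

def should_save_state(argv):
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--save-state",
        action="store_true",
        help="save optimizer/RNG state alongside the model",
    )
    return parser.parse_args(argv).save_state
```

With `store_true` the flag defaults to `False`, so nothing is saved unless the user asks for it.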
Fair
c0c4d4ddc6
print "generating sample images" on a new line
2023-05-17 10:59:06 +08:00
青龍聖者@bdsqlsz
7e5b6154d0
Update train_util.py
2023-05-16 00:09:53 +08:00
Kohya S
714846e1e1
revert perlin_noise
2023-05-15 23:12:11 +09:00
Kohya S
08d85d4013
Merge branch 'dev' of https://github.com/kohya-ss/sd-scripts into dev
2023-05-15 20:58:04 +09:00
Kohya S
0ec7743436
show loading model path
2023-05-15 20:57:53 +09:00
Kohya S
a72d80aa85
Merge pull request #507 from HkingAuditore/main
...
Added support for Perlin noise in Noise Offset
2023-05-15 20:56:46 +09:00
HkingAuditore
dbb9c19669
Merge pull request #1 from kohya-ss/main
...
Update to newest
2023-05-15 11:22:02 +08:00
hkinghuang
bca6a44974
Perlin noise
2023-05-15 11:16:08 +08:00
Linaqruf
8ab5c8cb28
feat: added json support as well
2023-05-14 19:49:54 +07:00
Linaqruf
774c4059fb
feat: added toml support for sample prompt
2023-05-14 19:38:44 +07:00
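These two commits add structured sample-prompt files alongside plain text. A sketch of extension-based dispatch, assuming one-prompt-per-line as the plain-text fallback; the function name and return shapes are illustrative, and `tomllib` requires Python 3.11+:

```python
# Hedged sketch: load sample prompts from .toml, .json, or a plain text
# file with one prompt per line, dispatching on the file extension.
import json
import pathlib

def load_sample_prompts(path):
    p = pathlib.Path(path)
    if p.suffix == ".toml":
        import tomllib  # stdlib TOML parser, Python 3.11+
        return tomllib.loads(p.read_text())
    if p.suffix == ".json":
        return json.loads(p.read_text())
    # fallback: plain text, one prompt per non-empty line
    return [line for line in p.read_text().splitlines() if line.strip()]
```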
hkinghuang
5f1d07d62f
init
2023-05-12 21:38:07 +08:00
Kohya S
41dd835a89
fix to work with fp16, crash with some reso
2023-05-12 21:44:07 +09:00
青龍聖者@bdsqlsz
8d562ecf48
fix pynoise code bug (#489)
...
* fix pynoise
* Update custom_train_functions.py for default
* Update custom_train_functions.py for note
* Update custom_train_functions.py for default
* Revert "Update custom_train_functions.py for default"
This reverts commit ca79915d73.
* Update custom_train_functions.py for default
* Revert "Update custom_train_functions.py for default"
This reverts commit 483577e137.
* default value change
2023-05-11 21:48:51 +09:00
Kohya S
dfc56e9227
Merge branch 'main' into dev
2023-05-11 21:12:33 +09:00
Kohya S
968bbd2f47
Merge pull request #480 from yanhuifair/main
...
fix: print "saving" and "epoch" on new lines
2023-05-11 21:05:37 +09:00
Kohya S
1b4bdff331
enable i2i with highres fix, add slicing VAE
2023-05-10 23:09:25 +09:00
Kohya S
09c719c926
add adaptive noise scale
2023-05-07 18:09:08 +09:00
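Adaptive noise scale typically adjusts the noise offset per sample based on the mean of the latents, so brighter/darker images get a proportionally different offset. A NumPy sketch under that assumption; the exact formula in the repository may differ:

```python
# Hedged sketch: shift the base noise offset by the absolute per-sample
# mean of the latents, scaled by adaptive_scale. Returns one offset per
# sample, shaped for broadcasting over (N, C, H, W) latents.
import numpy as np

def adaptive_noise_offset(base_offset, adaptive_scale, latents):
    per_sample_mean = np.abs(latents.mean(axis=(1, 2, 3), keepdims=True))
    return base_offset + adaptive_scale * per_sample_mean
```

The resulting offset would then multiply a per-channel random shift added to the sampled noise, as plain noise offset does.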
Kohya S
e54b6311ef
do not save cuda_rng_state if no CUDA; closes #390
2023-05-07 10:23:25 +09:00
Fair
b08154dc36
fix: print "saving" and "epoch" on new lines
2023-05-07 02:51:01 +08:00
Kohya S
165fc43655
fix comment
2023-05-06 18:25:26 +09:00
Kohya S
2127907dd3
refactor selection and logging for DAdaptation
2023-05-06 18:14:16 +09:00
青龍聖者@bdsqlsz
164a1978de
Support for more DAdaptation optimizers (#455)
...
* Update train_util.py for add DAdaptAdan and DAdaptSGD
* Update train_util.py for DAdaptadam
* Update train_network.py for dadapt
* Update train_README-ja.md for DAdapt
* Update train_util.py for DAdapt
* Update train_network.py for DAdaptAdaGrad
* Update train_db.py for DAdapt
* Update fine_tune.py for DAdapt
* Update train_textual_inversion.py for DAdapt
* Update train_textual_inversion_XTI.py for DAdapt
2023-05-06 17:30:09 +09:00
Kohya S
60bbe64489
raise error when both noise offset and multires noise are specified
2023-05-03 20:58:12 +09:00
ykume
758a1e7f66
Revert unet config, add option to convert script
2023-05-03 16:05:15 +09:00