Kohya S
acf16c063a
make it work with PyTorch 1.12
2023-07-20 21:41:16 +09:00
Kohya S
6d2d8dfd2f
add zero_terminal_snr option
2023-07-18 23:17:23 +09:00
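For context, zero terminal SNR usually refers to the beta-schedule rescaling from "Common Diffusion Noise Schedules and Sample Steps are Flawed" (Lin et al., 2023). A minimal sketch of that rescaling, independent of this repository's exact implementation:

```python
import torch

def enforce_zero_terminal_snr(betas: torch.Tensor) -> torch.Tensor:
    """Rescale a beta schedule so SNR at the final timestep is exactly zero."""
    alphas = 1.0 - betas
    alphas_bar = torch.cumprod(alphas, dim=0)
    alphas_bar_sqrt = alphas_bar.sqrt()

    # Remember the original first/last values.
    first = alphas_bar_sqrt[0].clone()
    last = alphas_bar_sqrt[-1].clone()

    # Shift the last value to zero, then rescale so the first is unchanged.
    alphas_bar_sqrt -= last
    alphas_bar_sqrt *= first / (first - last)

    # Convert back to betas.
    alphas_bar = alphas_bar_sqrt ** 2
    alphas = alphas_bar[1:] / alphas_bar[:-1]
    alphas = torch.cat([alphas_bar[0:1], alphas])
    return 1.0 - alphas
```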
Kohya S
b4a3824ce4
change tokenizer from open_clip to transformers
2023-07-13 20:49:26 +09:00
Kohya S
2e67d74df4
add no_half_vae option
2023-07-11 22:19:14 +09:00
ddPn08
b841dd78fe
make tracker init_kwargs configurable
2023-07-11 10:21:45 +09:00
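In accelerate, tracker arguments are forwarded through Accelerator.init_trackers, so making init_kwargs configurable presumably means exposing that dict to the user. A hedged sketch of the call (the project name, run name, and tags below are illustrative):

```python
from accelerate import Accelerator

accelerator = Accelerator(log_with="wandb")
# init_kwargs is forwarded to the underlying tracker, e.g. wandb.init()
accelerator.init_trackers(
    project_name="sd-scripts",  # illustrative project name
    config={"learning_rate": 1e-4},
    init_kwargs={"wandb": {"name": "my-run", "tags": ["lora"]}},
)
```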
Kohya S
68ca0ea995
Fix to show template type
2023-07-10 22:28:26 +09:00
Kohya S
f54b784d88
support textual inversion training
2023-07-10 22:04:02 +09:00
Kohya S
ea182461d3
add min/max_timestep
2023-07-03 20:44:42 +09:00
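Options like min/max_timestep typically restrict the range that training timesteps are sampled from. A minimal sketch of such clamped sampling (function and parameter names are illustrative):

```python
import torch

def sample_timesteps(batch_size, min_timestep=0, max_timestep=1000, device="cpu"):
    # Draw timesteps uniformly from [min_timestep, max_timestep)
    # instead of the full [0, num_train_timesteps) range.
    return torch.randint(min_timestep, max_timestep, (batch_size,),
                         device=device, dtype=torch.long)
```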
Kohya S
5114e8daf1
fix training scripts other than controlnet failing to run
2023-06-22 08:46:53 +09:00
Kohya S
92e50133f8
Merge branch 'original-u-net' into dev
2023-06-17 21:57:08 +09:00
Kohya S
19dfa24abb
Merge branch 'main' into original-u-net
2023-06-16 20:59:34 +09:00
青龍聖者@bdsqlsz
e97d67a681
Support for Prodigy (DAdapt variant for DyLoRA) (#585)
* Update train_util.py for DAdaptLion
* Update train_README-zh.md for DAdaptLion
* Update train_README-ja.md for DAdaptLion
* add DAdapt V3
* Alignment
* Update train_util.py for experimental
* Update train_util.py V3
* Update train_README-zh.md
* Update train_README-ja.md
* Update train_util.py fix
* Update train_util.py
* support Prodigy
* add lower
2023-06-15 21:12:53 +09:00
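Prodigy, like D-Adaptation, estimates the step size on the fly, so it is usually run with lr=1.0. A sketch of instantiating it from the prodigyopt package (the model and weight_decay value are illustrative):

```python
import torch.nn as nn
from prodigyopt import Prodigy

model = nn.Linear(128, 128)  # stand-in for the network being trained
# lr=1.0 lets Prodigy's internal step-size estimate (d) control the update size.
optimizer = Prodigy(model.parameters(), lr=1.0, weight_decay=0.01)
```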
Kohya S
9806b00f74
add arbitrary dataset feature to each script
2023-06-15 20:39:39 +09:00
ykume
9e1683cf2b
support sdpa
2023-06-11 21:26:15 +09:00
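sdpa here most likely refers to torch.nn.functional.scaled_dot_product_attention, the fused attention added in PyTorch 2.0. A minimal sketch of using it in place of a manual softmax(QK^T)V:

```python
import torch
import torch.nn.functional as F

# q, k, v: (batch, heads, seq_len, head_dim)
q = torch.randn(2, 8, 77, 64)
k = torch.randn(2, 8, 77, 64)
v = torch.randn(2, 8, 77, 64)

# PyTorch dispatches to FlashAttention / memory-efficient kernels when available.
out = F.scaled_dot_product_attention(q, k, v)
```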
ykume
0315611b11
remove workaround for accelerate 0.15, fix XTI
2023-06-11 18:32:14 +09:00
Kohya S
ec2efe52e4
scale v-pred loss like noise pred
2023-06-03 10:52:22 +09:00
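Scaling v-prediction loss to match noise (epsilon) prediction is commonly done with the factor SNR/(SNR+1), since the v-objective equals (SNR+1) times the epsilon objective at each timestep. A hedged sketch, not necessarily this repository's exact code:

```python
import torch

def scale_v_pred_loss_like_noise_pred(loss, timesteps, alphas_cumprod):
    # loss: per-sample loss (B,); alphas_cumprod: (T,) from the noise scheduler
    alpha_bar = alphas_cumprod[timesteps]
    snr = alpha_bar / (1.0 - alpha_bar)
    # v-prediction loss is (SNR + 1) x the epsilon loss, so this rescales
    # it back to epsilon-prediction magnitude.
    return loss * (snr / (snr + 1.0))
```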
ddPn08
23c4e5cb01
update diffusers to 0.16 | train_textual_inversion
2023-06-01 20:39:29 +09:00
Kohya S
968bbd2f47
Merge pull request #480 from yanhuifair/main
fix: print "saving" and "epoch" on a new line
2023-05-11 21:05:37 +09:00
Kohya S
09c719c926
add adaptive noise scale
2023-05-07 18:09:08 +09:00
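Adaptive noise scale typically adjusts the noise offset based on the mean of the latents, so brighter or darker images receive a proportionally adjusted offset. A sketch of the idea (assumed, not verbatim from this repo):

```python
import torch

def apply_adaptive_noise_offset(latents, noise, noise_offset, adaptive_noise_scale):
    # Grow (or shrink) the offset with the per-channel mean latent magnitude.
    latent_mean = torch.abs(latents.mean(dim=(2, 3), keepdim=True))
    offset = noise_offset + adaptive_noise_scale * latent_mean
    offset = torch.clamp(offset, min=0.0)
    # Channel-wise offset noise, broadcast over H and W.
    return noise + offset * torch.randn(
        latents.shape[0], latents.shape[1], 1, 1, device=latents.device
    )
```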
Fair
b08154dc36
fix: print "saving" and "epoch" on a new line
2023-05-07 02:51:01 +08:00
Kohya S
2127907dd3
refactor selection and logging for DAdaptation
2023-05-06 18:14:16 +09:00
青龍聖者@bdsqlsz
164a1978de
Support for more DAdaptation (#455)
* Update train_util.py to add DAdaptAdan and DAdaptSGD
* Update train_util.py for DAdaptAdam
* Update train_network.py for dadapt
* Update train_README-ja.md for DAdapt
* Update train_util.py for DAdapt
* Update train_network.py for DAdaptAdaGrad
* Update train_db.py for DAdapt
* Update fine_tune.py for DAdapt
* Update train_textual_inversion.py for DAdapt
* Update train_textual_inversion_XTI.py for DAdapt
2023-05-06 17:30:09 +09:00
ykume
69579668bb
Merge branch 'dev' of https://github.com/kohya-ss/sd-scripts into dev
2023-05-03 11:17:43 +09:00
Kohya S
2e688b7cd3
Merge pull request #471 from pamparamm/multires-noise
Multi-Resolution Noise
2023-05-03 11:17:21 +09:00
ykume
2fcbfec178
make transform_DDP more intuitive
2023-05-03 11:07:29 +09:00
Isotr0py
e1143caf38
Fix DDP issues and support DDP for all training scripts (#448)
* Fix DDP bugs
* Fix DDP bugs for finetune and db
* refactor model loader
* fix DDP network
* try to fix DDP network in train unet only
* remove unused DDP import
* refactor DDP transform
* refactor DDP transform
* fix sample image bugs
* change DDP transform location
* add autocast to train_db
* support DDP in XTI
* Clear DDP import
2023-05-03 10:37:47 +09:00
Pam
b18d099291
Multi-Resolution Noise
2023-05-02 09:42:17 +05:00
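Multi-resolution (pyramid) noise mixes Gaussian noise at several downsampled scales into the base noise, which is reported to help with very dark or bright images. A common formulation, sketched (discount and iteration count are illustrative):

```python
import torch
import torch.nn.functional as F

def pyramid_noise_like(noise, discount=0.8, iterations=6):
    # noise: (B, C, H, W) standard Gaussian
    b, c, h, w = noise.shape
    for i in range(1, iterations + 1):
        r = 2 ** i
        if h // r == 0 or w // r == 0:
            break
        # Fresh noise at a coarser resolution, upsampled and blended in.
        low = torch.randn(b, c, h // r, w // r, device=noise.device)
        noise = noise + F.interpolate(low, size=(h, w), mode="bilinear") * discount ** i
    return noise / noise.std()  # renormalize roughly to unit variance
```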
Kohya S
74008ce487
add save_every_n_steps option
2023-04-24 23:22:24 +09:00
Plat
27ffd9fe3d
feat: support wandb logging
2023-04-20 01:41:12 +09:00
Kohya S
9fc27403b2
support disk cache: same as #164, might fix #407
2023-04-13 21:40:34 +09:00
Kohya S
2e9f7b5f91
cache latents to disk in dreambooth method
2023-04-12 23:10:39 +09:00
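Caching latents to disk lets later epochs skip the VAE encode entirely. A minimal sketch using diffusers' AutoencoderKL and one .npz file per image (the file layout here is assumed):

```python
import numpy as np
import torch

@torch.no_grad()
def cache_latent(vae, image, npz_path):
    # image: (1, 3, H, W) in [-1, 1], on the VAE's device
    latent = vae.encode(image).latent_dist.sample() * 0.18215  # SD scaling factor
    np.savez(npz_path, latents=latent.squeeze(0).float().cpu().numpy())

def load_cached_latent(npz_path):
    return torch.from_numpy(np.load(npz_path)["latents"])
```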
Kohya S
6a5f87d874
disable weighted captions in TI/XTI training
2023-04-08 21:45:57 +09:00
Kohya S
541539a144
change method name, repo is private by default, etc.
2023-04-05 23:16:49 +09:00
ddPn08
16ba1cec69
make async uploading optional
2023-04-02 17:45:26 +09:00
ddPn08
8bfa50e283
small fix
2023-04-02 17:39:23 +09:00
ddPn08
3cc4939dd3
Implement huggingface upload for all scripts
2023-04-02 17:39:22 +09:00
ddPn08
b5ff4e816f
resume from huggingface repository
2023-04-02 17:39:21 +09:00
Kohya S
4f70e5dca6
fix to work with num_workers=0
2023-03-28 19:42:47 +09:00
Kohya S
6732df93e2
Merge branch 'dev' into min-SNR
2023-03-26 17:10:53 +09:00
Kohya S
4f42f759ea
Merge pull request #322 from u-haru/feature/token_warmup
Add an option to train while gradually increasing the number of tags; minor bug fix related to persistent_workers
2023-03-26 17:05:59 +09:00
u-haru
a4b34a9c3c
Remove blueprint_args_conflict since it is unnecessary; fix a bug where shuffling ran every time
2023-03-26 03:26:55 +09:00
u-haru
4dc1124f93
Support scripts other than LoRA as well
2023-03-26 02:19:55 +09:00
AI-Casanova
518a18aeff
(ACTUAL) Min-SNR Weighting Strategy: fixed SNR calculation to match the authors' implementation
2023-03-23 12:34:49 +00:00
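The Min-SNR weighting strategy (Hang et al., 2023) weights each timestep's loss by min(SNR, gamma)/SNR for epsilon prediction, with SNR(t) = alpha_bar_t / (1 - alpha_bar_t). A sketch of that calculation:

```python
import torch

def min_snr_weight(timesteps, alphas_cumprod, gamma=5.0):
    # SNR(t) = alpha_bar_t / (1 - alpha_bar_t), from the scheduler's alphas_cumprod
    alpha_bar = alphas_cumprod[timesteps]
    snr = alpha_bar / (1.0 - alpha_bar)
    # epsilon-prediction weight from the Min-SNR paper: min(SNR, gamma) / SNR
    return torch.minimum(snr, torch.full_like(snr, gamma)) / snr
```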
u-haru
dbadc40ec2
Fix a bug where captions stopped changing when persistent_workers was enabled
2023-03-23 12:33:03 +09:00
u-haru
447c56bf50
Fix typos, change step to global_step, fix bugs
2023-03-23 09:53:14 +09:00
u-haru
a9b26b73e0
implement token warmup
2023-03-23 07:37:14 +09:00
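Token warmup gradually increases how many caption tags are used as training progresses. A hedged sketch of the linear ramp (function and parameter names are illustrative):

```python
def select_warmup_tags(tags, step, warmup_min=1, warmup_steps=1000):
    # Linearly ramp the number of tags from warmup_min up to all tags
    # over the first warmup_steps optimization steps.
    if step >= warmup_steps or len(tags) <= warmup_min:
        return tags
    n = warmup_min + (len(tags) - warmup_min) * step // warmup_steps
    return tags[:n]

# e.g. at step 500 of a 1000-step warmup, about half the tags are kept:
print(select_warmup_tags(["1girl", "solo", "smile", "outdoors"], step=500))
```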
AI-Casanova
64c923230e
Min-SNR Weighting Strategy: Refactored and added to all trainers
2023-03-22 01:27:29 +00:00
Kohya S
2d86f63e15
update steps calc with max_train_epochs
2023-03-21 21:21:12 +09:00
Kohya S
1645698ec0
Merge pull request #306 from robertsmieja/main
Extract parser setup to helper function
2023-03-21 21:09:23 +09:00
Kohya S
1816ac3271
add vae_batch_size option for faster caching
2023-03-21 18:15:57 +09:00
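vae_batch_size speeds up latent caching by pushing several images through the VAE at once instead of one at a time. A minimal sketch (batch size and the 0.18215 scaling factor are as commonly used for SD):

```python
import torch

@torch.no_grad()
def cache_latents_batched(vae, images, vae_batch_size=8):
    # images: (N, 3, H, W) in [-1, 1]; encode in chunks of vae_batch_size
    latents = []
    for i in range(0, images.shape[0], vae_batch_size):
        batch = images[i : i + vae_batch_size].to(vae.device, dtype=vae.dtype)
        latents.append(vae.encode(batch).latent_dist.sample() * 0.18215)
    return torch.cat(latents)
```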