This repository contains training, generation and utility scripts for Stable Diffusion.
Change History has been moved to the bottom of the page.
For easier use (GUI, PowerShell scripts, etc.), please visit the repository maintained by bmaltais. Thanks to @bmaltais!
This repository contains the scripts for:
- DreamBooth training, including U-Net and Text Encoder
- Fine-tuning (native training), including U-Net and Text Encoder
- LoRA training
- Textual Inversion training
- Image generation
- Model conversion (supports 1.x and 2.x, Stable Diffusion ckpt/safetensors and Diffusers)
Stable Diffusion web UI now seems to support LoRA trained by sd-scripts (SD 1.x based models only). Thank you for the great work!
About requirements.txt
These files do not include the requirements for PyTorch, because the required version depends on your environment. Please install PyTorch first (see the installation guide below).
The scripts are tested with PyTorch 1.12.1 and 1.13.0, Diffusers 0.10.2.
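If you want to confirm that your environment matches the tested versions, a quick optional check (our suggestion, not a required step) is:

```powershell
# Print the installed PyTorch and Diffusers versions (run inside the activated venv)
python -c "import torch, diffusers; print(torch.__version__, diffusers.__version__)"
```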
Links to how-to-use documents
All documents are currently in Japanese, and CUI based.
- DreamBooth training guide
- Step by Step fine-tuning guide: Including BLIP captioning and tagging by DeepDanbooru or WD14 tagger
- LoRA training
- Textual Inversion training
- note.com Image generation
- note.com Model conversion
Windows Required Dependencies
Python 3.10.6 and Git:
- Python 3.10.6: https://www.python.org/ftp/python/3.10.6/python-3.10.6-amd64.exe
- git: https://git-scm.com/download/win
Give unrestricted script access to PowerShell so the venv can work (a full example follows the list):
- Open an administrator PowerShell window
- Type `Set-ExecutionPolicy Unrestricted` and answer A
- Close the admin PowerShell window
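For reference, the whole step might look like the following in the administrator window. `Get-ExecutionPolicy` is only an optional confirmation on our part, not something the repository requires:

```powershell
# Allow scripts to run so venv activation works; answer "A" (Yes to All) at the prompt
Set-ExecutionPolicy Unrestricted

# Optional: confirm the policy took effect
Get-ExecutionPolicy
```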
Windows Installation
Open a regular PowerShell terminal and type the following inside:
```powershell
git clone https://github.com/kohya-ss/sd-scripts.git
cd sd-scripts

python -m venv venv
.\venv\Scripts\activate

pip install torch==1.12.1+cu116 torchvision==0.13.1+cu116 --extra-index-url https://download.pytorch.org/whl/cu116
pip install --upgrade -r requirements.txt
pip install -U -I --no-deps https://github.com/C43H66N12O12S2/stable-diffusion-webui/releases/download/f/xformers-0.0.14.dev0-cp310-cp310-win_amd64.whl

cp .\bitsandbytes_windows\*.dll .\venv\Lib\site-packages\bitsandbytes\
cp .\bitsandbytes_windows\cextension.py .\venv\Lib\site-packages\bitsandbytes\cextension.py
cp .\bitsandbytes_windows\main.py .\venv\Lib\site-packages\bitsandbytes\cuda_setup\main.py

accelerate config
```
Update: `python -m venv venv` seems to be safer than `python -m venv --system-site-packages venv` (some users have packages in their global Python).
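Note that the venv is only active in the current terminal session. In any new session, re-activate it before running the scripts, for example:

```powershell
# Re-activate the existing venv in a fresh PowerShell session
cd sd-scripts
.\venv\Scripts\activate
```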
Answers to accelerate config:
- This machine
- No distributed training
- NO
- NO
- NO
- all
- fp16
Note: Some users report that `ValueError: fp16 mixed precision requires a GPU` occurs during training. In this case, answer `0` to the 6th question:
What GPU(s) (by id) should be used for training on this machine as a comma-separated list? [all]:
(Single GPU with id 0 will be used.)
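If you still hit the error, one quick sanity check (our suggestion, not an official troubleshooting step) is to confirm that PyTorch can see your GPU from inside the venv:

```powershell
# Should print "True 1" (or a higher count) on a working CUDA setup
python -c "import torch; print(torch.cuda.is_available(), torch.cuda.device_count())"
```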
About PyTorch and xformers
Other versions of PyTorch and xformers seem to have problems with training. Unless you have a specific reason, please install the specified versions.
Upgrade
When a new release comes out you can upgrade your repo with the following commands:

```powershell
cd sd-scripts
git pull
.\venv\Scripts\activate
pip install --use-pep517 --upgrade -r requirements.txt
```
Once the commands have completed successfully you should be ready to use the new version.
Credits
The implementation for LoRA is based on cloneofsimo's repo. Thank you for the great work!
License
The majority of the scripts are licensed under ASL 2.0 (including code from Diffusers and cloneofsimo's repo); however, portions of the project are available under separate license terms:
Memory Efficient Attention Pytorch: MIT
bitsandbytes: MIT
BLIP: BSD-3-Clause
Change History
- 23 Feb. 2023:
  - Fix an issue that caused unstable training in `train_network.py`. Training with `fp16` is probably not affected by this issue.
    - Training with `float` for SD2.x models will work now. Training with `bf16` might also be improved.
    - This issue seems to have occurred in PR#190.
  - Add some metadata to LoRA models. Thanks to space-nuko!
  - Raise an error if optimizer options conflict (e.g. `--optimizer_type` and `--use_8bit_adam`).
  - Support ControlNet in `gen_img_diffusers.py` (no documentation yet).
- 22 Feb. 2023:
  - Refactor optimizer options. Thanks to mgz-dev!
    - Add an `--optimizer_type` option to each training script. Please see the help; Japanese documentation is here. A usage sketch follows this list.
    - The `--use_8bit_adam` and `--use_lion_optimizer` options still work, but override the option above.
  - Add SGDNesterov optimizer and its 8bit version.
  - Add the D-Adaptation optimizer. Thanks to BootsofLagrangian and all!
    - Please install it with `pip install dadaptation` (it is not in requirements.txt currently).
    - Please see https://github.com/kohya-ss/sd-scripts/issues/181 for details.
  - Add the AdaFactor optimizer. Thanks to Toshiaki!
  - Extra lr scheduler settings (num_cycles etc.) now work in training scripts other than `train_network.py`.
  - Add a `--max_grad_norm` option to each training script for gradient clipping; `0.0` disables clipping.
  - Symbolic links can now be loaded in each training script. Thanks to TkskKurumi!
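As a rough sketch of how the new options combine (a hypothetical invocation: the required model, dataset and output arguments are omitted here, so see `python train_network.py --help` for the full set), an optimizer override might look like:

```powershell
# Illustrative only: real runs also need model/dataset/output arguments
accelerate launch train_network.py --optimizer_type AdamW8bit --max_grad_norm 1.0
```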
Please read the Releases page for recent updates.