mirror of
https://github.com/kohya-ss/sd-scripts.git
synced 2026-04-09 06:45:09 +00:00
add .toml example for lllite doc
@@ -45,6 +45,35 @@ For the sample Canny, specify 32 as the dimension of the conditioning image embe
(The sample Canny is probably quite difficult. It may be better to reduce it to about half for depth, etc.)

The following is an example of a .toml configuration.

```toml
pretrained_model_name_or_path = "/path/to/model_trained_on.safetensors"
max_train_epochs = 12
max_data_loader_n_workers = 4
persistent_data_loader_workers = true
seed = 42
gradient_checkpointing = true
mixed_precision = "bf16"
save_precision = "bf16"
full_bf16 = true
optimizer_type = "adamw8bit"
learning_rate = 2e-4
xformers = true
output_dir = "/path/to/output/dir"
output_name = "output_name"
save_every_n_epochs = 1
save_model_as = "safetensors"
vae_batch_size = 4
cache_latents = true
cache_latents_to_disk = true
cache_text_encoder_outputs = true
cache_text_encoder_outputs_to_disk = true
network_dim = 64
cond_emb_dim = 32
dataset_config = "/path/to/dataset.toml"
```

### Inference

If you want to generate images with a script, run `sdxl_gen_img.py`. You can specify the LLLite model file with `--control_net_lllite_models`. The dimension is automatically obtained from the model file.

@@ -51,6 +51,35 @@ For the sample Canny, the dimension of the conditioning image embedding is 32. T
(The sample Canny is probably quite difficult. It may be better to reduce it to about half for depth, etc.)

The following is an example of a .toml configuration.

```toml
pretrained_model_name_or_path = "/path/to/model_trained_on.safetensors"
max_train_epochs = 12
max_data_loader_n_workers = 4
persistent_data_loader_workers = true
seed = 42
gradient_checkpointing = true
mixed_precision = "bf16"
save_precision = "bf16"
full_bf16 = true
optimizer_type = "adamw8bit"
learning_rate = 2e-4
xformers = true
output_dir = "/path/to/output/dir"
output_name = "output_name"
save_every_n_epochs = 1
save_model_as = "safetensors"
vae_batch_size = 4
cache_latents = true
cache_latents_to_disk = true
cache_text_encoder_outputs = true
cache_text_encoder_outputs_to_disk = true
network_dim = 64
cond_emb_dim = 32
dataset_config = "/path/to/dataset.toml"
```

### Inference

If you want to generate images with a script, run `sdxl_gen_img.py`. You can specify the LLLite model file with `--control_net_lllite_models`. The dimension is automatically obtained from the model file.
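The invocation above can be sketched as a command line assembled in Python. Only `--control_net_lllite_models` is documented here; `--ckpt`, `--prompt`, `--outdir`, and all paths are assumptions, so check `python sdxl_gen_img.py --help` for the flags your version actually accepts.

```python
import shlex

# Hypothetical paths; replace with your own files.
base_model = "/path/to/model_trained_on.safetensors"
lllite_model = "/path/to/output/dir/output_name.safetensors"

# Only --control_net_lllite_models comes from this doc; the other
# flags are assumptions about sdxl_gen_img.py's CLI.
cmd = [
    "python", "sdxl_gen_img.py",
    "--ckpt", base_model,
    "--control_net_lllite_models", lllite_model,
    "--prompt", "a photo of a cat",
    "--outdir", "/path/to/images",
]
print(shlex.join(cmd))
```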