From 288efddf2f6b8392dd74a0494b631c4d3b28304a Mon Sep 17 00:00:00 2001
From: Kohya S <52813779+kohya-ss@users.noreply.github.com>
Date: Thu, 6 Jul 2023 07:43:30 +0900
Subject: [PATCH] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 53685c3f..7df0ec53 100644
--- a/README.md
+++ b/README.md
@@ -45,7 +45,7 @@ Summary of the feature:
 - Use `--cache_text_encoder_outputs` option and caching latents.
 - Use Adafactor optimizer. RMSprop 8bit or Adagrad 8bit may work. AdamW 8bit doesn't seem to work.
 - The LoRA training can be done with 12GB GPU memory.
-- `--train_unet_only` option is highly recommended for SDXL LoRA. Because SDXL has two text encoders, the result of the training will be unexpected.
+- `--network_train_unet_only` option is highly recommended for SDXL LoRA. Because SDXL has two text encoders, the result of the training will be unexpected.
 - PyTorch 2 seems to use slightly less GPU memory than PyTorch 1.
 
 Example of the optimizer settings for Adafactor with the fixed learning rate:
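
The patch corrects the flag name from `--train_unet_only` to `--network_train_unet_only`. As a rough illustration of where the corrected flag sits (not part of the patch itself), a training invocation combining it with the other recommendations from the hunk's context might look like the sketch below; the model/data paths and hyperparameter values are placeholders chosen for the example, not taken from the repository's README:

```bash
# Sketch of an SDXL LoRA run using the corrected flag name.
# Paths and hyperparameter values are illustrative placeholders.
accelerate launch sdxl_train_network.py \
  --pretrained_model_name_or_path="sd_xl_base_1.0.safetensors" \
  --train_data_dir="train_data" \
  --output_dir="output" \
  --network_module=networks.lora \
  --network_train_unet_only \
  --cache_latents \
  --cache_text_encoder_outputs \
  --optimizer_type=adafactor \
  --optimizer_args "scale_parameter=False" "relative_step=False" "warmup_init=False" \
  --learning_rate=1e-4 \
  --mixed_precision=bf16 \
  --gradient_checkpointing
```

With `--network_train_unet_only`, only the U-Net LoRA modules are trained, sidestepping the issue the README notes: SDXL has two text encoders, so training them as part of the LoRA can give unexpected results.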