From c84a163b3231e97cea77292551fa8b3967d2594a Mon Sep 17 00:00:00 2001
From: Kohya S
Date: Mon, 21 Jul 2025 13:40:03 +0900
Subject: [PATCH] docs: update README for documentation

---
 README.md                    | 9 +++++++++
 docs/lumina_train_network.md | 2 +-
 2 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index b6365644..3ef16593 100644
--- a/README.md
+++ b/README.md
@@ -19,7 +19,16 @@ If you are using DeepSpeed, please install DeepSpeed with `pip install deepspeed
 
 Jul 21, 2025:
 - Support for [Lumina-Image 2.0](https://github.com/Alpha-VLLM/Lumina-Image-2.0) has been added in PR [#1927](https://github.com/kohya-ss/sd-scripts/pull/1927) and [#2138](https://github.com/kohya-ss/sd-scripts/pull/2138). Special thanks to sdbds and RockerBOO for their contributions.
 - Please refer to the [Lumina-Image 2.0 documentation](./docs/lumina_train_network.md) for more details.
+- We have started adding comprehensive training-related documentation to [docs](./docs). These documents are being created with the help of generative AI and will be updated over time. While there are still many gaps at this stage, we plan to improve them gradually.
+  Currently, the following documents are available:
+  - train_network.md
+  - sdxl_train_network.md
+  - sdxl_train_network_advanced.md
+  - flux_train_network.md
+  - sd3_train_network.md
+  - lumina_train_network.md
+
 Jul 10, 2025:
 - [AI Coding Agents](#for-developers-using-ai-coding-agents) section is added to the README. This section provides instructions for developers using AI coding agents like Claude and Gemini to understand the project context and coding standards.
 
diff --git a/docs/lumina_train_network.md b/docs/lumina_train_network.md
index 5f2fda17..3f0548d9 100644
--- a/docs/lumina_train_network.md
+++ b/docs/lumina_train_network.md
@@ -6,7 +6,7 @@ This document explains how to train LoRA (Low-Rank Adaptation) models for Lumina
 
 `lumina_train_network.py` trains additional networks such as LoRA for Lumina Image 2.0 models. Lumina Image 2.0 adopts a Next-DiT (Next-generation Diffusion Transformer) architecture, which differs from previous Stable Diffusion models. It uses a single text encoder (Gemma2) and a dedicated AutoEncoder (AE).
 
-This guide assumes you already understand the basics of LoRA training. For common usage and options, see the train_network.py guide (to be documented). Some parameters are similar to those in [`sd3_train_network.py`](sd3_train_network.md) and [`flux_train_network.py`](flux_train_network.md).
+This guide assumes you already understand the basics of LoRA training. For common usage and options, see [the train_network.py guide](./train_network.md). Some parameters are similar to those in [`sd3_train_network.py`](sd3_train_network.md) and [`flux_train_network.py`](flux_train_network.md).
 
 **Prerequisites:**