Kohya S. | 1e30aa83b4 | 2024-09-01 19:00:16 +09:00
    Merge pull request #1541 from sdbds/flux_shift
    Add Flux_Shift to fix blurry results in multi-resolution training

Kohya S | 92e7600cc2 | 2024-09-01 18:57:07 +09:00
    Move freeze_blocks to sd3_train because it's only for sd3

青龍聖者@bdsqlsz | ef510b3cb9 | 2024-09-01 18:41:01 +09:00
    Sd3 freeze x_block (#1417)
    * Update sd3_train.py
    * add freeze block lr
    * Update train_util.py
    * update

Kohya S. | 928e0fc096 | 2024-09-01 18:29:27 +09:00
    Merge pull request #1529 from Akegarasu/sd3
    fix: text_encoder_conds referenced before assignment
dependabot[bot] | 1bcf8d600b | 2024-09-01 01:33:04 +00:00
    Bump crate-ci/typos from 1.19.0 to 1.24.3
    Bumps [crate-ci/typos](https://github.com/crate-ci/typos) from 1.19.0 to 1.24.3.
    - [Release notes](https://github.com/crate-ci/typos/releases)
    - [Changelog](https://github.com/crate-ci/typos/blob/master/CHANGELOG.md)
    - [Commits](https://github.com/crate-ci/typos/compare/v1.19.0...v1.24.3)
    ---
    updated-dependencies:
    - dependency-name: crate-ci/typos
      dependency-type: direct:production
      update-type: version-update:semver-minor
    ...
    Signed-off-by: dependabot[bot] <support@github.com>
Kohya S. | f8f5b16958 | 2024-08-31 21:37:07 +09:00
    Merge pull request #1540 from kohya-ss/dependabot/pip/opencv-python-4.8.1.78
    Bump opencv-python from 4.7.0.68 to 4.8.1.78

Kohya S. | 826ab5ce2e | 2024-08-31 21:36:33 +09:00
    Merge pull request #1532 from nandometzger/main
    Update train_util.py, bug fix

sdbds | 25c9040f4f | 2024-08-31 19:53:59 +08:00
    Update flux_train_utils.py

dependabot[bot] | 3a6154b7b0 | 2024-08-31 06:21:16 +00:00
    Bump opencv-python from 4.7.0.68 to 4.8.1.78
    Bumps [opencv-python](https://github.com/opencv/opencv-python) from 4.7.0.68 to 4.8.1.78.
    - [Release notes](https://github.com/opencv/opencv-python/releases)
    - [Commits](https://github.com/opencv/opencv-python/commits)
    ---
    updated-dependencies:
    - dependency-name: opencv-python
      dependency-type: direct:production
    ...
    Signed-off-by: dependabot[bot] <support@github.com>

Nando Metzger | 2a3aefb4e4 | 2024-08-30 08:15:05 +02:00
    Update train_util.py, bug fix
Akegarasu | 35882f8d5b | 2024-08-29 23:03:43 +08:00
    fix

Akegarasu | 34f2315047 | 2024-08-29 22:33:37 +08:00
    fix: text_encoder_conds referenced before assignment

Kohya S | 8fdfd8c857 | 2024-08-29 22:26:29 +09:00
    Update safetensors to version 0.4.4 in requirements.txt #1524

Kohya S | 8ecf0fc4bf | 2024-08-29 22:10:57 +09:00
    Refactor code to ensure args.guidance_scale is always a float #1525

Kohya S. | 930d709e3d | 2024-08-29 22:08:57 +09:00
    Merge pull request #1525 from Akegarasu/sd3
    make guidance_scale keep float in args

Kohya S. | daa6ad5165 | 2024-08-29 21:25:30 +09:00
    Update README.md
Kohya S | a0cfb0894c | 2024-08-29 21:20:33 +09:00
    Cleaned up README

Akegarasu | 6c0e8a5a17 | 2024-08-29 14:50:29 +08:00
    make guidance_scale keep float in args

Kohya S | a61cf73a5c | 2024-08-27 21:44:10 +09:00
    update readme

Kohya S | 3be712e3e0 | 2024-08-27 21:40:02 +09:00
    feat: Update direct loading fp8 ckpt for LoRA training

Kohya S | 0087a46e14 | 2024-08-27 19:59:40 +09:00
    FLUX.1 LoRA supports CLIP-L

Kohya S | 72287d39c7 | 2024-08-25 16:01:24 +09:00
    feat: Add shift option to --timestep_sampling in FLUX.1 fine-tuning and LoRA training
Kohya S | ea9242653c | 2024-08-24 21:24:44 +09:00
    Merge branch 'dev' into sd3

Kohya S | d5c076cf90 | 2024-08-24 21:21:39 +09:00
    update readme

Kohya S. | 4ca29edbff | 2024-08-24 21:16:53 +09:00
    Merge pull request #1505 from liesened/patch-2
    Add v-pred support for SDXL train

Kohya S | 5639c2adc0 | 2024-08-24 16:37:49 +09:00
    fix typo

Kohya S | cf689e7aa6 | 2024-08-24 16:35:43 +09:00
    feat: Add option to split projection layers and apply LoRA

Kohya S | 2e89cd2cc6 | 2024-08-24 12:39:54 +09:00
    Fix issue with attention mask not being applied in single blocks
liesen | 1e8108fec9 | 2024-08-24 01:38:17 +03:00
    Handle args.v_parameterization properly for MinSNR and changed prediction target

Kohya S | 99744af53a | 2024-08-22 21:34:24 +09:00
    Merge branch 'dev' into sd3

Kohya S | afb971f9c3 | 2024-08-22 21:33:15 +09:00
    fix SD1.5 LoRA extraction #1490

Kohya S | bf9f798985 | 2024-08-22 19:59:38 +09:00
    chore: fix typos, remove debug print

Kohya S | b0a980844a | 2024-08-22 19:57:29 +09:00
    added a script to extract LoRA

Kohya S | 2d8fa3387a | 2024-08-22 19:56:27 +09:00
    Fix to remove zero pad for t5xxl output
Kohya S | a4d27a232b | 2024-08-22 19:55:31 +09:00
    Fix --debug_dataset to work.

kohya-ss | 98c91a7625 | 2024-08-22 12:37:41 +09:00
    Fix bug in FLUX multi GPU training

Kohya S | e1cd19c0c0 | 2024-08-21 21:04:10 +09:00
    add stochastic rounding, fix single block

Kohya S | 2b07a92c8d | 2024-08-21 12:30:23 +09:00
    Fix error in applying mask in Attention and add LoRA converter script

Kohya S | e17c42cb0d | 2024-08-21 12:28:45 +09:00
    Add BFL/Diffusers LoRA converter #1467 #1458 #1483

Kohya S | 7e459c00b2 | 2024-08-21 08:02:33 +09:00
    Update T5 attention mask handling in FLUX
Kohya S | 6ab48b09d8 | 2024-08-20 21:39:43 +09:00
    feat: Support multi-resolution training with caching latents to disk

Kohya S. | 388b3b4b74 | 2024-08-20 19:34:57 +09:00
    Merge pull request #1482 from kohya-ss/flux-merge-lora
    Flux merge lora

Kohya S | dbed5126bd | 2024-08-20 19:33:47 +09:00
    chore: formatting

Kohya S | 9381332020 | 2024-08-20 19:32:26 +09:00
    revert merge function and add option to use new function

Kohya S | 6f6faf9b5a | 2024-08-20 19:16:25 +09:00
    fix to work with ai-toolkit LoRA

Kohya S. | 92b1f6d968 | 2024-08-20 19:06:06 +09:00
    Merge pull request #1469 from exveria1015/sd3
    Fix the Flux LoRA merge function
Kohya S | c62c95e862 | 2024-08-20 08:21:01 +09:00
    update about multi-resolution training in FLUX.1

Kohya S | 9e72be0a13 | 2024-08-20 08:19:00 +09:00
    Fix debug_dataset to work

Kohya S | 486fe8f70a | 2024-08-19 22:30:24 +09:00
    feat: reduce memory usage and add memory efficient option for model saving

Kohya S | 6e72a799c8 | 2024-08-19 21:55:28 +09:00
    reduce peak VRAM usage by keeping some blocks off CUDA