Kohya S
86279c8855
Merge branch 'dev' into DyLoRA-xl
2024-02-24 19:18:36 +09:00
BootsofLagrangian
4d5186d1cf
refactored code, moved some functions into train_utils.py
2024-02-22 16:20:53 +09:00
tamlog06
a6f1ed2e14
fix dylora create_modules error
2024-02-18 13:20:47 +00:00
Kohya S
d1fb480887
format by black
2024-02-18 09:13:24 +09:00
Kohya S
75e4a951d0
update readme
2024-02-17 12:04:12 +09:00
Kohya S
42f3318e17
Merge pull request #1116 from kohya-ss/dev_device_support
...
Dev device support
2024-02-17 11:58:02 +09:00
Kohya S
baa0e97ced
Merge branch 'dev' into dev_device_support
2024-02-17 11:54:07 +09:00
Kohya S
71ebcc5e25
update readme and gradual latent doc
2024-02-12 14:52:19 +09:00
Kohya S
93bed60762
fix --console_log_xxx options to work
2024-02-12 14:49:29 +09:00
Kohya S
41d32c0be4
Merge pull request #1117 from kohya-ss/gradual_latent_hires_fix
...
Gradual latent hires fix
2024-02-12 14:21:27 +09:00
Kohya S
cbe9c5dc06
support Deep Shrink with regional LoRA, add prompter module
2024-02-12 14:17:27 +09:00
Kohya S
d3745db764
add args for logging
2024-02-12 13:15:21 +09:00
Kohya S
358ca205a3
Merge branch 'dev' into dev_device_support
2024-02-12 13:01:54 +09:00
Kohya S
c748719115
fix indent
2024-02-12 12:59:45 +09:00
Kohya S
98f42d3a0b
Merge branch 'dev' into gradual_latent_hires_fix
2024-02-12 12:59:25 +09:00
Kohya S
35c6053de3
Merge pull request #1104 from kohya-ss/dev_improve_log
...
replace print with logger
2024-02-12 11:33:32 +09:00
Kohya S
20ae603221
Merge branch 'dev' into gradual_latent_hires_fix
2024-02-12 11:26:36 +09:00
Kohya S
672851e805
Merge branch 'dev' into dev_improve_log
2024-02-12 11:24:33 +09:00
Kohya S
e579648ce9
fix help for highvram arg
2024-02-12 11:12:41 +09:00
Kohya S
e24d9606a2
add clean_memory_on_device and use it from training
2024-02-12 11:10:52 +09:00
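The `clean_memory_on_device` helper added in this commit is not shown in the log; a minimal sketch of what such a helper typically does (the exact signature and device branches are assumptions, not the repository's actual code):

```python
import gc


def clean_memory_on_device(device_type: str) -> None:
    """Free cached allocator memory for the given device type (illustrative sketch)."""
    gc.collect()  # drop Python-level garbage first so caches can actually shrink
    try:
        import torch  # treated as an optional dependency in this sketch
    except ImportError:
        return
    if device_type == "cuda" and torch.cuda.is_available():
        torch.cuda.empty_cache()
    elif device_type == "xpu" and hasattr(torch, "xpu") and torch.xpu.is_available():
        torch.xpu.empty_cache()
    elif device_type == "mps" and torch.backends.mps.is_available():
        torch.mps.empty_cache()
```

Calling this between training phases (e.g. after caching latents) trades a small pause for a lower VRAM peak.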
Kohya S
75ecb047e2
Merge branch 'dev' into dev_device_support
2024-02-11 19:51:28 +09:00
Kohya S
f897d55781
Merge pull request #1113 from kohya-ss/dev_multi_gpu_sample_gen
...
Dev multi gpu sample gen
2024-02-11 19:49:08 +09:00
Kohya S
7202596393
use logger to print tag frequencies
2024-02-10 09:59:12 +09:00
BootsofLagrangian
03f0816f86
found the reason grad accum steps were not working: it was because of my accelerate settings
2024-02-09 17:47:49 +09:00
Kohya S
5d9e2873f6
make rich output to stderr instead of stdout
2024-02-08 21:38:02 +09:00
Kohya S
055f02e1e1
add logging args for training scripts
2024-02-08 21:16:42 +09:00
Kohya S
9b8ea12d34
update log initialization without rich
2024-02-08 21:06:39 +09:00
Kohya S
74fe0453b2
add comment for get_preferred_device
2024-02-08 20:58:54 +09:00
BootsofLagrangian
a98fecaeb1
forgot to set mixed_precision for deepspeed, sorry
2024-02-07 17:19:46 +09:00
BootsofLagrangian
2445a5b74e
remove test requirements
2024-02-07 16:48:18 +09:00
BootsofLagrangian
62556619bd
fix full_fp16 compatibility and train_step
2024-02-07 16:42:05 +09:00
BootsofLagrangian
7d2a9268b9
make the offloading method runnable for all trainers
2024-02-05 22:42:06 +09:00
BootsofLagrangian
3970bf4080
maybe fix branch logic to run offloading
2024-02-05 22:40:43 +09:00
BootsofLagrangian
4295f91dcd
fix VAE handling in all trainers
2024-02-05 20:19:56 +09:00
BootsofLagrangian
2824312d5e
fix vae type error during training sdxl
2024-02-05 20:13:28 +09:00
BootsofLagrangian
64873c1b43
fix offload_optimizer_device typo
2024-02-05 17:11:50 +09:00
Kohya S
efd3b58973
Add logging arguments and update logging setup
2024-02-04 20:44:10 +09:00
Kohya S
6279b33736
fallback to basic logging if rich is not installed
2024-02-04 18:28:54 +09:00
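The pattern described by these logging commits (rich on stderr when available, plain logging otherwise) can be sketched as follows; the function name and handler formatting are illustrative, not the repository's actual `setup_logging`:

```python
import logging
import sys


def setup_logging(level: int = logging.INFO) -> None:
    """Configure root logging, preferring rich if it is installed (illustrative sketch)."""
    try:
        from rich.console import Console
        from rich.logging import RichHandler

        # log to stderr so generated output on stdout stays clean
        handler: logging.Handler = RichHandler(console=Console(stderr=True))
    except ImportError:
        # fallback to basic logging if rich is not installed
        handler = logging.StreamHandler(sys.stderr)
        handler.setFormatter(logging.Formatter("%(levelname)s %(name)s: %(message)s"))
    logging.basicConfig(level=level, handlers=[handler], force=True)
```

Scripts would call `setup_logging()` once at startup and then use `logging.getLogger(__name__)` instead of `print`.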
Yuta Hayashibe
5f6bf29e52
Replace print with logger if they are logs (#905)
...
* Add get_my_logger()
* Use logger instead of print
* Fix log level
* Removed line-breaks for readability
* Use setup_logging()
* Add rich to requirements.txt
* Make simple
* Use logger instead of print
---------
Co-authored-by: Kohya S <52813779+kohya-ss@users.noreply.github.com>
2024-02-04 18:14:34 +09:00
Kohya S
e793d7780d
reduce peak VRAM in sample gen
2024-02-04 17:31:01 +09:00
mgz
1492bcbfa2
add --new_conv_rank option
...
update script to also take a separate conv rank value
2024-02-03 23:18:55 -06:00
mgz
bf2de5620c
fix formatting in resize_lora.py
2024-02-03 20:09:37 -06:00
BootsofLagrangian
dfe08f395f
support deepspeed
2024-02-04 03:12:42 +09:00
Kohya S
6269682c56
unification of gen scripts for SD and SDXL, work in progress
2024-02-03 23:33:48 +09:00
Kohya S
2f9a344297
fix typo
2024-02-03 23:26:57 +09:00
Kohya S
11aced3500
simplify multi-GPU sample generation
2024-02-03 22:25:29 +09:00
DKnight54
1567ce1e17
Enable distributed sample image generation on multi-GPU environment (#1061)
...
* Update train_util.py: modify to attempt to enable multi-GPU inference
* Update train_util.py: additional VRAM checking; refactor check_vram_usage to return a string for use with accelerator.print
* many iterative updates to train_util.py and train_network.py
* remove sample image debug outputs
* Cleanup of debugging outputs
* adopt more elegant coding
Co-authored-by: Aarni Koskela <akx@iki.fi>
* Update train_util.py: fix leftover debugging code; attempt to refactor inference into a separate function
* refactor generation of the distributed prompt list into generate_per_device_prompt_list()
* clean up missing variables
* fix syntax error
* true random sample image generation: reinitialize the random seed to true random if a seed was set
* simplify per-process prompt
---------
Co-authored-by: Aarni Koskela <akx@iki.fi>
2024-02-03 21:46:31 +09:00
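The `generate_per_device_prompt_list()` refactor named in this PR implies splitting the sample prompts into disjoint per-process subsets; a minimal sketch (the signature and round-robin split are assumptions, not the actual implementation):

```python
def generate_per_device_prompt_list(prompts, num_processes):
    """Split sample prompts into one disjoint subset per process (illustrative sketch)."""
    # round-robin assignment: process `rank` takes prompts[rank], prompts[rank + n], ...
    # so multi-GPU sample generation does no duplicate work
    return [prompts[rank::num_processes] for rank in range(num_processes)]
```

Each process would then generate images only for its own sublist, and the results can be gathered afterwards.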
Kohya S
5cca1fdc40
add highvram option and do not clear cache in caching latents
2024-02-01 21:55:55 +09:00
Kohya S
9f0f0d573d
Merge pull request #1092 from Disty0/dev_device_support
...
Fix IPEX support and add XPU device to device_utils
2024-02-01 20:41:21 +09:00
dependabot[bot]
716a92cbed
Bump crate-ci/typos from 1.16.26 to 1.17.2
...
Bumps [crate-ci/typos](https://github.com/crate-ci/typos) from 1.16.26 to 1.17.2.
- [Release notes](https://github.com/crate-ci/typos/releases)
- [Changelog](https://github.com/crate-ci/typos/blob/master/CHANGELOG.md)
- [Commits](https://github.com/crate-ci/typos/compare/v1.16.26...v1.17.2)
---
updated-dependencies:
- dependency-name: crate-ci/typos
dependency-type: direct:production
update-type: version-update:semver-minor
...
Signed-off-by: dependabot[bot] <support@github.com>
2024-02-01 01:57:52 +00:00