Kohya S
048e7cd428
add Lion optimizer support
2023-02-19 15:26:14 +09:00
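The Lion optimizer added above uses a sign-based update with decoupled weight decay. A minimal numpy sketch of one update step under the published algorithm (the function name is illustrative, not the script's API):

```python
import numpy as np

def lion_step(param, grad, momentum, lr=1e-4, beta1=0.9, beta2=0.99, weight_decay=0.0):
    """One Lion update step (a sketch, not the library implementation)."""
    # Update direction: sign of the interpolation between momentum and gradient
    update = np.sign(beta1 * momentum + (1.0 - beta1) * grad)
    # Decoupled weight decay, applied as in AdamW
    param = param - lr * (update + weight_decay * param)
    # Momentum uses a different interpolation coefficient (beta2)
    momentum = beta2 * momentum + (1.0 - beta2) * grad
    return param, momentum
```

Because the update is just a sign, Lion typically needs a smaller learning rate than AdamW.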
Isotr0py
78d1fb5ce6
Add '--lowram' argument
2023-02-17 12:08:54 +08:00
Kohya S
d01d953262
Merge pull request #196 from space-nuko/add-noise-offset-metadata
...
Add noise offset to metadata
2023-02-16 22:01:02 +09:00
Kohya S
914d1505df
Merge pull request #189 from shirayu/improve_loss_track
...
Show the moving average loss
2023-02-16 22:00:26 +09:00
space-nuko
496c8cdc09
Add noise-offset to metadata
2023-02-16 02:56:39 -08:00
Kohya S
43c0a69843
Add noise_offset
2023-02-14 21:15:48 +09:00
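The noise_offset feature above adds a small per-sample, per-channel constant to the sampled noise so the model can learn large overall brightness shifts. A hedged numpy sketch of the idea (not the script's exact code, which operates on torch tensors):

```python
import numpy as np

def apply_noise_offset(noise, noise_offset=0.1, rng=None):
    """Add a per-sample, per-channel constant offset to sampled noise.

    `noise` has shape (batch, channels, height, width); the offset is
    broadcast over the spatial dimensions so each channel shifts as a whole.
    """
    rng = np.random.default_rng() if rng is None else rng
    b, c = noise.shape[0], noise.shape[1]
    offset = rng.standard_normal((b, c, 1, 1))
    return noise + noise_offset * offset
```

Typical values for `noise_offset` are small (around 0.05 to 0.1); setting it to 0 disables the feature.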
Yuta Hayashibe
8aed5125de
Removed call of sum()
2023-02-14 21:11:30 +09:00
Kohya S
e0f007f2a9
Fix import
2023-02-14 20:55:38 +09:00
Kohya S
8f1e930bf4
Merge pull request #187 from space-nuko/add-commit-hash
...
Add commit hash to metadata
2023-02-14 19:52:30 +09:00
Yuta Hayashibe
21f5b618c3
Show the moving average loss
2023-02-14 19:46:27 +09:00
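The moving-average loss shown in the progress bar can be sketched as a tiny tracker (illustrative only, not the actual class in the scripts):

```python
class LossRecorder:
    """Minimal running-average loss tracker (a sketch of the idea)."""

    def __init__(self):
        self.losses = []

    def add(self, loss: float) -> None:
        # Record the per-step loss
        self.losses.append(loss)

    @property
    def moving_average(self) -> float:
        # Average of all recorded losses so far
        return sum(self.losses) / len(self.losses)
```

Averaging smooths the noisy per-step loss so the trend over an epoch is easier to read.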
space-nuko
5471b0deb0
Add commit hash to metadata
2023-02-13 02:58:06 -08:00
Isotr0py
2b1a3080e7
Add type checking
2023-02-12 15:32:38 +08:00
Isotr0py
92a1af8024
Merge branch 'kohya-ss:main' into support-multi-gpu
2023-02-12 15:06:46 +08:00
Kohya S
3a72e6f003
add tag dropout
2023-02-09 21:35:27 +09:00
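Tag dropout randomly removes comma-separated tags from a caption during training, which acts as caption-level regularization. A sketch of the idea (the function name and `keep_tokens` handling here are illustrative):

```python
import random

def dropout_tags(caption, dropout_rate=0.1, keep_tokens=0, rng=None):
    """Randomly drop comma-separated tags from a caption.

    The first `keep_tokens` tags are always kept; each remaining tag is
    dropped independently with probability `dropout_rate`.
    """
    rng = rng or random.Random()
    tags = [t.strip() for t in caption.split(",")]
    kept = tags[:keep_tokens]
    kept += [t for t in tags[keep_tokens:] if rng.random() >= dropout_rate]
    return ", ".join(kept)
```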
Isotr0py
b8ad17902f
fix get_hidden_states 'expected scalar type' error again
2023-02-08 23:09:59 +08:00
Isotr0py
9a9ac79edf
correct wrongly inserted code for noise_pred fix
2023-02-08 22:30:20 +08:00
Isotr0py
6473aa1dd7
fix Input type error in noise_pred when using DDP
2023-02-08 21:32:21 +08:00
Isotr0py
b599adc938
fix Input type error when using DDP
2023-02-08 20:14:20 +08:00
Isotr0py
fb312acb7f
support DistributedDataParallel
2023-02-08 20:12:43 +08:00
Isotr0py
938bd71844
lower RAM usage
2023-02-08 18:31:27 +08:00
Kohya S
e42b2f7aa9
conditional caption dropout (in progress)
2023-02-07 22:28:56 +09:00
forestsource
7db98baa86
Add dropout options
2023-02-07 00:01:30 +09:00
Kohya S
ea2dfd09ef
update bucketing features
2023-02-05 21:37:46 +09:00
Kohya S
9fd7fb813d
Merge branch 'dev' into main
2023-02-04 18:16:03 +09:00
Kohya S
58a809eaff
Add comment
2023-02-03 21:04:03 +09:00
hitomi
26a81d075c
add --persistent_data_loader_workers option
2023-02-01 16:02:15 +08:00
Kohya S
74eba06d13
Merge pull request #104 from space-nuko/caption-frequency-metadata
...
Add tag frequency metadata
2023-01-31 20:56:15 +09:00
Kohya S
7817e95a86
change name of arg
2023-01-29 20:28:24 +09:00
Kohya S
ed2e431950
Merge branch 'main' into caption-frequency-metadata
2023-01-29 17:50:23 +09:00
michaelgzhang
0fef7b4684
monkeypatch updated get_scheduler for diffusers
...
enables use of "num_cycles" and "power" for cosine_with_restarts and polynomial learning rate schedulers
2023-01-27 16:42:11 -06:00
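The patched get_scheduler forwards "num_cycles" and "power" through to the underlying schedules. The two schedules can be sketched as plain functions (simplified, warmup omitted; these follow the usual formulas, not diffusers' exact code):

```python
import math

def polynomial_lr(step, total_steps, power=1.0, lr_init=1e-4, lr_end=1e-6):
    """Polynomial decay from lr_init to lr_end; power=1.0 is linear decay."""
    progress = min(step / total_steps, 1.0)
    return (lr_init - lr_end) * (1.0 - progress) ** power + lr_end

def cosine_with_restarts_lr(step, total_steps, num_cycles=1, lr_init=1e-4):
    """Cosine schedule that restarts num_cycles times over total_steps."""
    progress = min(step / total_steps, 1.0)
    if progress >= 1.0:
        return 0.0
    return lr_init * 0.5 * (1.0 + math.cos(math.pi * ((num_cycles * progress) % 1.0)))
```

With `num_cycles=2`, the cosine schedule decays to zero at the halfway point and then jumps back to `lr_init` for a second cycle.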
Kohya S
9f644d8dc3
Change default save format to safetensors
2023-01-24 20:16:21 +09:00
Kohya S
36dc97c841
Merge pull request #103 from space-nuko/bucketing-metadata
...
Add bucketing metadata
2023-01-24 19:06:21 +09:00
Kohya S
7f17237ada
Merge pull request #92 from forestsource/add_save_n_epoch_ratio
...
Add save_n_epoch_ratio
2023-01-24 18:59:47 +09:00
space-nuko
2e8a3d20dd
Add tag frequency metadata
2023-01-23 17:43:03 -08:00
space-nuko
66051883fb
Add bucketing metadata
2023-01-23 17:26:58 -08:00
Kohya S
a7218574f2
Update help message
2023-01-22 21:33:48 +09:00
Kohya S
8746188ed7
Add training_comment metadata.
2023-01-22 18:33:19 +09:00
forestsource
5e817e4343
Add save_n_epoch_ratio
2023-01-22 03:00:28 +09:00
Kohya S
b4636d4185
Add scaling alpha for LoRA
2023-01-21 20:37:34 +09:00
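The scaling alpha multiplies the low-rank update by alpha / rank, so the effective magnitude of the update does not change when the rank is changed. A numpy sketch of the forward pass (the function name is illustrative):

```python
import numpy as np

def lora_forward(x, weight, lora_down, lora_up, alpha):
    """LoRA forward pass with the alpha / rank scaling.

    weight:    (out_features, in_features) frozen base weight
    lora_down: (rank, in_features) trained down-projection
    lora_up:   (out_features, rank) trained up-projection
    """
    rank = lora_down.shape[0]
    scale = alpha / rank
    # Base output plus the scaled low-rank update
    return x @ weight.T + (x @ lora_down.T @ lora_up.T) * scale
```

Setting `alpha` equal to the rank recovers a scale of 1.0, i.e. the unscaled behavior.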
Kohya S
22ee0ac467
Move TE/UN loss calc to train script
2023-01-21 12:51:17 +09:00
Kohya S
17089b1287
Merge branch 'dev' of https://github.com/kohya-ss/sd-scripts into dev
2023-01-21 12:46:20 +09:00
Kohya S
7ee808d5d7
Merge pull request #79 from mgz-dev/tensorboard-improvements
...
expand details in tensorboard logs
2023-01-21 12:46:13 +09:00
Kohya S
9ff26af68b
Update to add gradient checkpointing etc. to metadata
2023-01-21 12:36:31 +09:00
Kohya S
7dbcef745a
Merge pull request #77 from space-nuko/ss-extra-metadata
...
More helpful metadata
2023-01-21 12:18:23 +09:00
Kohya S
758323532b
add save_last_n_epochs_state to train_network
2023-01-19 20:59:45 +09:00
Kohya S
e6a8c9d269
Fix some LoRA modules not being trained when gradient checkpointing is enabled
2023-01-19 20:39:33 +09:00
space-nuko
da48f74e7b
Add new version model/VAE hash to training metadata
2023-01-18 23:00:16 -08:00
michaelgzhang
303c3410e2
expand details in tensorboard logs
...
- Update tensorboard logging to track both unet and textencoder learning rates
- Update tensorboard logging to track both current and moving average epoch loss
- Clean up tensorboard log variable names for dashboard formatting
2023-01-18 13:10:13 -06:00
space-nuko
de1dde1a06
More helpful metadata
...
- dataset/reg image dirs
- random session ID
- keep_tokens
- training date
- output name
2023-01-17 16:28:35 -08:00
Kohya S
aa40cb9345
Add train epochs and max workers option to train
2023-01-15 13:07:47 +09:00