Kohya S
7f17237ada
Merge pull request #92 from forestsource/add_save_n_epoch_ratio
...
Add save_n_epoch_ratio
2023-01-24 18:59:47 +09:00
Kohya S
a7218574f2
Update help message
2023-01-22 21:33:48 +09:00
Kohya S
8746188ed7
Add training_comment metadata.
2023-01-22 18:33:19 +09:00
forestsource
5e817e4343
Add save_n_epoch_ratio
2023-01-22 03:00:28 +09:00
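The save_n_epoch_ratio option added in this commit derives the checkpoint interval from a desired total number of saves rather than a fixed epoch step. A minimal illustrative sketch; the function name and exact rounding are assumptions here, not the sd-scripts implementation:

```python
def save_every_n_epochs(total_epochs: int, ratio: int) -> int:
    # Hypothetical helper: pick a save interval so that roughly
    # `ratio` checkpoints are written across the whole run.
    # A 20-epoch run with ratio=5 would save every 4 epochs.
    return max(total_epochs // ratio, 1)
```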
Kohya S
b4636d4185
Add scaling alpha for LoRA
2023-01-21 20:37:34 +09:00
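The alpha scaling referenced above follows the common LoRA convention of multiplying the low-rank update by alpha / rank, so changing the rank does not change the effective update magnitude. A minimal sketch of that scaling factor only; the function name is illustrative, not sd-scripts code:

```python
def lora_scale(alpha: float, rank: int) -> float:
    # Standard LoRA convention: the low-rank delta (up @ down) is
    # multiplied by alpha / rank before being added to the base weight.
    return alpha / rank
```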
Kohya S
22ee0ac467
Move TE/UN loss calc to train script
2023-01-21 12:51:17 +09:00
Kohya S
17089b1287
Merge branch 'dev' of https://github.com/kohya-ss/sd-scripts into dev
2023-01-21 12:46:20 +09:00
Kohya S
7ee808d5d7
Merge pull request #79 from mgz-dev/tensorboard-improvements
...
expand details in tensorboard logs
2023-01-21 12:46:13 +09:00
Kohya S
9ff26af68b
Update to add grad_ckpting etc to metadata
2023-01-21 12:36:31 +09:00
Kohya S
7dbcef745a
Merge pull request #77 from space-nuko/ss-extra-metadata
...
More helpful metadata
2023-01-21 12:18:23 +09:00
Kohya S
758323532b
add save_last_n_epochs_state to train_network
2023-01-19 20:59:45 +09:00
Kohya S
e6a8c9d269
Fix some LoRA not trained if gradient checkpointing
2023-01-19 20:39:33 +09:00
space-nuko
da48f74e7b
Add new version model/VAE hash to training metadata
2023-01-18 23:00:16 -08:00
michaelgzhang
303c3410e2
expand details in tensorboard logs
...
- Update tensorboard logging to track both unet and textencoder learning rates
- Update tensorboard logging to track both current and moving average epoch loss
- Clean up tensorboard log variable names for dashboard formatting
2023-01-18 13:10:13 -06:00
space-nuko
de1dde1a06
More helpful metadata
...
- dataset/reg image dirs
- random session ID
- keep_tokens
- training date
- output name
2023-01-17 16:28:35 -08:00
Kohya S
aa40cb9345
Add train epochs and max workers option to train
2023-01-15 13:07:47 +09:00
Kohya S
e4f9b2b715
Add VAE to metadata, add no_metadata option
2023-01-11 23:12:18 +09:00
space-nuko
0c4423d9dc
Add epoch number to metadata
2023-01-10 02:50:04 -08:00
space-nuko
2e4ce0fdff
Add training metadata to output LoRA model
2023-01-10 02:49:52 -08:00
Kohya S
fbaf373c8a
fix gradient accum not used for lr scheduler
2023-01-09 13:13:37 +09:00
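The fix above concerns sizing the LR scheduler under gradient accumulation: the optimizer (and therefore the scheduler) steps once per accumulation cycle, not once per micro-batch, so the scheduler's total step count must be divided accordingly. A minimal sketch, with an assumed helper name:

```python
def scheduler_steps(num_batches: int, grad_accum: int) -> int:
    # With gradient accumulation, weights update once every
    # `grad_accum` micro-batches, so the LR scheduler should be
    # built for num_batches // grad_accum steps, not num_batches.
    return num_batches // grad_accum
```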
Kohya S
6b62c44022
fix errors in fine tuning
2023-01-08 21:40:40 +09:00
Kohya S
9f1d3aca24
add save_state_on_train_end, fix reg imgs repeats
2023-01-07 20:20:37 +09:00
Kohya S
f56988b252
unify dataset and save functions
2023-01-05 08:10:22 +09:00
Kohya S
4c35006731
split common function from train_network to util
2023-01-03 20:22:25 +09:00
Kohya S
6b522b34c1
move code for xformers to train_util
2023-01-02 16:08:21 +09:00
Yuta Hayashibe
0f20453997
Fix typos
2022-12-31 14:39:36 +09:00
Kohya S
64de791f2c
set default dataset_repeats to 1, resolution check
2022-12-29 20:12:09 +09:00
Kohya S
445b34de1f
Add LoRA training/generating.
2022-12-25 21:34:59 +09:00