Kohya S
758323532b
add save_last_n_epochs_state to train_network
2023-01-19 20:59:45 +09:00
Kohya S
e6a8c9d269
Fix some LoRA modules not trained with gradient checkpointing
2023-01-19 20:39:33 +09:00
space-nuko
da48f74e7b
Add new version model/VAE hash to training metadata
2023-01-18 23:00:16 -08:00
michaelgzhang
303c3410e2
expand details in tensorboard logs
...
- Update tensorboard logging to track both unet and textencoder learning rates
- Update tensorboard logging to track both current and moving average epoch loss
- Clean up tensorboard log variable names for dashboard formatting
2023-01-18 13:10:13 -06:00
space-nuko
de1dde1a06
More helpful metadata
...
- dataset/reg image dirs
- random session ID
- keep_tokens
- training date
- output name
2023-01-17 16:28:35 -08:00
Kohya S
aa40cb9345
Add train epochs and max workers option to train
2023-01-15 13:07:47 +09:00
Kohya S
e4f9b2b715
Add VAE to metadata, add no_metadata option
2023-01-11 23:12:18 +09:00
space-nuko
0c4423d9dc
Add epoch number to metadata
2023-01-10 02:50:04 -08:00
space-nuko
2e4ce0fdff
Add training metadata to output LoRA model
2023-01-10 02:49:52 -08:00
Kohya S
fbaf373c8a
fix gradient accum not used for lr scheduler
2023-01-09 13:13:37 +09:00
Kohya S
6b62c44022
fix errors in fine tuning
2023-01-08 21:40:40 +09:00
Kohya S
9f1d3aca24
add save_state_on_train_end, fix reg imgs repeats
2023-01-07 20:20:37 +09:00
Kohya S
f56988b252
unify dataset and save functions
2023-01-05 08:10:22 +09:00
Kohya S
4c35006731
split common function from train_network to util
2023-01-03 20:22:25 +09:00
Kohya S
6b522b34c1
move code for xformers to train_util
2023-01-02 16:08:21 +09:00
Yuta Hayashibe
0f20453997
Fix typos
2022-12-31 14:39:36 +09:00
Kohya S
64de791f2c
set default dataset_repeats to 1, resolution check
2022-12-29 20:12:09 +09:00
Kohya S
445b34de1f
Add LoRA training/generating.
2022-12-25 21:34:59 +09:00