Kohya S
efb2a128cd
fix wandb val logging
2025-02-21 22:07:35 +09:00
Yidi
13df47516d
Remove position_ids for V2
...
The position_ids key causes errors with newer versions of transformers.
This has already been fixed in convert_ldm_clip_checkpoint_v1() but
not in v2.
The new code applies the same fix to convert_ldm_clip_checkpoint_v2().
2025-02-20 04:49:51 -05:00
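The fix above can be sketched as follows. This is a minimal illustration, not the exact sd-scripts code: newer transformers versions register position_ids as a non-persistent buffer, so loading a checkpoint that still contains the key fails, and dropping it beforehand avoids the error. The helper name `strip_position_ids` is hypothetical.

```python
def strip_position_ids(state_dict: dict) -> dict:
    """Return a copy of state_dict without any *.position_ids keys.

    Illustrative sketch: newer transformers versions no longer expect
    position_ids in the state dict, so removing the key before
    load_state_dict avoids a key-mismatch error.
    """
    return {k: v for k, v in state_dict.items() if not k.endswith("position_ids")}


# Toy checkpoint fragment standing in for a converted LDM CLIP state dict
sd = {
    "text_model.embeddings.position_ids": [[0, 1, 2]],
    "text_model.embeddings.token_embedding.weight": [[0.1, 0.2]],
}
cleaned = strip_position_ids(sd)
```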
rockerBOO
7f2747176b
Use resize_image where resizing is required
2025-02-19 14:20:40 -05:00
rockerBOO
545425c13e
Typo
2025-02-19 14:20:40 -05:00
rockerBOO
d0128d18be
Add resize interpolation CLI option
2025-02-19 14:20:40 -05:00
rockerBOO
58e9e146a3
Add resize interpolation configuration
2025-02-19 14:20:40 -05:00
Kohya S
dc7d5fb459
Merge branch 'sd3' into val-loss-improvement
2025-02-18 21:34:30 +09:00
rockerBOO
9436b41061
Fix validation split and add test
2025-02-17 14:28:41 -05:00
rockerBOO
f3a010978c
Clear sizes for validation reg images to be consistent
2025-02-16 22:28:34 -05:00
rockerBOO
3c7496ae3f
Fix sizes for validation split
2025-02-16 22:18:14 -05:00
Kohya S
a24db1d532
fix: validation timestep generation fails on SD/SDXL training
2025-02-04 22:02:42 +09:00
tsukimiya
4a71687d20
Remove unnecessary warning
...
(probably an oversight in the fix be14c06267 )
2025-02-04 00:42:27 +09:00
Kohya S
58b82a576e
Fix to work with validation dataset
2025-01-26 21:21:21 +09:00
rockerBOO
c04e5dfe92
Fix loss recorder on 0. Fix validation for cached runs. Assert on validation dataset
2025-01-23 09:57:24 -05:00
rockerBOO
b489082495
Disable repeats for validation datasets
2025-01-12 16:42:04 -05:00
rockerBOO
2bbb40ce51
Fix regularization images with validation
...
Adding metadata recording for validation arguments
Add comments about the validation split for clarity of intention
2025-01-12 14:29:50 -05:00
rockerBOO
264167fa16
Apply is_training_dataset only to DreamBoothDataset. Add validation_split check and warning
2025-01-09 12:43:58 -05:00
rockerBOO
9fde0d7972
Handle tuple return from generate_dataset_group_by_blueprint
2025-01-08 18:38:20 -05:00
rockerBOO
556f3f1696
Fix documentation, remove unused function, fix bucket reso for sd1.5, fix multiple datasets
2025-01-08 13:41:15 -05:00
rockerBOO
1231f5114c
Remove unused train_util code, fix accelerate.log for wandb, add init_trackers library code
2025-01-07 22:31:41 -05:00
Kohya S
a9c5aa1f93
add CFG to FLUX.1 sample image
2025-01-05 22:28:51 +09:00
rockerBOO
695f38962c
Move get_huber_threshold_if_needed
2025-01-03 15:25:12 -05:00
rockerBOO
0522070d19
Fix training and validation split; revert to using upstream implementation
2025-01-03 15:20:25 -05:00
rockerBOO
6604b36044
Remove duplicate assignment
2025-01-03 02:04:59 -05:00
rockerBOO
fbfc2753eb
Update text for train/reg with repeats
2025-01-03 01:53:12 -05:00
rockerBOO
c8c3569df2
Cleanup order, types, print to logger
2025-01-03 01:26:45 -05:00
rockerBOO
534059dea5
Typos and lingering is_train
2025-01-03 01:18:15 -05:00
rockerBOO
d23c7322ee
Merge remote-tracking branch 'hina/feature/val-loss' into validation-loss-upstream
...
Modified implementation of process_batch and cleaned up validation
recording
2025-01-03 00:48:08 -05:00
rockerBOO
7f6e124c7c
Merge branch 'gesen2egee/val' into validation-loss-upstream
...
Modified various implementations to restore original behavior
2025-01-02 23:04:38 -05:00
rockerBOO
449c1c5c50
Adding modified train_util and config_util
2025-01-02 15:59:20 -05:00
gesen2egee
8743532963
val
2025-01-02 15:57:12 -05:00
Hina Chen
05bb9183fa
Add Validation loss for LoRA training
2024-12-27 16:47:59 +08:00
nhamanasu
8e378cf03d
add RAdamScheduleFree support
2024-12-11 19:43:44 +09:00
青龍聖者@bdsqlsz
abff4b0ec7
Unify controlnet parameter names and rename scripts. ( #1821 )
...
* Update sd3_train.py
* add freeze block lr
* Update train_util.py
* update
* Revert "add freeze block lr"
This reverts commit 8b1653548f .
# Conflicts:
# library/train_util.py
# sd3_train.py
* use same control net model path
* use controlnet_model_name_or_path
2024-12-07 17:12:46 +09:00
Kohya S
6bee18db4f
fix: resolve model corruption issue with pos_embed when using --enable_scaled_pos_embed
2024-12-07 15:12:27 +09:00
kohya-ss
e369b9a252
docs: update README with FLUX.1 ControlNet training details and improve argument help text
2024-12-02 23:38:54 +09:00
Kohya S.
09a3740f6c
Merge pull request #1813 from minux302/flux-controlnet
...
Add Flux ControlNet
2024-12-02 23:32:16 +09:00
Kohya S.
e3fd6c52a0
Merge pull request #1812 from rockerBOO/tests
...
Add pytest testing
2024-12-02 21:38:43 +09:00
Kohya S
1dc873d9b4
update README and clean up code for schedulefree optimizer
2024-12-01 22:00:44 +09:00
Kohya S.
14c9ba925f
Merge pull request #1811 from rockerBOO/schedule-free-prodigy
...
Allow unknown schedule-free optimizers to continue to module loader
2024-12-01 21:51:25 +09:00
Kohya S
1476040787
fix: update help text for huber loss parameters in train_util.py
2024-12-01 21:26:39 +09:00
Kohya S
cc11989755
fix: refactor huber-loss calculation in multiple training scripts
2024-12-01 21:20:28 +09:00
Kohya S
14f642f88b
fix: huber_schedule exponential not working on sd3_train.py
2024-12-01 13:30:35 +09:00
Kohya S.
a5a27fe4c3
Merge pull request #1808 from recris/huber-loss-flux
...
Implement pseudo Huber loss for Flux and SD3
2024-12-01 13:15:33 +09:00
recris
7b61e9eb58
Fix issues found in review (pt 2)
2024-11-30 11:36:40 +00:00
Kohya S
9c885e549d
fix: improve pos_embed handling for oversized images and update resolution_area_to_latent_size, when sample image size > train image size
2024-11-30 18:25:50 +09:00
rockerBOO
c7cadbc8c7
Add pytest testing
2024-11-29 15:52:03 -05:00
rockerBOO
928b9393da
Allow unknown schedule-free optimizers to continue to module loader
2024-11-29 14:12:34 -05:00
minux302
be5860f8e2
add schnell option to load_cn
2024-11-30 00:08:21 +09:00
minux302
9dff44d785
fix device
2024-11-29 14:40:38 +00:00