556f3f1696  2025-01-08 13:41:15 -05:00  rockerBOO
    Fix documentation, remove unused function, fix bucket reso for sd1.5, fix multiple datasets

1231f5114c  2025-01-07 22:31:41 -05:00  rockerBOO
    Remove unused train_util code, fix accelerate.log for wandb, add init_trackers library code

695f38962c  2025-01-03 15:25:12 -05:00  rockerBOO
    Move get_huber_threshold_if_needed

0522070d19  2025-01-03 15:20:25 -05:00  rockerBOO
    Fix training, validation split, revert to using upstream implemenation

6604b36044  2025-01-03 02:04:59 -05:00  rockerBOO
    Remove duplicate assignment

fbfc2753eb  2025-01-03 01:53:12 -05:00  rockerBOO
    Update text for train/reg with repeats

c8c3569df2  2025-01-03 01:26:45 -05:00  rockerBOO
    Cleanup order, types, print to logger

534059dea5  2025-01-03 01:18:15 -05:00  rockerBOO
    Typos and lingering is_train
|
d23c7322ee  2025-01-03 00:48:08 -05:00  rockerBOO
    Merge remote-tracking branch 'hina/feature/val-loss' into validation-loss-upstream
    Modified implementation for process_batch and cleanup validation recording

7f6e124c7c  2025-01-02 23:04:38 -05:00  rockerBOO
    Merge branch 'gesen2egee/val' into validation-loss-upstream
    Modified various implementations to restore original behavior

449c1c5c50  2025-01-02 15:59:20 -05:00  rockerBOO
    Adding modified train_util and config_util

8743532963  2025-01-02 15:57:12 -05:00  gesen2egee
    val

05bb9183fa  2024-12-27 16:47:59 +08:00  Hina Chen
    Add Validation loss for LoRA training
|
8e378cf03d  2024-12-11 19:43:44 +09:00  nhamanasu
    add RAdamScheduleFree support

e3fd6c52a0  2024-12-02 21:38:43 +09:00  Kohya S.
    Merge pull request #1812 from rockerBOO/tests
    Add pytest testing

1dc873d9b4  2024-12-01 22:00:44 +09:00  Kohya S
    update README and clean up code for schedulefree optimizer

14c9ba925f  2024-12-01 21:51:25 +09:00  Kohya S.
    Merge pull request #1811 from rockerBOO/schedule-free-prodigy
    Allow unknown schedule-free optimizers to continue to module loader

1476040787  2024-12-01 21:26:39 +09:00  Kohya S
    fix: update help text for huber loss parameters in train_util.py

cc11989755  2024-12-01 21:20:28 +09:00  Kohya S
    fix: refactor huber-loss calculation in multiple training scripts

14f642f88b  2024-12-01 13:30:35 +09:00  Kohya S
    fix: huber_schedule exponential not working on sd3_train.py
|
7b61e9eb58  2024-11-30 11:36:40 +00:00  recris
    Fix issues found in review (pt 2)

c7cadbc8c7  2024-11-29 15:52:03 -05:00  rockerBOO
    Add pytest testing

928b9393da  2024-11-29 14:12:34 -05:00  rockerBOO
    Allow unknown schedule-free optimizers to continue to module loader

740ec1d526  2024-11-28 20:38:32 +00:00  recris
    Fix issues found in review

420a180d93  2024-11-27 18:37:09 +00:00  recris
    Implement pseudo Huber loss for Flux and SD3

2bb0f547d7  2024-11-14 19:33:12 +09:00  Kohya S
    update grad hook creation to fix TE lr in sd3 fine tuning

2cb7a6db02  2024-11-12 21:39:13 +09:00  Kohya S
    feat: add block swap for FLUX.1/SD3 LoRA training

3fe94b058a  2024-11-12 08:09:07 +09:00  Kohya S
    update comment

26bd4540a6  2024-11-11 09:25:28 +08:00  sdbds
    init
|
b3248a8eef  2024-11-07 14:31:05 +01:00  feffy380
    fix: sort order when getting image size from cache file

9aa6f52ac3  2024-11-01 21:43:21 +09:00  Kohya S
    Fix memory leak in latent caching. bmp failed to cache

1434d8506f  2024-10-31 19:58:22 +09:00  Kohya S
    Support SD3.5M multi resolutional training

d4f7849592  2024-10-27 19:35:56 +09:00  kohya-ss
    prevent unintended cast for disk cached TE outputs

f52fb66e8f  2024-10-25 19:03:58 +09:00  Kohya S
    Merge branch 'sd3' into sd3_5_support

5fba6f514a  2024-10-25 19:03:27 +09:00  Kohya S
    Merge branch 'dev' into sd3

623017f716  2024-10-24 19:49:28 +09:00  Kohya S
    refactor SD3 CLIP to transformers etc.
|
be14c06267  2024-10-22 12:13:51 -04:00  catboxanon
    Remove v-pred warnings
    Different model architectures, such as SDXL, can take advantage of v-pred. It doesn't make sense to include these warnings anymore.

7fe8e162cb  2024-10-20 08:45:27 +09:00  Kohya S
    fix to work ControlNetSubset with custom_attributes

3cc5b8db99  2024-10-18 20:57:13 +09:00  Kohya S
    Diff Output Preserv loss for SDXL

886ffb4d65  2024-10-13 19:14:06 +09:00  kohya-ss
    Merge branch 'sd3' into multi-gpu-caching

bfc3a65acd  2024-10-13 19:08:16 +09:00  kohya-ss
    fix to work cache latents/text encoder outputs

2244cf5b83  2024-10-13 18:22:19 +09:00  kohya-ss
    load images in parallel when caching latents
|
c65cf3812d  2024-10-13 17:31:11 +09:00  Kohya S
    Merge branch 'sd3' into fast_image_sizes

c80c304779  2024-10-12 20:18:41 +09:00  kohya-ss
    Refactor caching in train scripts

ff4083b910  2024-10-12 16:39:36 +09:00  kohya-ss
    Merge branch 'sd3' into multi-gpu-caching

d005652d03  2024-10-12 14:33:02 +09:00  Kohya S.
    Merge pull request #1686 from Akegarasu/sd3
    fix: fix some distributed training error in windows

9f4dac5731  2024-10-10 14:08:55 +08:00  Akegarasu
    torch 2.4

3de42b6edb  2024-10-10 14:03:59 +08:00  Akegarasu
    fix: distributed training in windows

c2440f9e53  2024-10-03 21:32:21 +09:00  Kohya S
    fix cond image normlization, add independent LR for control

33e942e36e  2024-10-01 08:38:09 +09:00  Kohya S
    Merge branch 'sd3' into fast_image_sizes