Nir Weingarten
ab716302e4
Added cli argument for wandb session name
2024-01-03 11:52:38 +02:00
Disty0
b9d2181192
Cleanup
2024-01-02 11:51:29 +03:00
Disty0
49148eb36e
Disable Diffusers slicing if device is not XPU
2024-01-02 11:50:08 +03:00
Disty0
479bac447e
Fix typo
2024-01-01 12:51:23 +03:00
Disty0
15d5e78ac2
Update IPEX Libs
2024-01-01 12:44:26 +03:00
Plat
62e7516537
feat: support torch.compile
2023-12-27 02:17:24 +09:00
Kohya S
2186e417ba
fix size of bucket < min_size ref #1008
2023-12-20 22:12:21 +09:00
Kohya S
1519e3067c
Merge pull request #1008 from Cauldrath/zero_height_error
...
Fix zero height buckets
2023-12-20 22:09:04 +09:00
Kohya S
35e5424255
Merge pull request #1007 from Disty0/dev
...
IPEX fix SDPA
2023-12-20 21:53:11 +09:00
Cauldrath
f8360a4831
Fix zero height buckets
...
If max_size is too large relative to max_reso, it will calculate a height of zero for some buckets.
This causes a crash later when it divides the width by the height.
This change also simplifies some math and consolidates the redundant "size" variable into "width".
2023-12-19 18:35:09 -05:00
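The failure mode described in this commit can be illustrated with a simplified, hypothetical sketch (not the repository's actual `make_bucket_resolutions` code): bucket height is derived from a fixed area budget, so integer division truncates to zero once a candidate width alone exceeds that budget.

```python
# Hypothetical simplified sketch of the bug, assuming an area-budget
# bucketing scheme: when max_size is large relative to max_reso, the
# derived height truncates to 0 for wide buckets.
def bucket_heights(max_reso=(256, 256), max_size=4096, divisor=64):
    # area budget in divisor-sized units, fixed by max_reso
    max_area = (max_reso[0] // divisor) * (max_reso[1] // divisor)
    heights = {}
    for width in range(divisor, max_size + 1, divisor):
        w_units = width // divisor
        h_units = max_area // w_units  # truncates to 0 when w_units > max_area
        heights[width] = h_units * divisor
    return heights

hs = bucket_heights()
# any bucket with height 0 makes a later width / height aspect-ratio
# computation raise ZeroDivisionError
zero_height = [w for w, h in hs.items() if h == 0]
```

With these illustrative numbers, every width of 1088 or more gets height 0, which is the crash the commit fixes.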
Disty0
8556b9d7f5
IPEX fix SDPA
2023-12-19 22:59:06 +03:00
Kohya S
3efd90b2ad
fix sampling in training with multiple gpus ref #989
2023-12-15 22:35:54 +09:00
Disty0
aff05e043f
IPEX support for Torch 2.1 and fix dtype errors
2023-12-13 19:40:38 +03:00
Kohya S
d309a27a51
change option names, add ddp kwargs if needed ref #1000
2023-12-13 21:02:26 +09:00
Kohya S
471d274803
Merge pull request #1000 from Isotr0py/dev
...
Fix multi-gpu SDXL training
2023-12-13 20:52:11 +09:00
Kohya S
35f4c9b5c7
fix an error when keep_tokens_separator is not set ref #975
2023-12-12 21:43:21 +09:00
Kohya S
034a49c69d
Merge pull request #975 from Linaqruf/dev
...
Add keep_tokens_separator as alternative for keep_tokens
2023-12-12 21:28:32 +09:00
Isotr0py
bb5ae389f7
fix DDP SDXL training
2023-12-12 19:58:44 +08:00
Kohya S
9278031e60
Merge branch 'dev' into gradual_latent_hires_fix
2023-12-12 07:49:36 +09:00
Kohya S
4a2cef887c
fix lllite training not working ref #913
2023-12-10 09:23:37 +09:00
Kohya S
42750f7846
fix error on pool_workaround in sdxl TE training ref #994
2023-12-10 09:18:33 +09:00
Kohya S
e8c3a02830
Merge branch 'dev' into gradual_latent_hires_fix
2023-12-08 08:23:53 +09:00
Isotr0py
db84530074
Fix gradients synchronization for multi-GPUs training (#989)
...
* delete DDP wrapper
* fix train_db vae and train_network
* fix train_db vae and train_network unwrap
* network grad sync
---------
Co-authored-by: Kohya S <52813779+kohya-ss@users.noreply.github.com>
2023-12-07 22:01:42 +09:00
Kohya S
72bbaac96d
Merge pull request #985 from Disty0/dev
...
Update IPEX hijacks
2023-12-07 21:39:24 +09:00
Kohya S
5713d63dc5
add temporary workaround for playground-v2
2023-12-06 23:08:02 +09:00
Disty0
dd7bb33ab6
IPEX fix torch.UntypedStorage.is_cuda
2023-12-05 22:18:47 +03:00
Disty0
a9c6182b3f
Cleanup IPEX libs
2023-12-05 19:52:31 +03:00
Disty0
3d70137d31
Disable IPEX attention if the GPU supports 64 bit
2023-12-05 19:40:16 +03:00
Disty0
bce9a081db
Update IPEX hijacks
2023-12-05 14:17:31 +03:00
Kohya S
46cf41cc93
Merge pull request #961 from rockerBOO/attention-processor
...
Add attention processor
2023-12-03 21:24:12 +09:00
Kohya S
81a440c8e8
Merge pull request #955 from xzuyn/paged_adamw
...
Add PagedAdamW
2023-12-03 21:22:38 +09:00
Kohya S
f24a3b5282
show seed in generating samples
2023-12-03 21:15:30 +09:00
Kohya S
383b4a2c3e
Merge pull request #907 from shirayu/add_option_sample_at_first
...
Add option --sample_at_first
2023-12-03 21:00:32 +09:00
Kohya S
df59822a27
Merge pull request #906 from shirayu/accept_scheduler_designation_in_training
...
Accept sampler designation in sampling of training
2023-12-03 20:46:16 +09:00
Kohya S
7a4e50705c
add target_x flag (not sure this impl is correct)
2023-12-03 17:59:41 +09:00
Kohya S
29b6fa6212
add unsharp mask
2023-11-28 22:33:22 +09:00
Furqanil Taqwa
1bdd83a85f
remove unnecessary debug print
2023-11-28 17:26:27 +07:00
Furqanil Taqwa
1624c239c2
added keep_tokens_separator to dynamically keep tokens from being shuffled
2023-11-28 17:23:55 +07:00
Furqanil Taqwa
4a913ce61e
initialize keep_tokens_separator to dataset config
2023-11-28 17:22:35 +07:00
Kohya S
764e333fa2
make slicing vae compatible with latest diffusers
2023-11-26 18:12:04 +09:00
Kohya S
c61e3bf4c9
make separate U-Net for inference
2023-11-26 18:11:30 +09:00
Kohya S
fc8649d80f
Merge pull request #934 from feffy380/fix-minsnr-vpred-zsnr
...
Fix min-snr-gamma for v-prediction and ZSNR.
2023-11-25 21:19:39 +09:00
Kohya S
6d6d86260b
add Deep Shrink
2023-11-23 19:40:48 +09:00
rockerBOO
c856ea4249
Add attention processor
2023-11-19 12:11:36 -05:00
Kohya S
f312522cef
Merge pull request #913 from KohakuBlueleaf/custom-seperator
...
Add custom separator for shuffle caption
2023-11-19 21:32:01 +09:00
xzuyn
da5a144589
Add PagedAdamW
2023-11-18 07:47:27 -05:00
feffy380
6b3148fd3f
Fix min-snr-gamma for v-prediction and ZSNR.
...
This fixes min-snr for vpred+zsnr by dividing directly by SNR+1.
The old implementation did it in two steps: (min-snr/snr) * (snr/(snr+1)), which causes division by zero when combined with --zero_terminal_snr.
2023-11-07 23:02:25 +01:00
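The two-step versus one-step weighting described in this commit can be sketched in plain Python (a hedged illustration of the quoted formulas, not the repository's actual min-snr weighting code; the function names are made up here):

```python
def min_snr_vpred_weight_old(snr: float, gamma: float = 5.0) -> float:
    # old two-step form: (min(snr, gamma) / snr) * (snr / (snr + 1))
    # raises ZeroDivisionError at the terminal timestep, where
    # --zero_terminal_snr forces snr == 0
    return (min(snr, gamma) / snr) * (snr / (snr + 1))

def min_snr_vpred_weight_fixed(snr: float, gamma: float = 5.0) -> float:
    # fixed one-step form: divide min(snr, gamma) directly by snr + 1
    return min(snr, gamma) / (snr + 1)

# the terminal timestep (snr == 0) now gets a finite weight of 0.0
weights = [min_snr_vpred_weight_fixed(s) for s in (0.0, 0.5, 5.0, 20.0)]
```

Algebraically the two forms are identical for snr > 0; only the snr == 0 boundary case differs.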
rockerBOO
9c591bdb12
Remove unnecessary subset line from collate
2023-11-05 16:58:20 -05:00
rockerBOO
3de9e6c443
Add validation split of datasets
2023-11-05 12:37:44 -05:00
rockerBOO
5b19bda85c
Add validation loss
2023-11-05 12:35:46 -05:00