Ed McManus
de4bb657b0
Update utils.py
...
Cleanup
2024-09-19 14:38:32 -07:00
Ed McManus
3957372ded
Retain alpha in pil_resize
...
Currently the alpha channel is dropped by `pil_resize()` when `--alpha_mask` is supplied and the image width does not exceed the bucket size.
This codepath is entered on the last line, here:
```
def trim_and_resize_if_required(
    random_crop: bool, image: np.ndarray, reso, resized_size: Tuple[int, int]
) -> Tuple[np.ndarray, Tuple[int, int], Tuple[int, int, int, int]]:
    image_height, image_width = image.shape[0:2]
    original_size = (image_width, image_height)  # size before resize

    if image_width != resized_size[0] or image_height != resized_size[1]:
        # resize the image
        if image_width > resized_size[0] and image_height > resized_size[1]:
            # downscale with cv2 because we want INTER_AREA
            image = cv2.resize(image, resized_size, interpolation=cv2.INTER_AREA)
        else:
            image = pil_resize(image, resized_size)
```
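The fix retains the alpha channel through the PIL path. A minimal sketch of an alpha-preserving `pil_resize` (the signature and mode handling here are illustrative assumptions, not the repo's actual helper):

```python
import numpy as np
from PIL import Image

def pil_resize(image: np.ndarray, size: tuple) -> np.ndarray:
    # Sketch only: choose RGBA when a 4th channel is present so the
    # alpha plane survives the round-trip through PIL.
    has_alpha = image.ndim == 3 and image.shape[2] == 4
    mode = "RGBA" if has_alpha else "RGB"
    pil_image = Image.fromarray(image, mode=mode)
    # PIL's size argument is (width, height), matching resized_size above
    resized = pil_image.resize(size, Image.LANCZOS)
    return np.array(resized)
```

Converting with an explicit `RGBA` mode is the key point: `Image.fromarray` on a 4-channel array otherwise risks the alpha plane being discarded by a later `RGB` conversion.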
2024-09-19 14:30:03 -07:00
Kohya S
b844c70d14
Merge branch 'dev' into sd3
2024-09-19 21:51:33 +09:00
Maru-mee
e7040669bc
Bug fix: alpha_mask load
2024-09-19 15:47:06 +09:00
Kohya S
1286e00bb0
fix to call train/eval in schedulefree #1605
2024-09-18 21:31:54 +09:00
Kohya S.
bbd160b4ca
sd3 schedule free opt ( #1605 )
...
* New ScheduleFree support for Flux (#1600 )
* init
* use no schedule
* fix typo
* update for eval()
* fix typo
* update
* Update train_util.py
* Update requirements.txt
* update sfwrapper WIP
* no need to check schedulefree optimizer
* remove debug print
* comment out schedulefree wrapper
* update readme
---------
Co-authored-by: 青龍聖者@bdsqlsz <865105819@qq.com>
2024-09-18 07:55:04 +09:00
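Schedule-free optimizers keep two sets of weights, a fast iterate used for gradient steps and a slow running average used for evaluation, which is why the code must call the optimizer's `train()`/`eval()` when switching modes (the subject of the follow-up fix in #1605). A toy illustration of that mechanism (deliberately simplified, not the real `schedulefree` API):

```python
class ScheduleFreeToy:
    """Toy model of a schedule-free optimizer for a single scalar
    parameter: train()/eval() decide whether the visible parameter
    holds the fast iterate or the slow average."""

    def __init__(self, value: float):
        self.z = value      # fast iterate, updated by gradient steps
        self.x = value      # slow running average, used for evaluation
        self.param = value  # the value the "model" actually sees
        self.t = 0

    def step(self, grad: float, lr: float = 0.1) -> None:
        self.t += 1
        self.z -= lr * grad
        c = 1.0 / self.t
        self.x = (1.0 - c) * self.x + c * self.z  # running average of iterates
        self.param = self.z

    def train(self) -> None:
        self.param = self.z  # gradient steps must act on the fast iterate

    def eval(self) -> None:
        self.param = self.x  # evaluation/saving should see the average
```

Forgetting `eval()` before sampling or checkpointing would evaluate the noisy fast iterate instead of the averaged weights, which is the class of bug the fix addresses.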
Kohya S
96c677b459
fix to work linear/cosine lr scheduler, closes #1602 ref #1393
2024-09-16 10:42:09 +09:00
Plat
a823fd9fb8
Improve wandb logging ( #1576 )
...
* fix: wrong training steps were recorded to wandb, and no log was sent when logging_dir was not specified
* fix: checking of whether wandb is enabled
* feat: log images to wandb with their positive prompt as captions
* feat: logging sample images' caption for sd3 and flux
* fix: import wandb before use
2024-09-11 22:21:16 +09:00
Kohya S
c7c666b182
fix typo
2024-09-11 22:12:31 +09:00
cocktailpeanut
8311e88225
typo fix
2024-09-11 09:02:29 -04:00
Kohya S
eaafa5c9da
Merge branch 'dev' into sd3
2024-09-11 21:46:21 +09:00
Kohya S
6dbfd47a59
Fix to work PIECEWISE_CONSTANT, update requirements.txt and README #1393
2024-09-11 21:44:36 +09:00
青龍聖者@bdsqlsz
fd68703f37
Add New lr scheduler ( #1393 )
...
* add new lr scheduler
* fix bugs and use num_cycles / 2
* Update requirements.txt
* add num_cycles for min lr
* keep PIECEWISE_CONSTANT
* allow using a float for warmup or decay ratio
* Update train_util.py
2024-09-11 21:25:45 +09:00
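The PR above adds scheduler variants such as a cosine decay toward a minimum-LR floor with configurable `num_cycles`. A hypothetical sketch of that schedule shape (names, defaults, and the exact formula are illustrative assumptions, not the repo's implementation):

```python
import math

def cosine_with_min_lr(step: int, warmup_steps: int, total_steps: int,
                       num_cycles: float = 0.5, min_lr_ratio: float = 0.1) -> float:
    """Return an LR multiplier: linear warmup, then cosine decay that
    bottoms out at min_lr_ratio instead of zero."""
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    # num_cycles = 0.5 gives a single half-cosine from 1.0 down to the floor
    cosine = 0.5 * (1.0 + math.cos(math.pi * num_cycles * 2.0 * progress))
    return min_lr_ratio + (1.0 - min_lr_ratio) * cosine
```

With `num_cycles = 0.5` the multiplier decays from 1.0 at the end of warmup to `min_lr_ratio` at the final step, which is the usual motivation for halving `num_cycles` relative to a full-period cosine.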
Kohya S
ce144476cf
Merge branch 'dev' into sd3
2024-09-07 10:59:22 +09:00
Kohya S
62ec3e6424
Merge branch 'main' into dev
2024-09-07 10:52:49 +09:00
Kohya S
0005867ba5
update README, format code
2024-09-07 10:45:18 +09:00
Kohya S.
16bb5699ac
Merge pull request #1426 from sdbds/resize
...
Replace cv2 resize with PIL resize
2024-09-07 10:22:52 +09:00
Kohya S.
319e4d9831
Merge pull request #1433 from millie-v/sample-image-without-cuda
...
Generate sample images without having CUDA (such as on Macs)
2024-09-07 10:19:55 +09:00
Kohya S
b65ae9b439
T5XXL LoRA training, fp8 T5XXL support
2024-09-04 21:33:17 +09:00
Kohya S
4f6d915d15
update help and README
2024-09-01 19:12:29 +09:00
sdbds
25c9040f4f
Update flux_train_utils.py
2024-08-31 19:53:59 +08:00
Nando Metzger
2a3aefb4e4
Update train_util.py, bug fix
2024-08-30 08:15:05 +02:00
Kohya S
3be712e3e0
feat: Update direct loading fp8 ckpt for LoRA training
2024-08-27 21:40:02 +09:00
Kohya S
0087a46e14
FLUX.1 LoRA supports CLIP-L
2024-08-27 19:59:40 +09:00
Kohya S
72287d39c7
feat: Add shift option to --timestep_sampling in FLUX.1 fine-tuning and LoRA training
2024-08-25 16:01:24 +09:00
Kohya S
2e89cd2cc6
Fix issue with attention mask not being applied in single blocks
2024-08-24 12:39:54 +09:00
Kohya S
81411a398e
speed up getting image sizes
2024-08-22 22:02:29 +09:00
Kohya S
2d8fa3387a
Fix to remove zero pad for t5xxl output
2024-08-22 19:56:27 +09:00
kohya-ss
98c91a7625
Fix bug in FLUX multi GPU training
2024-08-22 12:37:41 +09:00
Kohya S
e1cd19c0c0
add stochastic rounding, fix single block
2024-08-21 21:04:10 +09:00
Kohya S
2b07a92c8d
Fix error in applying mask in Attention and add LoRA converter script
2024-08-21 12:30:23 +09:00
Kohya S
7e459c00b2
Update T5 attention mask handling in FLUX
2024-08-21 08:02:33 +09:00
Kohya S
6ab48b09d8
feat: Support multi-resolution training with caching latents to disk
2024-08-20 21:39:43 +09:00
Kohya S
486fe8f70a
feat: reduce memory usage and add memory efficient option for model saving
2024-08-19 22:30:24 +09:00
Kohya S
6e72a799c8
reduce peak VRAM usage by keeping some blocks off CUDA
2024-08-19 21:55:28 +09:00
Kohya S
ef535ec6bb
add memory efficient training for FLUX.1
2024-08-18 16:54:18 +09:00
Kohya S
400955d3ea
add fine tuning FLUX.1 (WIP)
2024-08-17 15:36:18 +09:00
Kohya S
e45d3f8634
add merge LoRA script
2024-08-16 22:19:21 +09:00
Kohya S
3921a4efda
add t5xxl max token length, support schnell
2024-08-16 17:06:05 +09:00
Kohya S
7db4222119
add sample image generation during training
2024-08-14 22:15:26 +09:00
Kohya S
56d7651f08
add experimental split mode for FLUX
2024-08-13 22:28:39 +09:00
kohya-ss
f5ce754bc2
Merge branch 'dev' into sd3
2024-08-13 21:00:44 +09:00
Kohya S
d25ae361d0
fix apply_t5_attn_mask to work
2024-08-11 19:07:07 +09:00
Kohya S
8a0f12dde8
update FLUX LoRA training
2024-08-10 23:42:05 +09:00
Kohya S
808d2d1f48
fix typos
2024-08-09 23:02:51 +09:00
Kohya S
36b2e6fc28
add FLUX.1 LoRA training
2024-08-09 22:56:48 +09:00
Kohya S
da4d0fe016
support attn mask for l+g/t5
2024-08-05 20:51:34 +09:00
Kohya S
231df197dd
Fix npz path for verification
2024-08-05 20:26:30 +09:00
Kohya S
002d75179a
sample images for training
2024-07-29 23:18:34 +09:00
Kohya S
1a977e847a
fix typos
2024-07-27 13:51:50 +09:00