Commit Graph

39 Commits

Author SHA1 Message Date
bmaltais 2eddd64b90 Merge latest sd-scripts updates 2023-04-01 07:14:25 -04:00
bmaltais 9f6e0c1c8f Fix issue with LyCORIS version 2023-03-30 07:23:37 -04:00
Bernard Maltais ac5eccbaca Add MacOS support 2023-03-24 22:39:45 -04:00
bmaltais acf7d4785f Add support for custom user gui startup files 2023-03-24 13:26:29 -04:00
bmaltais 1c8d901c3b Update to latest sd-scripts 2023-03-21 20:20:57 -04:00
bmaltais 9f8c1e9660
Merge pull request #399 from zrma/feature/fix_gui.sh
modify gui.sh to validate requirements and apply args
2023-03-19 20:05:41 -04:00
zrma 6bfdbaf3aa
modify gui.sh to validate requirements and apply args 2023-03-19 23:22:42 +09:00
bmaltais baf009d2b1 Fix basic captioning logic 2023-03-15 19:31:52 -04:00
bmaltais 91e19ca9d9 Fix issue with kohya locon not training the convolution layers 2023-03-12 20:36:58 -04:00
bmaltais 79c2c2debe Add validation that all requirements are met 2023-03-12 10:11:41 -04:00
bmaltais 2deddd5f3c Update to latest sd-scripts 2023-03-09 11:06:59 -05:00
bmaltais 819a5718ea Add new lora_resize tool under tools 2023-03-04 09:52:14 -05:00
bmaltais c29f96a1f5 Add extract locon tool 2023-03-04 08:04:49 -05:00
bmaltais 5498539fda Fix typos 2023-03-01 19:20:05 -05:00
bmaltais 1e3055c895 Update tensorboard 2023-03-01 13:14:47 -05:00
bmaltais 60ad22733c Update to latest code version 2023-02-23 19:21:30 -05:00
bmaltais f9863e3950 Add D-Adaptation to other trainers 2023-02-16 19:33:46 -05:00
bmaltais 261b6790ee Update tool 2023-02-12 07:02:05 -05:00
bmaltais a49fb9cb8c 2023/02/11 (v20.7.2):
- ``lora_interrogator.py`` is added in ``networks`` folder. See ``python networks\lora_interrogator.py -h`` for usage.
        - For LoRAs whose activation word is unknown, this script compares the Text Encoder output with the LoRA applied against the output without it, to find which tokens are affected by the LoRA. Hopefully this reveals the activation word. LoRAs trained with captions do not seem to be interrogatable.
        - Batch size can be large (like 64 or 128); a hedged invocation sketch follows this entry.
    - ``train_textual_inversion.py`` now supports multiple init words.
    - The following feature has been reverted to its previous behavior. Sorry for the confusion:
        > The number of data items in each batch is limited to the number of actual (non-duplicated) images. Because a bucket may contain fewer actual images than the batch size, the batch would otherwise be padded with duplicated images.
    - Add a new tool to sort, group, and average-crop images in a dataset
2023-02-11 11:59:38 -05:00
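As a rough sketch of how the interrogator above might be called: only the script path and `-h` are confirmed by the entry; the `--sd_model`, `--model`, and `--batch_size` flag names and all file paths are assumptions that should be checked against the help output.

```bash
# Print the confirmed usage information first.
python networks/lora_interrogator.py -h

# Hypothetical invocation; flag names and paths are assumptions, not confirmed by the entry.
python networks/lora_interrogator.py \
  --sd_model ./models/v1-5-pruned.safetensors \
  --model ./output/my_lora.safetensors \
  --batch_size 64
```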
bmaltais 7bc93821a0 2023/02/09 (v20.7.1)
- Caption dropout is supported in ``train_db.py``, ``fine_tune.py`` and ``train_network.py``. Thanks to forestsource!
        - ``--caption_dropout_rate`` option specifies the dropout rate for captions (0~1.0, 0.1 means 10% chance for dropout). If dropout occurs, the image is trained with the empty caption. Default is 0 (no dropout).
        - ``--caption_dropout_every_n_epochs`` option specifies how many epochs to drop captions. If ``3`` is specified, in epoch 3, 6, 9 ..., images are trained with all captions empty. Default is None (no dropout).
        - ``--caption_tag_dropout_rate`` option specifies the dropout rate for tags (comma-separated tokens) (0~1.0, 0.1 means 10% chance for dropout). If dropout occurs, the tag is removed from the caption. If the ``--keep_tokens`` option is set, these tokens (tags) are not dropped. Default is 0 (no dropout). A hedged example invocation follows this entry.
        - The bulk image downsampling script is added. Documentation is [here](https://github.com/kohya-ss/sd-scripts/blob/main/train_network_README-ja.md#%E7%94%BB%E5%83%8F%E3%83%AA%E3%82%B5%E3%82%A4%E3%82%BA%E3%82%B9%E3%82%AF%E3%83%AA%E3%83%97%E3%83%88) (in Japanese). Thanks to bmaltais!
        - Typo check is added. Thanks to shirayu!
    - Add an option to auto-launch the GUI in a browser and set the server port. Use either `gui.ps1 --inbrowser --server_port 3456` or `gui.cmd -inbrowser -server_port 3456`
2023-02-09 19:17:24 -05:00
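A minimal sketch of how the dropout options above might be combined in a training invocation; only the three dropout flags and `--keep_tokens` come from the entry, while the remaining arguments and all paths are placeholder assumptions.

```bash
# Sketch only: model/data paths and the non-dropout arguments are placeholders.
# --caption_dropout_rate 0.1         -> 10% chance the whole caption is replaced by an empty one
# --caption_dropout_every_n_epochs 3 -> epochs 3, 6, 9, ... are trained with all captions empty
# --caption_tag_dropout_rate 0.1     -> 10% chance each comma-separated tag is dropped
# --keep_tokens 1                    -> the first tag is never dropped
accelerate launch train_network.py \
  --pretrained_model_name_or_path ./models/v1-5-pruned.safetensors \
  --train_data_dir ./train/images \
  --output_dir ./output \
  --network_module networks.lora \
  --caption_dropout_rate 0.1 \
  --caption_dropout_every_n_epochs 3 \
  --caption_tag_dropout_rate 0.1 \
  --keep_tokens 1
```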
bmaltais 90c0d55457 2023/02/09 (v20.7.1)
- Caption dropout is supported in ``train_db.py``, ``fine_tune.py`` and ``train_network.py``. Thanks to forestsource!
        - ``--caption_dropout_rate`` option specifies the dropout rate for captions (0~1.0, 0.1 means 10% chance for dropout). If dropout occurs, the image is trained with the empty caption. Default is 0 (no dropout).
        - ``--caption_dropout_every_n_epochs`` option specifies how many epochs to drop captions. If ``3`` is specified, in epoch 3, 6, 9 ..., images are trained with all captions empty. Default is None (no dropout).
        - ``--caption_tag_dropout_rate`` option specifies the dropout rate for tags (comma-separated tokens) (0~1.0, 0.1 means 10% chance for dropout). If dropout occurs, the tag is removed from the caption. If the ``--keep_tokens`` option is set, these tokens (tags) are not dropped. Default is 0 (no dropout).
        - The bulk image downsampling script is added. Documentation is [here](https://github.com/kohya-ss/sd-scripts/blob/main/train_network_README-ja.md#%E7%94%BB%E5%83%8F%E3%83%AA%E3%82%B5%E3%82%A4%E3%82%BA%E3%82%B9%E3%82%AF%E3%83%AA%E3%83%97%E3%83%88) (in Japanese). Thanks to bmaltais!
        - Typo check is added. Thanks to shirayu!
    - Add an option to auto-launch the GUI in a browser and set the server port. Use either `gui.ps1 --inbrowser --server_port 3456` or `gui.cmd -inbrowser -server_port 3456`
2023-02-09 19:17:17 -05:00
bmaltais 09d3a72cd8 Adding support for caption dropout 2023-02-07 20:58:35 -05:00
bmaltais 8d559ded18 * 2023/02/06 (v20.7.0)
- ``--bucket_reso_steps`` and ``--bucket_no_upscale`` options are added to training scripts (fine tuning, DreamBooth, LoRA and Textual Inversion) and ``prepare_buckets_latents.py``.
    - ``--bucket_reso_steps`` sets the step size for bucket resolutions in aspect ratio bucketing. Default is 64, same as before.
        - Any value greater than or equal to 1 can be specified; 64 is highly recommended, and a value divisible by 8 is advised.
        - If a value less than 64 is specified, padding will occur inside the U-Net; the effect on results is unknown.
        - If you specify a value that is not divisible by 8, it is truncated to a multiple of 8 inside the VAE, because the latent size is 1/8 of the image size.
    - If the ``--bucket_no_upscale`` option is specified, images smaller than the bucket size are processed without upscaling (see the sketch after this entry).
        - Internally, a bucket smaller than the image size is created (for example, if the image is 300x300 and ``bucket_reso_steps=64``, the bucket is 256x256). The image will be trimmed.
        - Implementation of [#130](https://github.com/kohya-ss/sd-scripts/issues/130).
        - Images with an area larger than the maximum size specified by ``--resolution`` are downsampled to the max bucket size.
    - The number of data items in each batch is now limited to the number of actual (non-duplicated) images. Because a bucket may contain fewer actual images than the batch size, the batch would otherwise be padded with duplicated images.
    - ``--random_crop`` now also works with buckets enabled.
        - Instead of always cropping the center of the image, the crop is shifted left, right, up, and down, so training also covers the edges of the image.
        - Implementation of discussion [#34](https://github.com/kohya-ss/sd-scripts/discussions/34).
2023-02-06 11:04:07 -05:00
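A hedged sketch of how the new bucket options above might be passed to a training run; `--bucket_reso_steps`, `--bucket_no_upscale`, and `--random_crop` come from the entry, while `--enable_bucket`, the paths, and the remaining arguments are placeholder assumptions.

```bash
# Sketch only: paths and non-bucket arguments are placeholders.
# With --bucket_reso_steps 64 and --bucket_no_upscale, a 300x300 image is assigned to a
# 256x256 bucket (each side rounded down to a multiple of 64) and trimmed instead of upscaled.
accelerate launch train_network.py \
  --pretrained_model_name_or_path ./models/v1-5-pruned.safetensors \
  --train_data_dir ./train/images \
  --output_dir ./output \
  --network_module networks.lora \
  --enable_bucket \
  --bucket_reso_steps 64 \
  --bucket_no_upscale \
  --random_crop
```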
bmaltais cbfc311687 Integrate new bucket parameters in GUI 2023-02-05 20:07:00 -05:00
bmaltais 20e62af1a6 Update to latest kohya_ss sd-script code 2023-02-03 14:40:03 -05:00
bmaltais 2ec7432440 Fix issue 81:
https://github.com/bmaltais/kohya_ss/issues/81
2023-01-29 11:17:30 -05:00
bmaltais bc8a4757f8 Sync with kohya 2023/01/29 update 2023-01-29 11:10:06 -05:00
bmaltais 6aed2bb402 Add support for new arguments:
- max_train_epochs
- max_data_loader_n_workers (a hedged example of both follows this entry)
Move some of the code to the common GUI library.
2023-01-15 11:05:22 -05:00
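A short sketch of the two new arguments appended to a placeholder training command; the argument names come from the entry, while the values and the surrounding command are assumptions.

```bash
# Sketch only: values and surrounding arguments are placeholders.
# --max_train_epochs stops training after the given number of epochs.
# --max_data_loader_n_workers caps the number of DataLoader worker processes.
accelerate launch train_network.py \
  --pretrained_model_name_or_path ./models/v1-5-pruned.safetensors \
  --train_data_dir ./train/images \
  --output_dir ./output \
  --network_module networks.lora \
  --max_train_epochs 10 \
  --max_data_loader_n_workers 4
```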
bmaltais b8100b1a0a - Add support for `--clip_skip` option
- Add missing `detect_face_rotate.py` to tools folder
- Add `gui.cmd` for easy start of GUI
2023-01-05 19:16:13 -05:00
bmaltais 2cdf4cf741 - Fix conversion tool issue when the source was an SD1.x diffusers model
- Other minor code and GUI fixes
2022-12-23 07:56:35 -05:00
bmaltais 706dfe157f
Merge dreambooth and finetuning into one repo to align with the new kohya_ss repo (#10)
* Merge both dreambooth and finetune back into one repo
2022-12-20 09:15:17 -05:00
bmaltais c90aa2cc61 - Fix file/folder opening behind the browser window
- Add WD14 and BLIP captioning to utilities
- Improve overall GUI layout
2022-12-19 09:22:52 -05:00
bmaltais 0ca93a7aa7 v18.1: Model conversion utility 2022-12-18 13:11:10 -05:00
bmaltais 5f1a465a45 Update to v17
New GUI
2022-12-13 13:49:14 -05:00
bmaltais 449a35368f Update model conversion util 2022-12-05 11:13:41 -05:00
bmaltais e8db30b9d1 Publish v15 2022-12-05 10:49:02 -05:00
bmaltais 37133218bf Adding conversion tool doc 2022-12-03 06:28:27 -05:00
bmaltais 231030b3b2 Add tool to convert diffusers 2.0 model to ckpt 2022-12-03 06:23:25 -05:00
bmaltais 621dabcadf Add prune tool 2022-12-01 19:06:33 -05:00