From 917087c5b6ff7e7efadf0f9584defd66c4d899eb Mon Sep 17 00:00:00 2001
From: Kaleith
Date: Mon, 12 Sep 2022 17:14:05 +0200
Subject: [PATCH 1/3] Update README

- Documented a couple of new optional flags
- Added a link to a third party repository of embeddings
- Reworded a few parts
- Fixed some typos

---
 README.md | 25 ++++++++++++++-----------
 1 file changed, 14 insertions(+), 11 deletions(-)

diff --git a/README.md b/README.md
index 9bde1f2a..f429f5a4 100644
--- a/README.md
+++ b/README.md
@@ -19,6 +19,7 @@ A browser interface based on Gradio library for Stable Diffusion.
 - Textual Inversion
 - Extras tab with:
   - GFPGAN, neural network that fixes faces
+  - CodeFormer, face restoration tool as an alternative to GFPGAN
   - RealESRGAN, neural network upscaler
   - ESRGAN, neural network with a lot of third party models
 - Resizing aspect ratio options
@@ -53,10 +54,10 @@ can obtain it from the following places:
 - [file storage](https://drive.yerf.org/wl/?id=EBfTrmcCCUAGaQBXVIj5lJmEhjoP1tgl)
 - magnet:?xt=urn:btih:3a4a612d75ed088ea542acac52f9f45987488d1c&dn=sd-v1-4.ckpt&tr=udp%3a%2f%2ftracker.openbittorrent.com%3a6969%2fannounce&tr=udp%3a%2f%2ftracker.opentrackr.org%3a1337
 
-You optionally can use GFPGAN to improve faces, then you'll need to download the model from [here](https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth).
+You can optionally use GFPGAN to improve faces; to do so, you'll need to download the model from [here](https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth) and place it in the same directory as `webui.bat`.
 
 To use ESRGAN models, put them into ESRGAN directory in the same location as webui.py. A file will be loaded
-as model if it has .pth extension. Grab models from the [Model Database](https://upscale.wiki/wiki/Model_Database).
+as a model if it has a .pth extension, and it will show up with its name in the UI. Grab models from the [Model Database](https://upscale.wiki/wiki/Model_Database).
 > Note: RealESRGAN models are not ESRGAN models, they are not compatible. Do not download RealESRGAN models. Do not place RealESRGAN into the directory with ESRGAN models. Thank you.
@@ -85,10 +86,10 @@ may help (but I still recommend you to just use the recommended version of pytho
 floating point numbers (Known issue with 16xx cards). You must use `--precision full --no-half` in addition to
 command line arguments (set them using `set COMMANDLINE_ARGS`, see below), and the model will take much more
 space in VRAM (you will likely have to also use at least `--medvram`).
-- installer creates python virtual environment, so none of installed modules will affect your system installation of python if
+- the installer creates a python virtual environment, so none of the installed modules will affect your system installation of python if
 you had one prior to installing this.
 - About _"You must install this exact version"_ from the instructions above: you can use any version of python you like,
-and it will likely work, but if you want to seek help about things not working, I will not offer help unless you this
+and it will likely work, but if you want to seek help about things not working, I will not offer help unless you use this
 exact version for my sanity.
 
 #### How to run with custom parameters
@@ -96,7 +97,7 @@ exact version for my sanity.
 
 It's possible to edit `set COMMANDLINE_ARGS=` line in `webui.bat` to run the program with different command line
 arguments, but that may lead to inconveniences when the file is updated in the repository.
 
-The recommndended way is to use another .bat file named anything you like, set the parameters you want in it, and run webui.bat from it.
+The recommended way is to use another .bat file named anything you like, set the parameters you want in it, and run webui.bat from it.
 A `webui-user.bat` file included into the repository does exactly this.
 Here is an example that runs the prgoram with `--opt-split-attention` argument:
@@ -109,7 +110,7 @@ set COMMANDLINE_ARGS=--opt-split-attention
 call webui.bat
 ```
 
-Another example, this file will run the program with custom python path, a different model named `a.ckpt` and without virtual environment:
+Another example: this file will run the program with a custom python path, a different model named `a.ckpt` and without a virtual environment:
 
 ```commandline
 @echo off
@@ -137,7 +138,7 @@ also but the effect will likely be barely noticeable.
 
 ### Running online
 
 Use `--share` option to run online. You will get a xxx.app.gradio link. This is the intended way to use the
-program in collabs.
+program in collabs. You may also use `--gradio-username abc` and `--gradio-password xyz` to set up authentication for said gradio shared instance.
 
 Use `--listen` to make the server listen to network connections. This will allow computers on local newtork
 to access the UI, and if you configure port forwarding, also computers on the internet.
@@ -148,19 +149,21 @@ Defaults to port 7860 if available.
 
 ### Google collab
 
-If you don't want or can't run locally, here is google collab that allows you to run the webui:
+If you don't want or can't run locally, here is a Google colab that allows you to run the webui:
 
 https://colab.research.google.com/drive/1Iy-xW9t1-OQWhb0hNxueGij8phCyluOh
 
 ### Textual Inversion
 
-To make use of pretrained embeddings, create `embeddings` directory (in the same palce as `webui.py`)
+To make use of pretrained embeddings, create an `embeddings` directory (in the same place as `webui.py`)
 and put your embeddings into it. They must be .pt files, each with only one trained embedding,
-and the filename (without .pt) will be the term you'd use in prompt to get that embedding.
+and the filename (without .pt) will be the term you'll use in the prompt to get that embedding.
 As an example, I trained one for about 5000 steps: https://files.catbox.moe/e2ui6r.pt; it does not produce
 very good results, but it does work. Download and rename it to Usada Pekora.pt, and put it into embeddings dir
 and use Usada Pekora in prompt.
 
+You may also try some from the growing library of embeddings at https://huggingface.co/sd-concepts-library, downloading the `learned_embeds.bin` files, renaming them to the related term found in `token_identifier.txt` (without the < and >) and putting them in your `embeddings` directory.
+
 ### How to change UI defaults?
 
 After running once, a `ui-config.json` file appears in webui directory:
@@ -283,7 +286,7 @@ wget https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pt
 
 After that follow the instructions in the `Manual instructions` section starting at step `:: clone repositories
 for Stable Diffusion and (optionally) CodeFormer`.
 
-### img2img alterantive test
+### img2img alternative test
 - see [this post](https://www.reddit.com/r/StableDiffusion/comments/xboy90/a_better_way_of_doing_img2img_by_finding_the/) on ebaumsworld.com for context.
 - find it in scripts section
 - put description of input image into the Original prompt field

From 12e326ae9a6c43940ae51a9b5ce6b009b9c16f18 Mon Sep 17 00:00:00 2001
From: Kaleith
Date: Mon, 12 Sep 2022 18:50:57 +0200
Subject: [PATCH 2/3] Updated README

- small update to reflect the revised commit for gradio authentication

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index f429f5a4..4c699d15 100644
--- a/README.md
+++ b/README.md
@@ -138,7 +138,7 @@ also but the effect will likely be barely noticeable.
 ### Running online
 
 Use `--share` option to run online. You will get a xxx.app.gradio link. This is the intended way to use the
-program in collabs. You may also use `--gradio-username abc` and `--gradio-password xyz` to set up authentication for said gradio shared instance.
+program in collabs. You may set up authentication for said gradio shared instance with the flag `--gradio-auth username:password`, optionally providing multiple sets of usernames and passwords separated by commas.
 
 Use `--listen` to make the server listen to network connections. This will allow computers on local newtork
 to access the UI, and if you configure port forwarding, also computers on the internet.

From 7e03c71346e8edb9747a5dec3bc81c77160a144f Mon Sep 17 00:00:00 2001
From: Kaleith
Date: Mon, 12 Sep 2022 19:33:02 +0200
Subject: [PATCH 3/3] Update README.md

Revision of the previous commit to make it less misleading and mention the option of using .bin files

---
 README.md | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/README.md b/README.md
index 4c699d15..acbdf435 100644
--- a/README.md
+++ b/README.md
@@ -155,14 +155,14 @@ https://colab.research.google.com/drive/1Iy-xW9t1-OQWhb0hNxueGij8phCyluOh
 ### Textual Inversion
 
 To make use of pretrained embeddings, create an `embeddings` directory (in the same place as `webui.py`)
-and put your embeddings into it. They must be .pt files, each with only one trained embedding,
-and the filename (without .pt) will be the term you'll use in the prompt to get that embedding.
+and put your embeddings into it. They must be either .pt or .bin files, each with only one trained embedding,
+and the filename (without .pt/.bin) will be the term you'll use in the prompt to get that embedding.
 
 As an example, I trained one for about 5000 steps: https://files.catbox.moe/e2ui6r.pt; it does not produce
-very good results, but it does work. Download and rename it to Usada Pekora.pt, and put it into embeddings dir
-and use Usada Pekora in prompt.
+very good results, but it does work. To try it out, download the file, rename it to `Usada Pekora.pt`, put it into the `embeddings` dir
+and use `Usada Pekora` in the prompt.
-You may also try some from the growing library of embeddings at https://huggingface.co/sd-concepts-library, downloading the `learned_embeds.bin` files, renaming them to the related term found in `token_identifier.txt` (without the < and >) and putting them in your `embeddings` directory.
+You may also try some from the growing library of embeddings at https://huggingface.co/sd-concepts-library, downloading one of the `learned_embeds.bin` files, renaming it to the term you want to use for it in the prompt (be sure to keep the .bin extension) and putting it in your `embeddings` directory.
 
 ### How to change UI defaults?
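
Combining the flags this patch series documents with the `webui-user.bat` approach from the custom-parameters section, a launcher that shares the UI online with authentication could look like the following sketch (the usernames and passwords are placeholders, not real credentials):

```commandline
@echo off

:: share the UI via a gradio link, requiring one of two username:password pairs to log in
set COMMANDLINE_ARGS=--share --gradio-auth user1:secret1,user2:secret2

call webui.bat
```

As with the other examples, keeping these flags in a separate .bat file avoids conflicts when `webui.bat` itself is updated from the repository.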