Stable Diffusion web UI

A browser interface based on the Gradio library for Stable Diffusion.

Features

Detailed feature showcase with images:

  • Original txt2img and img2img modes
  • One-click install and run script (but you still must install Python and git)
  • Outpainting
  • Inpainting
  • Color Sketch
  • Prompt Matrix
  • Stable Diffusion Upscale
  • Attention, specify parts of text that the model should pay more attention to
    • a man in a ((tuxedo)) - will pay more attention to tuxedo
    • a man in a (tuxedo:1.21) - alternative syntax (a weight sketch follows this list)
    • select text and press Ctrl+Up or Ctrl+Down to automatically adjust attention to selected text (code contributed by anonymous user)
  • Loopback, run img2img processing multiple times
  • X/Y/Z plot, a way to draw a 3-dimensional plot of images with different parameters
  • Textual Inversion
    • have as many embeddings as you want and use any names you like for them
    • use multiple embeddings with different numbers of vectors per token
    • works with half precision floating point numbers
    • train embeddings on 8GB (also reports of 6GB working)
  • Extras tab with:
    • GFPGAN, neural network that fixes faces
    • CodeFormer, face restoration tool as an alternative to GFPGAN
    • RealESRGAN, neural network upscaler
    • ESRGAN, neural network upscaler with a lot of third party models
    • SwinIR and Swin2SR, neural network upscalers
    • LDSR, Latent diffusion super resolution upscaling
  • Resizing aspect ratio options
  • Sampling method selection
    • Adjust sampler eta values (noise multiplier)
    • More advanced noise setting options
  • Interrupt processing at any time
  • 4GB video card support (also reports of 2GB working)
  • Correct seeds for batches
  • Live prompt token length validation
  • Generation parameters
    • parameters you used to generate images are saved with that image
    • in PNG chunks for PNG, in EXIF for JPEG (a reading sketch follows this list)
    • can drag the image to PNG info tab to restore generation parameters and automatically copy them into UI
    • can be disabled in settings
    • drag and drop an image/text-parameters to promptbox
  • Read Generation Parameters Button, loads parameters in promptbox to UI
  • Settings page
  • Running arbitrary python code from UI (must run with --allow-code to enable)
  • Mouseover hints for most UI elements
  • Possible to change defaults/min/max/step values for UI elements via text config
  • Tiling support, a checkbox to create images that can be tiled like textures
  • Progress bar and live image generation preview
    • Can use a separate neural network to produce previews with almost no VRAM or compute requirement
  • Negative prompt, an extra text field that allows you to list what you don't want to see in the generated image
  • Styles, a way to save parts of a prompt and easily apply them via dropdown later
  • Variations, a way to generate the same image but with tiny differences
  • Seed resizing, a way to generate the same image but at a slightly different resolution
  • CLIP interrogator, a button that tries to guess prompt from an image
  • Prompt Editing, a way to change prompt mid-generation, say to start making a watermelon and switch to anime girl midway
  • Batch Processing, process a group of files using img2img
  • Img2img Alternative, reverse Euler method of cross attention control
  • Highres Fix, a convenience option to produce high-resolution pictures in one click without the usual distortions
  • Reloading checkpoints on the fly
  • Checkpoint Merger, a tab that allows you to merge up to 3 checkpoints into one
  • Custom scripts with many extensions from the community
  • Composable-Diffusion, a way to use multiple prompts at once
    • separate prompts using uppercase AND
    • also supports weights for prompts: a cat :1.2 AND a dog AND a penguin :2.2
  • No token limit for prompts (original stable diffusion lets you use up to 75 tokens)
  • DeepDanbooru integration, creates danbooru style tags for anime prompts
  • xformers, major speed increase for select cards (add --xformers to commandline args)
  • via extension: History tab: view, direct and delete images conveniently within the UI
  • Generate forever option
  • Training tab
    • hypernetworks and embeddings options
    • Preprocessing images: cropping, mirroring, autotagging using BLIP or deepdanbooru (for anime)
  • Clip skip
  • Hypernetworks
  • Loras (same as Hypernetworks but prettier)
  • A separate UI where you can choose, with preview, which embeddings, hypernetworks or Loras to add to your prompt
  • Can select to load a different VAE from settings screen
  • Estimated completion time in progress bar
  • API (a request sketch follows this list)
  • Support for dedicated inpainting model by RunwayML
  • via extension: Aesthetic Gradients, a way to generate images with a specific aesthetic by using clip images embeds (implementation of https://github.com/vicgalle/stable-diffusion-aesthetic-gradients)
  • Stable Diffusion 2.0 support - see wiki for instructions
  • Alt-Diffusion support - see wiki for instructions
  • Now without any bad letters!
  • Load checkpoints in safetensors format
  • Eased resolution restriction: the generated image's dimensions must be a multiple of 8 rather than 64
  • Now with a license!
  • Reorder elements in the UI from settings screen
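
A quick note on the attention syntax above: each level of parentheses multiplies the emphasis on the enclosed text by 1.1, which is why ((tuxedo)) and (tuxedo:1.21) are equivalent (1.1 × 1.1 = 1.21). A minimal Python sketch of that arithmetic, assuming the 1.1-per-level convention (an illustration, not the web UI's actual parser):

# Illustrative sketch of the emphasis arithmetic, not the web UI's parser.
# Assumption: each pair of parentheses multiplies attention by 1.1.
def emphasis_weight(depth: int, base: float = 1.1) -> float:
    """Effective attention multiplier for text nested `depth` parentheses deep."""
    return round(base ** depth, 4)

print(emphasis_weight(1))  # 1.1  -- (tuxedo)
print(emphasis_weight(2))  # 1.21 -- ((tuxedo)), same as (tuxedo:1.21)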
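
Because generation parameters travel inside the image file itself, they can be recovered outside the UI too. A minimal sketch using Pillow, assuming the parameters are stored in a PNG text chunk keyed "parameters" (the filename below is hypothetical):

# Hedged sketch: read the parameters the web UI embeds in a generated PNG.
# Assumptions: the text chunk key is "parameters"; the filename is hypothetical.
from PIL import Image

def read_generation_parameters(path: str) -> str | None:
    with Image.open(path) as img:
        return img.info.get("parameters")  # PNG text chunks land in img.info

params = read_generation_parameters("00000-1234567890.png")
print(params)  # prompt, negative prompt, sampler, seed, etc., or None if absent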
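
For the API feature, the server exposes REST endpoints when launched with the --api commandline flag, with interactive documentation served at /docs. A hedged sketch of a txt2img request, assuming the commonly used /sdapi/v1/txt2img endpoint and payload keys (check your instance's /docs page for the live schema):

# Hedged sketch of a txt2img call against a locally running web UI (--api).
# The endpoint path and payload keys are assumptions based on common usage;
# consult http://127.0.0.1:7860/docs on your instance for the live schema.
import base64
import requests

payload = {"prompt": "a man in a tuxedo", "steps": 20}
resp = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
resp.raise_for_status()
image_b64 = resp.json()["images"][0]  # images are returned base64-encoded
with open("output.png", "wb") as f:
    f.write(base64.b64decode(image_b64))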

Installation and Running

Make sure the required dependencies are met and follow the instructions available for both NVidia (recommended) and AMD GPUs.

Alternatively, use online services (like Google Colab).

Automatic Installation on Windows

  1. Install Python 3.10.6 (newer versions of Python do not support torch), checking "Add Python to PATH".
  2. Install git.
  3. Download the stable-diffusion-webui repository, for example by running git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git.
  4. Run webui-user.bat from Windows Explorer as a normal, non-administrator user.

Automatic Installation on Linux

  1. Install the dependencies:
# Debian-based:
sudo apt install wget git python3 python3-venv
# Red Hat-based:
sudo dnf install wget git python3
# Arch-based:
sudo pacman -S wget git python3
  2. Navigate to the directory you would like the webui to be installed in and execute the following command:
bash <(wget -qO- https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh)
  3. Run webui.sh.
  4. Check webui-user.sh for options.

Installation on Apple Silicon

Find the instructions on the project's wiki.

Contributing

Here's how to add code to this repo: see the Contributing guide on the project's wiki.

Documentation

The documentation was moved from this README over to the project's wiki.

Credits

Licenses for borrowed code can be found in the Settings -> Licenses screen, and also in the html/licenses.html file.